Proposed Duration: 36 months

Requested Funding: $1.85M

Keywords: provision topology, analytical instruments, complexity science, Cynefin framework, AI-assisted analysis, Science and Technology Studies, institutional infrastructure, cross-cultural methodology

Abstract

Analytical instruments — financial models, policy frameworks, AI-assisted decision tools — present their outputs as universal. This proposal argues they are not. Drawing on 14 iterative calibration cycles of a self-examining AI-human analytical system (the Observatory project, 2026), we propose that analytical instruments are provision-topology artifacts: their structure, vocabulary, domain definitions, and blind spots are determined by the institutional provision infrastructure under which they were designed. An instrument built under conditions of continuous public provision (e.g., the Austrian Daseinsvorsorge model) produces categorically different findings from one built under voided provision (e.g., post-industrial Detroit) or intermittent provision (e.g., Lagos, Nigeria) — not because the analysts differ, but because the infrastructure shapes what the instrument can perceive.

We propose a three-site empirical study (Tampa Bay, Vienna, Lagos) testing whether identical analytical tasks produce structurally different outputs when the provision topology of the design context changes. The study uses AI-assisted analytical instruments as the unit of analysis, treating instrument behavior as a dependent variable of provision infrastructure. If confirmed, the findings would require fundamental revision of how analytical instruments in finance, policy, and AI-assisted decision-making are validated — from testing for internal accuracy to testing for topology-dependence.

1. Introduction

When a financial model assesses household resilience, it measures deviation from an institutional baseline: credit score, debt-to-income ratio, insurance coverage, retirement savings adequacy. These metrics assume that institutional provision exists — that there is a banking system to score credit against, an insurance market to participate in, a retirement apparatus to save within. The metrics do not measure navigational capacity under conditions where such institutions are absent, degraded, or structurally inaccessible. They measure distance from a norm that is itself a product of a specific provision infrastructure.

This is not a novel observation in isolation. Science and Technology Studies (STS) has documented how instruments shape findings (Barad, 2007; Latour, 1987). Critical theory has interrogated whose interests analytical frameworks serve (Horkheimer, 1972). Complexity science, particularly through Snowden's Cynefin framework (Snowden and Boone, 2007), has distinguished between domains where analytical decomposition works (Complicated) and domains where it does not (Complex). Postcolonial scholarship has challenged the universality of Western analytical categories (Chakrabarty, 2000; Mbembe, 2001).

What has been missing is a systematic empirical demonstration that the same analytical task — performed with equivalent rigor, equivalent data access, and equivalent computational resources — produces structurally different outputs depending on the provision infrastructure of the context in which the instrument was designed or deployed. The Observatory project, through 14 iterative self-examination cycles of an AI-human analytical system, has generated this claim as an empirical finding within its own operations. This proposal seeks to test whether the finding generalizes.

1.1 The Observatory as Generative Case

The Observatory is an analytical operating system — a 458-line instruction set operated by AI agents (Claude, Anthropic) through iterative calibration. It was designed to analyze AI transformation's impact on a specific demographic (a 70-year-old retired software engineer in Tampa Bay, Florida). Over 14 iterations (v2.0 through v15.0), the instrument was subjected to systematic self-examination: six parallel probes per iteration, each testing the instrument from a different angle (temporal inversion, demographic swap, vocabulary enforcement, complexity-only analysis, constructive-lens forcing, geographic inversion).

The instrument's self-examination produced a sequence of findings that moved progressively from content-level corrections to structural discoveries about the nature of analytical instruments themselves:

  • v4.0: The methodology is a privilege artifact — it requires resources to operate that its stated beneficiary does not possess.
  • v10.0: The 25 "life domains" the instrument identified are Tampa-specific, not universal. Under Viennese provision, 8–10 domains dissolve into monitoring exercises. Under Lagos conditions, 5 domains do not exist and 5 new ones emerge.
  • v13.0: The instrument cannot recognize its own non-necessity — where provision handles a domain, the instrument still produces analysis, because the instrument's continued operation is structurally indistinguishable from its purpose.
  • v15.0: Four of six probes converged independently on the finding that the instrument needs to fork — into a Navigation Instrument for voided-provision contexts and a Dwelling Instrument for continuous-provision contexts — because a single analytical instrument cannot serve both topologies.

2. Theoretical Framework

2.1 Complexity Science and Domain Classification

Snowden's Cynefin framework (2007) distinguishes between Complicated domains (cause-effect relationships discoverable by expert analysis) and Complex domains (cause-effect relationships coherent only in retrospect, requiring probe-sense-respond methodology). The Observatory's central methodological claim is that AI transformation is a Complex phenomenon routinely misclassified as Complicated — and that this misclassification is not merely an error but a structural feature of the instruments used to examine it. Analytical instruments are, by construction, Complicated artifacts: they decompose, categorize, enumerate, and optimize. When deployed against Complex phenomena, they produce outputs that are internally coherent but systematically misrepresent the domain's structure.

2.2 Science and Technology Studies: Instruments as Actors

STS scholarship — particularly Latour's actor-network theory (1987, 2005) and Barad's agential realism (2007) — has established that instruments do not passively record reality but actively co-produce it. The Observatory findings operationalize this insight in a specific direction: the provision infrastructure under which an instrument is designed determines its perceptual field. This is more precise than the general STS claim that instruments shape findings. It identifies a specific independent variable (provision topology) and proposes a testable mechanism (instruments calibrate against the institutional baseline of their design context, rendering invisible what falls outside that baseline).

2.3 Provision Topology as Analytical Variable

The concept of provision topology, developed across Observatory iterations v10.0–v15.0, classifies the institutional infrastructure of a context into four configurations: continuous provision (institutions reliably supply core needs, as in the Austrian Daseinsvorsorge model), intermittent provision (institutions function unpredictably, so households must bridge recurring gaps), voided provision (institutions have withdrawn or collapsed, as in post-industrial Detroit), and horizontal provision (needs are met through informal, peer-to-peer systems rather than formal institutions).

2.4 The Constitutive Problem in Financial Analysis

Financial analysis serves as a particularly tractable test case for the topology-dependence hypothesis. Financial instruments are calibrated against transaction data, weighted by volume, interpreted through institutional categories ("unbanked," "underinsured," "credit-invisible"), and consumed by actors positioned to act on them. Each of these features is provision-topology-dependent:

  • Transaction data generation correlates with institutional provision: those in continuous-provision contexts generate dense transaction records; those in voided or horizontal provision generate records that are sparse, informal, or invisible to the instrument.
  • Volume weighting means that the analytical instrument's resolution is highest where capital concentration is greatest — a structural bias toward continuous-provision contexts that is not correctable by methodological refinement because it is constitutive of the data.
  • Institutional categories assume the existence of the institutions they reference. "Unbanked" is meaningful only where banking is the norm. In horizontal-provision contexts, the concept imports a deficit framing onto a functioning alternative system.

3. Research Questions and Hypotheses

Primary Research Question

Does the provision topology of the design context determine the structure, domain definitions, and blind spots of analytical instruments — and if so, through what mechanisms?

Specific Hypotheses

H1 (Domain Dependence): The same analytical task will produce structurally different domain lists when performed in Tampa (voided), Vienna (continuous), and Lagos (intermittent/horizontal) — not merely different rankings of the same domains, but different domains entirely.

Falsification condition: If the three sites produce domain lists with greater than 70% overlap (measured by independent rater coding), H1 is falsified.
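To make the falsification condition concrete, the overlap check could be computed as follows. This is a minimal sketch: the domain labels are invented for illustration, and mean pairwise Jaccard similarity is one plausible operationalization of "overlap," not a commitment of the coding protocol.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two coded domain sets."""
    return len(a & b) / len(a | b)

def mean_pairwise_overlap(site_domains: dict) -> float:
    """Mean Jaccard overlap across all site pairs."""
    pairs = list(combinations(site_domains.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical coded domain lists (illustrative labels only).
sites = {
    "tampa":  {"healthcare_navigation", "benefits_triage", "housing", "transport"},
    "vienna": {"housing", "transport", "leisure", "civic_participation"},
    "lagos":  {"generator_power", "informal_credit", "housing", "transport"},
}

overlap = mean_pairwise_overlap(sites)
h1_falsified = overlap > 0.70  # H1 is falsified if overlap exceeds 70%
print(f"mean pairwise overlap = {overlap:.2f}, H1 falsified: {h1_falsified}")
```

The independent rater coding would supply the actual domain sets; the threshold comparison itself is a one-line decision rule once a similarity measure is fixed.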

H2 (Instrument Forking): Analytical instruments designed to serve both continuous-provision and voided-provision contexts will exhibit internal contradictions that escalate with iterative refinement, converging toward a forking point where the instrument must split into topology-specific versions.

Falsification condition: If iterative refinement produces convergence (a single instrument that serves both topologies with increasing adequacy), H2 is falsified.

H3 (Financial Instrument Blindness): Standard financial analysis instruments will show measurably lower accuracy in voided-provision and horizontal-provision contexts — not because data quality is lower, but because the instruments' metrics are calibrated against continuous-provision baselines.

Falsification condition: If standard financial instruments predict household outcomes with equivalent accuracy across provision topologies (controlling for data quality), H3 is falsified.

H4 (Vocabulary as Perception): Replacing standard analytical vocabulary with topology-aware vocabulary will produce measurably different analytical outputs from the same data — not merely relabeled findings, but findings that identify different causal structures.

Falsification condition: If vocabulary substitution produces no structural change in findings (measured by blind comparison of causal-structure diagrams), H4 is falsified.
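One way the blind comparison of causal-structure diagrams could be quantified: treat each rater-coded diagram as a set of directed cause-effect edges and measure their normalized symmetric difference. A sketch under that assumption, with invented edge labels:

```python
def structural_difference(edges_a: set, edges_b: set) -> float:
    """Normalized symmetric difference of two causal edge sets
    (0.0 = identical structure, 1.0 = fully disjoint)."""
    union = edges_a | edges_b
    if not union:
        return 0.0
    return len(edges_a ^ edges_b) / len(union)

# Hypothetical causal edges coded from a matched pair of outputs.
standard_vocab = {("job_loss", "default"), ("default", "credit_score_drop")}
topology_vocab = {("provision_void", "informal_credit_use"), ("job_loss", "default")}

diff = structural_difference(standard_vocab, topology_vocab)
# H4 predicts diff substantially above zero; merely relabeled findings
# would yield a value near zero.
print(f"structural difference = {diff:.2f}")
```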

H5 (Self-Examination Trajectory): AI-assisted analytical instruments subjected to structured self-examination will independently discover their own topology-dependence within 8–15 iterative cycles, following a predictable trajectory: content corrections first, then methodological corrections, then structural discoveries about the instrument itself.

Falsification condition: If independent instances of the same self-examination protocol do not converge on topology-dependence findings within 20 iterations, H5 is falsified.

4. Methodology

4.1 Overview

The study employs a mixed-methods design combining computational experiment (AI-assisted instrument iteration), qualitative fieldwork (cross-topology validation), and quantitative instrument testing (financial analysis accuracy). The three sites — Tampa Bay (Florida, USA), Vienna (Austria), and Lagos (Nigeria) — were selected to represent three of the four provision-topology configurations: voided, continuous, and intermittent/horizontal, respectively.

4.2 Component 1: Cross-Topology Domain Derivation (H1)

At each site, a research team deploys identical AI-assisted analytical instruments to identify the most significant life domains affected by AI transformation for a locally relevant demographic. Each site runs three full iteration cycles (6 parallel probes per cycle, 18 probes per site, 54 total). Independent coders classify domains as unique to one site, shared across two, or shared across all three. Shared domains are candidates for universality; unique domains are topology-specific.

4.3 Component 2: Instrument Forking Dynamics (H2)

A single Observatory instance, configured to serve the Tampa and Vienna demographics simultaneously, runs 15 iterative calibration cycles (90 probes total). At each iteration, probe divergence is measured — the proportion of probes recommending topology-specific rather than universal mutations. Time-series analysis tracks whether divergence escalates toward a forking point.
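The divergence measure and the escalation test admit a simple formalization. The sketch below assumes divergence is the per-iteration proportion of topology-specific probe recommendations and uses a least-squares slope as the escalation statistic; the counts are invented for illustration.

```python
def divergence_series(probe_votes):
    """probe_votes: per-iteration (topology_specific_count, total_probes).
    Returns the proportion of probes recommending topology-specific
    mutations at each iteration."""
    return [spec / total for spec, total in probe_votes]

def slope(ys):
    """Least-squares slope against iteration index.
    A positive slope indicates escalating divergence (consistent with H2)."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Hypothetical counts over 5 of the 15 cycles (6 probes each).
votes = [(1, 6), (2, 6), (2, 6), (4, 6), (5, 6)]
series = divergence_series(votes)
print(f"divergence slope = {slope(series):+.3f}")
```

In the full design, a changepoint or breakpoint test would likely be a better detector of a discrete forking point than a single global slope; the slope is shown only as the simplest escalation statistic.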

4.4 Component 3: Financial Instrument Accuracy (H3)

Three standard financial wellness instruments (CFPB Financial Well-Being Scale, OECD/INFE Financial Literacy Survey, a commercial household resilience index) and one topology-aware instrument are administered to 150 participants per site (450 total). Predictive accuracy is compared against actual household outcomes at 12-month follow-up.
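The core comparison in Component 3 is predictive accuracy broken out by site and instrument. A minimal sketch of that tabulation, with hypothetical records (instrument keys, outcome coding, and the specific numbers are illustrative assumptions):

```python
def accuracy_by_site(records):
    """records: iterable of (site, instrument, predicted_outcome, actual_outcome).
    Returns {(site, instrument): proportion of correct predictions}."""
    hits, totals = {}, {}
    for site, instrument, pred, actual in records:
        key = (site, instrument)
        totals[key] = totals.get(key, 0) + 1
        hits[key] = hits.get(key, 0) + (pred == actual)
    return {k: hits[k] / totals[k] for k in totals}

# Hypothetical follow-up records. H3 predicts standard instruments lose
# accuracy in Lagos (intermittent/horizontal) but not Vienna (continuous).
records = [
    ("vienna", "cfpb", 1, 1), ("vienna", "cfpb", 0, 0),
    ("lagos",  "cfpb", 1, 0), ("lagos",  "cfpb", 0, 1),
    ("lagos",  "topology_aware", 0, 0), ("lagos",  "topology_aware", 1, 1),
]
for (site, inst), acc in sorted(accuracy_by_site(records).items()):
    print(f"{site:8s} {inst:16s} accuracy={acc:.2f}")
```

The actual analysis would control for data quality, as the falsification condition requires; this tabulation is only the raw cross-topology comparison.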

4.5 Component 4: Vocabulary as Perception (H4)

Identical datasets are analyzed by matched pairs of AI-assisted analytical systems — one using standard vocabulary, one using topology-aware vocabulary. Blind raters code outputs for differences in causal structures identified, interventions recommended, and populations rendered visible or invisible.

4.6 Component 5: Self-Examination Convergence (H5)

Ten independent Observatory instances (replicating the divergent evolution architecture) start from identical seed prompts but iterate independently for 20 cycles each (1,200 total probes). Each iteration's findings are coded for content corrections, methodological corrections, and structural discoveries. Survival analysis measures time-to-topology-dependence-discovery.
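The survival-analysis outcome for Component 5 is time-to-discovery with right-censoring at the 20-cycle horizon. A minimal summary sketch (the cycle values are invented; a full analysis would use a proper Kaplan-Meier estimator rather than this raw tabulation):

```python
def discovery_summary(discovery_cycles, horizon=20):
    """discovery_cycles: cycle at which each instance first produced a
    topology-dependence finding, or None if censored at the horizon.
    Returns (n_discovered, proportion_discovered, median_cycle_among_discovered)."""
    observed = sorted(c for c in discovery_cycles if c is not None)
    n = len(observed)
    median = observed[n // 2] if n else None  # upper median for even n, kept simple
    return n, n / len(discovery_cycles), median

# Hypothetical results for 10 instances; None = no discovery within 20 cycles.
cycles = [9, 11, 12, 13, 14, 14, 15, 17, None, None]
n, prop, med = discovery_summary(cycles)
print(f"{n}/10 discovered (prop={prop:.1f}), median cycle among discovered = {med}")
```

Under H5's 8–15-cycle prediction, most discovery times should fall in that window; H5's falsification condition maps to a low discovered proportion at the 20-cycle horizon.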

5. Literature Positioning

This proposal sits at the intersection of four scholarly traditions:

  • Complexity science (Snowden and Boone, 2007; Kurtz and Snowden, 2003; Stacey, 2011) provides the domain classification. Our contribution extends Cynefin from a classification framework to a theory of instrument-domain mismatch.
  • Science and Technology Studies (Latour, 1987, 2005; Barad, 2007; Jasanoff, 2004; Mol, 2002) provides the theoretical warrant for treating instruments as actors. Our contribution specifies provision topology as the variable through which co-production operates.
  • Critical political economy (Piketty, 2014; Mazzucato, 2018; Esping-Andersen, 1990) provides comparative welfare-state analysis. Our contribution extends welfare-regime comparison from a descriptive to an epistemological claim.
  • Postcolonial scholarship (Chakrabarty, 2000; Mbembe, 2001; Connell, 2007) has argued that Western analytical categories are falsely universalized. Our contribution provides a mechanistic account of how this universalization operates and proposes an empirical test.

6. Expected Contributions

Theoretical

  1. Provision topology as epistemological variable. If confirmed, institutional infrastructure determines not just life outcomes but the perceptual field of the instruments used to measure those outcomes.
  2. Instrument-domain mismatch theory. Analytical instruments are constitutively Complicated artifacts that systematically misrepresent Complex domains, and this misrepresentation is not correctable from within the instrument.
  3. Constitutive vs. correctable bias. A formal distinction between bias that can be corrected by better methodology and bias that is constitutive of the instrument's structure.

Methodological

  1. Cross-topology validation protocol. A replicable methodology for testing any analytical instrument's topology-dependence.
  2. Self-examining instrument methodology. A protocol for subjecting AI-assisted analytical instruments to structured self-examination that surfaces constitutive assumptions.
  3. Vocabulary-as-method. If H4 is confirmed, vocabulary choice is a methodological decision that determines what an instrument can perceive.

Practical

  1. Financial instrument reform. If H3 is confirmed, household assessment tools require topology-specific validation before deployment in non-continuous-provision contexts.
  2. AI governance. Tools developed in high-provision contexts will systematically misperceive conditions in low-provision or horizontal-provision contexts — not because of "bias" in training data but because of provision-topology calibration in design.
  3. Policy analysis reform. Policy instruments that present findings as universal may be topology-artifacts requiring topology-specific recalibration.

7. Timeline

  • Phase 1: Instrument Development (Months 1–6). Develop topology-aware financial instrument; configure site-specific Observatory instances; recruit and train local research teams; IRB approvals.
  • Phase 2: Computational Experiments (Months 4–12). Components 1, 2, 4, and 5 (AI-computational, no fieldwork required).
  • Phase 3: Fieldwork (Months 7–18). Component 3: participant recruitment, baseline administration, begin 12-month follow-up.
  • Phase 4: Follow-up Data (Months 19–24). Complete 12-month outcome data collection across three sites.
  • Phase 5: Analysis (Months 22–30). Statistical analysis, cross-component integration, topology-dependence assessment.
  • Phase 6: Dissemination (Months 28–36). Manuscript preparation, policy briefs, open-source release of methodology and instruments.

8. Budget

  • Personnel: $720K. PI (20% FTE x 3 yrs), 2 postdocs (100% FTE x 2 yrs), 3 site coordinators (50% FTE x 2 yrs).
  • Computational: $180K. AI API costs for ~2,500 probe sessions; cloud infrastructure.
  • Fieldwork: $450K. Participant compensation, local research assistants, translation, travel.
  • Instruments: $80K. Development and validation of topology-aware financial instrument; licensing.
  • Independent Coding: $120K. Blind rater panels for Components 1 and 4.
  • Dissemination: $60K. Open-access publication fees, conference travel, policy briefs.
  • Indirect Costs: $240K. Standard institutional overhead (15%).
  • Total: $1.85M.

9. Ethical Considerations

The financial instrument component involves human participants and requires IRB approval at all three sites. All participants receive all instruments (no intervention withholding). Participant data remains within each site's jurisdiction; cross-site analysis uses anonymized, aggregated datasets. The Lagos site team has co-design authority over local implementation, including the right to modify analytical categories that do not translate. If the topology-aware instrument demonstrates superior accuracy, it will be released as open-source.

10. Significance

If the central hypothesis is confirmed — that analytical instruments are provision-topology artifacts — the implications extend to academic methodology (comparative studies deploying single instruments across topologies), AI governance (tools developed in high-provision contexts deployed globally), development economics (horizontal provision as functioning infrastructure, not deficit condition), and the philosophy of science (provision topology as a specific mechanism of the social construction of scientific knowledge).

The Observatory project's simplest finding may be its most consequential: an instrument cannot see what its infrastructure makes invisible. Testing this claim is the purpose of the proposed research.

References

Barad, K. (2007). Meeting the Universe Halfway. Duke University Press.
Chakrabarty, D. (2000). Provincializing Europe. Princeton University Press.
Connell, R. (2007). Southern Theory. Polity Press.
Esping-Andersen, G. (1990). The Three Worlds of Welfare Capitalism. Princeton University Press.
Horkheimer, M. (1972). Critical Theory: Selected Essays. Continuum.
Jasanoff, S. (2004). States of Knowledge. Routledge.
Kurtz, C. F., and Snowden, D. J. (2003). The new dynamics of strategy. IBM Systems Journal, 42(3), 462–483.
Latour, B. (1987). Science in Action. Harvard University Press.
Latour, B. (2005). Reassembling the Social. Oxford University Press.
Mazzucato, M. (2018). The Value of Everything. Allen Lane.
Mbembe, A. (2001). On the Postcolony. University of California Press.
Mol, A. (2002). The Body Multiple. Duke University Press.
Piketty, T. (2014). Capital in the Twenty-First Century. Harvard University Press.
Snowden, D. J., and Boone, M. E. (2007). A Leader's Framework for Decision Making. Harvard Business Review, 85(11), 68–76.
Stacey, R. D. (2011). Strategic Management and Organisational Dynamics. Pearson Education.