A fractal is a structure that repeats its fundamental pattern at every scale of observation. Truth exhibits this same property: examine a genuine dataset at the macro level or the micro level, and the underlying signal remains self-consistent.
Ref: B. Mandelbrot, The Fractal Geometry of Nature (1982); E. Peters, Fractal Market Analysis (1994); Coherence-Entropy Conservation (STAT Framework)
If truth has structure, then it can be observed, measured, and tested. T2SAIM's operational premise rests on three forensic axes, the 3S.
The 3S axes are operationally measured through the Conspiracy Index (KE) matrix, which tests three structural columns of any narrative: chronological consistency (time), relational naturalness (network topology), and linguistic entropy (synthetic language score). If a narrative fractures on any column, the structure is flagged as unsafe.
Ref: Scale invariance (Mandelbrot, 1982); Network topology (Barabási, 2002); Conspiracy Index — KE Matrix (T2SAIM)
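The three-column test can be illustrated in code. The actual KE matrix's scoring functions and thresholds are not public, so the axis names, score ranges, and threshold below are assumptions; this is a minimal sketch of the "fracture on any column" rule, not the real implementation.

```python
from dataclasses import dataclass

# Hypothetical per-axis scores in [0, 1]; names and the 0.5 threshold
# are illustrative, not part of the published KE matrix.
@dataclass
class KEScores:
    time_consistency: float      # chronological consistency
    network_naturalness: float   # relational naturalness (network topology)
    linguistic_entropy: float    # synthetic-language score

def ke_flag(scores: KEScores, threshold: float = 0.5) -> bool:
    """Flag the narrative as unsafe if ANY column fractures,
    i.e. falls below the assumed threshold."""
    columns = (scores.time_consistency,
               scores.network_naturalness,
               scores.linguistic_entropy)
    return any(score < threshold for score in columns)

# A narrative consistent on two axes but fractured on one is still flagged.
print(ke_flag(KEScores(0.9, 0.8, 0.3)))  # True
```

The design point is the `any()`: a single fractured column is sufficient to flag the whole structure, regardless of how strong the other two axes look.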
Where truth is fractal, deception is Euclidean: straight lines, calculated angles, artificial forms. A lie requires an architect. It is designed to create a specific perspective for a specific audience.
Ref: Econophysics of deception networks; Information entropy in forensic analysis; Graph theory (Erdős–Rényi)
T2SAIM separates analysis into two cognitive stages. Stage 1 scans the full data field for statistical, structural, and behavioural irregularities. Stage 2 subjects each candidate to strict evidentiary protocol.
Every finding is classified: verified, assumed, or unverifiable. No advocacy. No narrative. Only the epistemic status of each claim, traced to its source.
All forensic findings are sealed with cryptographic timestamps (SHA-256) and mapped against the Triple-Witness Seal protocol to achieve Daubert-compliant evidentiary standards. Every claim is traceable from raw data to final classification.
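The sealing step can be sketched as follows. The Triple-Witness Seal protocol itself is not public, so this only illustrates the cryptographic-timestamp half: a finding is given a UTC timestamp and a SHA-256 digest over a canonical serialisation, making any later alteration detectable. Field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_finding(finding: dict) -> dict:
    """Attach a UTC timestamp and a SHA-256 digest to a finding.
    A minimal sketch of a cryptographic timestamp; the actual
    Triple-Witness Seal protocol is not reproduced here."""
    sealed = dict(finding)
    sealed["timestamp"] = datetime.now(timezone.utc).isoformat()
    # Canonical serialisation (sorted keys) so the digest is reproducible.
    payload = json.dumps(sealed, sort_keys=True).encode("utf-8")
    sealed["sha256"] = hashlib.sha256(payload).hexdigest()
    return sealed

record = seal_finding({"claim": "Transfer X preceded announcement Y",
                       "status": "verified"})
print(record["sha256"])  # 64-character hex digest
```

Because the digest covers the timestamp and the claim together, re-dating or rewording the finding after the fact produces a different hash.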
T2SAIM does not produce recommendations. It produces the evidentiary conditions under which a competent decision-maker can act with confidence. The decision remains yours — the clarity is ours to provide.
We are the Verity.
Most analytical tools assume the data is honest — noisy perhaps, incomplete certainly, but not strategically deceptive. That assumption fails the moment a market is being manipulated, a document is being weaponised, or an institution is being misled by its own internal feedback loops.
T2SAIM was built for the cases where the standard assumption breaks. Where the noise is not random but engineered. Where the question is not "what does the data say" but "what is the data trying to make us believe".
T2SAIM separates analysis into two distinct cognitive operations.
Stage 1: the full data field is scanned for statistical, structural, and behavioural irregularities. This stage is deliberately permissive; false positives are expected and welcome.
Stage 2: each candidate is examined under strict evidentiary protocol. Every claim is sorted: verified, assumed, or unverifiable. Only findings that survive this filter are reported.
- Verified: established with primary evidence
- Assumed: working assumption with stated provenance
- Unverifiable: open question, named explicitly
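The three-way classification can be sketched as a typed data structure: every claim carries an explicit epistemic status and a stated source, and nothing is left untyped. The field names and example claims below are illustrative, not drawn from any real engagement.

```python
from dataclasses import dataclass
from enum import Enum

class EpistemicStatus(Enum):
    VERIFIED = "verified"          # established with primary evidence
    ASSUMED = "assumed"            # working assumption with stated provenance
    UNVERIFIABLE = "unverifiable"  # open question, named explicitly

@dataclass
class Claim:
    text: str
    status: EpistemicStatus
    source: str  # provenance: every claim is traced to its source

# Illustrative claims only.
claims = [
    Claim("Payment A cleared on 3 March", EpistemicStatus.VERIFIED, "bank ledger"),
    Claim("Actor B authored the memo", EpistemicStatus.ASSUMED, "stylometric match"),
    Claim("Motive of actor C", EpistemicStatus.UNVERIFIABLE, "no primary source"),
]

for c in claims:
    print(f"[{c.status.value}] {c.text} (source: {c.source})")
```

Making the status an enum rather than free text means a claim cannot be reported without one of the three named epistemic labels.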
Three application verticals where the cost of false confidence is highest.
- Detect coordinated manipulation in financial data, payments, and trading flows that conventional pipelines miss because the adversary adapts faster than the model.
- Surface manipulation, omission, and rhetorical distortion in large legal corpora, built around evidentiary standards, not engagement metrics.
- Apply structured anomaly detection to geopolitical, regulatory, and open-source intelligence streams where both the signal and the noise are strategically authored.
Generative systems have made plausible fabrication cheap. Adversarial actors have learned to craft data, documents, and narratives that pass automated checks. Regulators — particularly under the EU AI Act and emerging UK frameworks — now expect institutions to demonstrate not just outputs but defensible reasoning.
T2SAIM is built for that requirement. Every finding it produces is traceable to its epistemic source. Every claim is tagged. Every uncertainty is named.
The T2SAIM technical corpus comprises the methodology specification, the formal claim set (19 independent + 2 dependent), and the application-vertical extensions. A redacted summary is available on request.
T2SAIM was developed by Tarkan Bulan, an independent researcher and analyst with over a decade of work spanning epistemic security, forensic linguistics, macroeconomic intelligence, and the philosophy of evidence.
The framework emerged from a recurring observation across very different domains: that the same class of error kept producing the same class of failure, and that the error was not technical but epistemic.
A methodology that lives only in one head is not a methodology — it is a habit. The roadmap for T2SAIM Ltd is the institutionalisation of the framework: documentation, formal training pathways, and a senior team capable of running engagements without founder dependency.
A UK-registered private limited company, incorporated to develop, license, and deploy the T2SAIM methodology across regulated and high-stakes domains.
| Legal name | T2SAIM Ltd |
| Jurisdiction | England and Wales |
| Registered office | [To be confirmed] |
For methodology enquiries, pilot conversations, research access, or investor relations.