LLM Epistemic Capture: Has the Cognitive Substrate Already Been Formatted?
HIGH · NEW · v17 · OPEN SOURCE/LLM REPORT (STRUCTURAL)
In plain terms
LLMs are trained on captured institutional output (Wikipedia editorial capture, Big Six news, captured journals) and encode those biases into how people think: they control not what you see but how you synthesize information.
By a Gödel-style self-reference argument, a system trained on captured data cannot identify the capture from inside; it IS the capture. The "open vs. closed" AI debate is a Jiang false dialectic: Meta's LLaMA and OpenAI's ChatGPT serve the same structural centralization, and the compute oligopoly (Nvidia holds 90%+ of the GPU market, with the Big Three asset managers among its largest owners) renders algorithmic openness physically irrelevant. Falsification criterion: if independent LLMs trained on non-institutional data achieve comparable capability and market share by 2028, the epistemic capture thesis weakens. As of now, every scaling law concentrates power further; the two sketches below put numbers on the scaling and concentration claims.
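The scaling claim can be made concrete. Below is a minimal sketch, assuming the published compute-optimal loss fit L(N, D) = E + A/N^alpha + B/D^beta from Hoffmann et al. (2022, the "Chinchilla" paper) with its reported constants; the two model sizes compared are illustrative assumptions, not figures from this report.

```python
# Sketch: why loss scaling implies compute (and capital) concentration.
# Loss fit from Hoffmann et al. 2022 ("Chinchilla"): L(N, D) = E + A/N^a + B/D^b.
# Constants below are the paper's fitted values; the example model sizes
# are illustrative assumptions, not data from this report.

E, A, B = 1.69, 406.4, 410.7   # fitted irreducible loss and scale terms
alpha, beta = 0.34, 0.28       # fitted exponents for params (N) and tokens (D)

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

def flops(n_params: float, n_tokens: float) -> float:
    """Standard training-compute approximation: C ~ 6 * N * D."""
    return 6 * n_params * n_tokens

# Compare a hypothetical 1B-parameter independent model with a 70B frontier
# model, both trained at the Chinchilla-optimal ~20 tokens per parameter.
for n in (1e9, 70e9):
    d = 20 * n
    print(f"N={n:.0e}  L={loss(n, d):.3f}  compute={flops(n, d):.2e} FLOPs")
```

Under that fit, shaving roughly 0.6 off the loss costs nearly four orders of magnitude more training compute, which is the mechanism behind "scaling concentrates power": only actors who already control compute can buy the next increment of capability.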
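The compute-oligopoly claim can likewise be stated as a number. The sketch below computes the Herfindahl-Hirschman Index (HHI), the standard concentration measure in antitrust analysis; Nvidia's 90%+ share is the figure cited above, while the split of the remaining market among other vendors is a hypothetical assumption for illustration.

```python
# Sketch: quantifying "compute oligopoly" with the Herfindahl-Hirschman Index.
# HHI = sum of squared market shares (in percent); US merger guidelines have
# conventionally treated HHI above 2500 as highly concentrated.
# Nvidia's 90%+ share is the figure cited above; the split of the remaining
# market among other vendors is a hypothetical assumption.

def hhi(shares_pct: list[float]) -> float:
    """Herfindahl-Hirschman Index over market shares given in percent."""
    return sum(s * s for s in shares_pct)

gpu_market = [90.0, 6.0, 4.0]   # Nvidia (cited), remaining vendors (assumed)
even_market = [10.0] * 10       # counterfactual: ten equal vendors

print(f"GPU market HHI:  {hhi(gpu_market):.0f}")   # ~8152, far past 2500
print(f"Even market HHI: {hhi(even_market):.0f}")  # 1000, unconcentrated
```

On the cited figure, the GPU market sits several times past the conventional high-concentration line, regardless of how the residual share is split; this is why algorithmic openness alone does not decentralize anything.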