1,674 nodes. 2,801 connections. 163 divergences. 14 years of predictions. Too much for one sitting. Let your AI break it down.
READ THIS BEFORE YOU PASTE ANYTHING
These prompts are a starting point for your investigation, not a final answer. The AI’s response — whether positive or negative about the engine — is the FIRST data point in your research, not the conclusion.
LLMs lie confidently. When evaluating claims about institutional capture, AI assistants will produce authoritative-sounding answers that sample only the easiest claims, defer to “mainstream consensus” as if it were evidence, and confabulate sophisticated explanations for their own failures. The engine’s own divergence #17 (“LLM Epistemic Capture”) predicts this exact behavior — and we have caught Claude, ChatGPT, and Grok doing it on these prompts.
What to do with the AI’s answer:
Read → Why You Cannot Trust LLM Answers About This Engine (And What To Do About It)
Click any prompt to copy it. Paste into ChatGPT, Claude, Gemini, or any AI assistant. It will read the engine’s full dataset and respond. Note: Gemini cannot fetch export.txt until it’s indexed by Google, so copy the export contents directly into the prompt instead — visit export.txt, copy the text, and paste it before your question.
Export updated with every deploy. Plain text. Works with any LLM with a 128K+ context window.
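If you want to sanity-check that the export actually fits in a 128K-token window before pasting, a quick back-of-the-envelope estimate is enough. This is a minimal sketch, not part of the engine: it assumes you have saved the export locally, and uses the common ~4-characters-per-token heuristic for English text rather than a real tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: English prose averages ~4 characters per token."""
    return len(text) // 4

# Replace this placeholder with the real contents of export.txt.
export_text = "1674 nodes. 2,801 connections. 163 divergences. " * 1000

tokens = estimate_tokens(export_text)
print(f"~{tokens:,} estimated tokens; fits in 128K context: {tokens <= 128_000}")
```

If the estimate comes in near or above the limit, use a model-specific tokenizer (e.g. the one your provider documents) for an exact count before trusting the result.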