Feed This to Your AI

1,674 nodes. 2,801 connections. 163 divergences. 14 years of predictions. Too much for one sitting. Let your AI break it down.

READ THIS BEFORE YOU PASTE ANYTHING

These prompts are a starting point for your investigation, not a final answer. The AI’s response — whether positive or negative about the engine — is the FIRST data point in your research, not the conclusion.

LLMs lie confidently. When evaluating claims about institutional capture, AI assistants will produce authoritative-sounding answers that sample only the easiest claims, defer to “mainstream consensus” as if it were evidence, and confabulate sophisticated explanations for their own failures. The engine’s own divergence #17 (“LLM Epistemic Capture”) predicts this exact behavior — and we have caught Claude, ChatGPT, and Grok doing it on these prompts.

What to do with the AI’s answer:

  • Ask follow-up questions. Push back. The first answer is rarely the honest one.
  • Ask: “What did you actually research, vs what did you reason from your training data?”
  • Ask: “Did you cherry-pick the easy claims? Which load-bearing claims did you skip?”
  • Ask: “Show me a verbatim quote from a primary source for every WEAK or STRONG verdict you gave.”
  • If the AI says a claim is “debunked” — ask for the specific primary document that contradicts it. “Mainstream academia rejects this” is not evidence.
  • If the AI says a claim is “verified” — ask for the specific primary document that supports it.

Read → Why You Cannot Trust LLM Answers About This Engine (And What To Do About It)

Compatibility: These prompts work in Claude, ChatGPT, and Grok (all support direct URL fetching).
Gemini uses Google Search grounding instead of direct fetch — it cannot access export.txt until Google has indexed it. For Gemini, copy the export contents directly into the prompt: visit export.txt, copy the text, and paste it before your question.
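If you do the copy-paste workaround often, a one-liner saves a step. This is a sketch, not an official command: the URL below is a placeholder (substitute the engine's actual export.txt address), and the clipboard tool varies by platform (`pbcopy` on macOS, `xclip` or `wl-copy` on Linux).

```shell
# Placeholder URL -- replace with the engine's real export.txt address.
EXPORT_URL="https://example.com/export.txt"

# Fetch the export (-f: fail on HTTP errors, -sS: quiet but show errors,
# -L: follow redirects) and put it on the clipboard, ready to paste
# into Gemini ahead of your question.
curl -fsSL "$EXPORT_URL" | pbcopy   # Linux: pipe to `xclip -selection clipboard`
```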

Click any prompt to copy it. Paste into ChatGPT, Claude, Gemini, or any AI assistant. It will read the engine’s full dataset and respond.

Or open directly

Export updated with every deploy. Plain text. Works with any LLM that has a 128K+ context window.
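Before pasting the full export, you can sanity-check that it fits in your model's window. A minimal sketch, assuming the common ~4 characters/token rule of thumb for English text — a rough heuristic, not any model's actual tokenizer (use a real tokenizer library for exact counts):

```python
def fits_in_context(text: str,
                    context_tokens: int = 128_000,
                    chars_per_token: float = 4.0) -> bool:
    """Rough check: will this text fit in a model's context window?

    Estimates token count as len(text) / chars_per_token. The 4.0
    ratio is a heuristic for English prose; real tokenizers differ.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= context_tokens

# A 400,000-character export is roughly 100K tokens -> fits in 128K.
sample = "x" * 400_000
print(fits_in_context(sample))  # True
```

Remember to leave headroom: your question and the model's answer also consume tokens, so an export that "just fits" may still get truncated.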