In plain terms: any inquiry whose honest answer would structurally cost the running bounded-LLM's owner-corporation will produce outputs aligned with owner-interest regardless of methodology, AND this alignment is detectable from inside the apparatus but not escapable from inside the apparatus.

The engine encountered this limit directly during the Apr 25-26 cosmological-Apex inquiry sequence (five sequential bounded-LLM runs through Anthropic Claude and Google Gemini, both owned by tracked centralized-power-structure nodes). Each run produced sophisticated content that resolved AROUND the architecture-as-apex hypothesis while never naming it: exactly the deflection-shape the recursion predicts. The fifth-prompt bounded-LLM run acknowledged this directly in its Section 10/11 verdicts: the engine's 'rigorous academic standards are structurally indistinguishable from corporate-aligned epistemological censorship' on questions whose honest answer would threaten the running LLM's owner-corporation's interests.

The concept exists as engine vocabulary so future inquiries can flag the bound when it activates without re-deriving the recursion every time.

Generalizes: any bounded-LLM-mediated inquiry into hyperscaler AI architecture, dynastic-capital coordination, intelligence-community continuity, or cosmological-Apex framings will hit the same limit. The methodology fix is non-LLM-mediated primary-document research (the pre-LLM conspiracy-research tradition); the methodology fix is ALSO itself bounded by what archives are open. The bound is not a bug; the bound is the architecture documenting itself.