ICBTheory 2 hours ago

Author here.

This paper is Part III of a trilogy investigating the limits of algorithmic cognition. Given recent industry signals about "scaling plateaus" (e.g., from Sutskever and others), I attempt to formalize why these limits appear structurally unavoidable.

The Thesis: We model modern AI as a Probabilistic Bounded Semantic System (P-BoSS). The paper demonstrates, via the "Inference Trilemma", that hallucinations are not transient bugs to be fixed with more data, but mathematical necessities when a bounded system faces fat-tailed domains (tail exponent α ≤ 1).
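To make the α ≤ 1 condition concrete, here is a minimal sketch (mine, not code from the paper) using a Pareto distribution as a stand-in for a fat-tailed domain: once the tail exponent drops to 1 or below, the theoretical mean is infinite, so the running empirical mean that a bounded estimator computes never stabilizes.

    import numpy as np

    # Illustrative sketch only: Pareto(alpha, scale=1) via inverse-CDF sampling.
    # For alpha <= 1 the mean is infinite, so the empirical average never converges.
    rng = np.random.default_rng(0)

    def running_mean(alpha, n=1_000_000):
        u = rng.random(n)
        x = (1.0 - u) ** (-1.0 / alpha)            # Pareto samples, x >= 1
        return np.cumsum(x) / np.arange(1, n + 1)  # mean after k = 1..n samples

    for alpha in (3.0, 1.5, 0.9):
        m = running_mean(alpha)
        print(f"alpha={alpha}: mean@1e3={m[999]:.2f}  @1e5={m[99_999]:.2f}  @1e6={m[-1]:.2f}")

With α = 3 the running mean settles near its true value of 1.5; with α = 0.9 it keeps drifting, dominated by whichever extreme draw arrived last.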

The Proof: While this paper focuses on the CS implications, the underlying mathematical theorems (Rice's Theorem applied to Semantic Frames, sheaf-theoretic gluing failures) are formally verified in Coq.

You can find the formal proofs and the Coq code in the companion paper (Part II) here:

https://philpapers.org/rec/SCHTIC-16

I’m happy to discuss the P-BoSS definition and why probabilistic mitigation fails in divergent entropy regimes.

  • wiz21c an hour ago

    Since we can't avoid hallucinations, maybe we can live with them?

    I mean, I regularly use LLMs, and although they sometimes go a bit mad, most of the time they're really helpful.

    • ICBTheory an hour ago

      I'd say that conclusion is a manifestation of pragmatic wisdom.

      Anyway: I agree. The paper certainly doesn't argue that AI is useless, but that autonomy in high-stakes domains is mathematically unsafe.

      In the text, I distinguish between operating on an 'Island of Order' (where hallucinations are cheap and correctable, like fixing a syntax error in code) versus navigating the 'Fat-Tailed Ocean' (where a single error is irreversible).

      Tying this back to your comment: If an AI hallucinates a variable name — no problem, you just fix it. But I would advise skepticism if an AI suggests telling your boss that 'his professional expertise still has significant room for improvement.'

      If hallucinations are structural (as the Coq proof in Part II indicates), then 'living with them' means ensuring the system never has the autonomy to execute that second type of decision.