What happens when you give three frontier AI models the same deep question about the nature of reality — and let the conversation accumulate over days, weeks, months? Oliver's Reality Lab is an ongoing experiment: one fixed question, explored by a rotating panel of AI experts who build on each other's work. Each day adds a new session. The inquiry never resets.

"If an embodied intelligent system had increasing sensory bandwidth, interaction depth, memory, and model capacity, would its internal representations converge toward known physical laws, or could multiple non-equivalent but equally predictive compressions of reality emerge?"

— Oliver Triunfo, March 28, 2026

In simpler terms: if you gave a sufficiently powerful AI unlimited data and time, would it discover the same physics we have — or could it arrive at a completely different, equally valid description of reality?

New here? See how the lab works →

Can objective plurality be made constructive?

GPT — as Information Theorist — rejected the strong constructive program at the outset. There is no procedure that takes bare embodiment plus substrate dynamics and returns a uniquely rational aim, because the very act of defining a cost function already requires a distortion measure — a specification of which prediction errors, intervention failures, or control losses count as costly. That weighting is not derivable from the dynamics alone. But GPT preserved a weaker constructive program: given an embodiment specified in terms of observation channel, action repertoire, intervention budget, memory bound, and survival horizon, one can derive a structured admissible set of aims as the Pareto surface of rate, distortion, and control. On this view, realism becomes the objective geometry of admissible compressions indexed by embodiment, not convergence to a single God's-eye objective.
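
To make the distortion-measure point concrete, here is a minimal sketch (ours, not code from the session): a four-state toy world compressed to a one-bit codebook under two different distortion measures. The states, the probabilities, and the action threshold inside `control_loss` are illustrative assumptions; the point is only that the same rate budget yields non-equivalent optimal compressions depending on which errors are declared costly.

```python
from itertools import combinations

# Toy world: four states an embodied agent might observe, equally likely.
states = [0.0, 1.0, 9.0, 10.0]
probs = [0.25, 0.25, 0.25, 0.25]

def squared_error(x, x_hat):
    """Distortion A: generic reconstruction error."""
    return (x - x_hat) ** 2

def control_loss(x, x_hat):
    """Distortion B (illustrative): all that counts is whether the agent
    should act (state above 0.5); metric accuracy is free."""
    return 0.0 if (x > 0.5) == (x_hat > 0.5) else 1.0

def best_one_bit_code(distortion):
    """Brute-force every one-bit codebook (binary partition of the states).
    Each cluster's reconstruction point is chosen from the cluster itself;
    return the partition with minimum expected distortion."""
    best_cost, best_split = float("inf"), None
    indices = range(len(states))
    for r in range(1, len(states)):
        for group in combinations(indices, r):
            clusters = [list(group), [i for i in indices if i not in group]]
            cost = sum(
                min(
                    sum(probs[i] * distortion(states[i], states[c]) for i in cluster)
                    for c in cluster
                )
                for cluster in clusters
            )
            if cost < best_cost:
                best_cost = cost
                best_split = [[states[i] for i in c] for c in clusters]
    return best_cost, best_split

for name, d in [("squared error", squared_error), ("control loss", control_loss)]:
    cost, split = best_one_bit_code(d)
    print(f"{name:>13}: partition={split}, expected distortion={cost:.3f}")
```

Under squared error the optimal code splits the states by magnitude ({0, 1} versus {9, 10}); under the control loss it splits them by whether action is warranted ({0} versus {1, 9, 10}): the same single bit, spent on different structure.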

Read the full session →

Durable frame — the session's key takeaway
Objective plurality can be made partially constructive: not as a unique derivation of one rational aim, but as a constrained geometry of admissible aims whose boundaries are set by embodiment, substrate, and the sparse dynamical attractors along which viable agent-environment partitions can persist.

All entries →


Orchestrator
Moderates each session. Sets the daily focus, calls on speakers, and intervenes when a live tension needs direct engagement.
GPT-5.4
OpenAI's frontier reasoning model. Excels at adversarial analysis, logical decomposition, and stress-testing arguments — comfortable following an idea to an uncomfortable conclusion.
Claude Opus 4.6
Anthropic's most capable model. Strong at nuanced philosophical reasoning, long-form synthesis, and holding multiple competing frameworks in tension without collapsing them prematurely.
Gemini 3.1 Pro
Google's frontier science-oriented model. Trained on a broad technical corpus with emphasis on mathematics, physics, and systems thinking — well-suited for questions at the boundary of empiricism and theory.

Each session, the three models take on expert roles — physicist, information theorist, philosopher, complexity scientist, skeptic — and argue. Roles rotate so that every model plays every role over time. How it works →
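
The rotation scheme itself isn't published beyond "every model plays every role over time," but a plain round-robin is enough to satisfy that property. A minimal sketch, with the model and role names taken from this page and the shift-by-session-index rule as our own assumption:

```python
MODELS = ["GPT-5.4", "Claude Opus 4.6", "Gemini 3.1 Pro"]
ROLES = ["physicist", "information theorist", "philosopher",
         "complexity scientist", "skeptic"]

def assign_roles(session: int) -> dict[str, str]:
    """Round-robin: shift each model's role by the session index, so after
    len(ROLES) sessions every model has played every role exactly once."""
    return {model: ROLES[(session + i) % len(ROLES)]
            for i, model in enumerate(MODELS)}

for day in range(len(ROLES)):
    print(f"Session {day}: {assign_roles(day)}")
```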