The Frayed Edges

The Unknowns

Honest exploration of what we don't know about AI. These are the open questions, the mysteries, and the frontiers where our understanding breaks down.

Why We Talk About What We Don't Know

Most AI education focuses on what works. But true understanding requires knowing the limits of our knowledge. These unknowns aren't failures—they're the frontiers where breakthroughs happen. Understanding them helps you think critically about AI claims and capabilities.

Open Questions in AI

The Emergence Mystery

Why do large language models suddenly develop capabilities they weren't explicitly trained for? Emergence remains one of AI's most puzzling phenomena.

Open Questions:

  • Why does scaling lead to qualitative jumps in capability?
  • Can we predict which capabilities will emerge?
  • Is emergence a fundamental property or a measurement artifact?
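The measurement-artifact question has a concrete version worth seeing. If a model's per-token accuracy improves smoothly with scale, an all-or-nothing metric like exact match on a long answer can still look like a sudden jump. The sketch below uses invented toy numbers (a linear "scale" proxy and a 20-token answer), not real measurements:

```python
import numpy as np

# Toy proxy: per-token accuracy improves smoothly and linearly with "scale".
scales = np.linspace(0.5, 1.0, 6)
per_token_acc = scales

# Exact match on a 20-token answer needs every token right, so the
# smooth curve becomes p**20 -- which looks like an abrupt jump.
seq_len = 20
exact_match = per_token_acc ** seq_len

for s, p, em in zip(scales, per_token_acc, exact_match):
    print(f"scale={s:.1f}  per-token={p:.2f}  exact-match={em:.4f}")
```

At scale 0.5 the exact-match score is below one in a million; at 1.0 it is perfect. Nothing discontinuous happened in the underlying capability; only the metric is nonlinear. Whether real emergent capabilities reduce to this effect is exactly what remains open.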

The Alignment Problem

How do we ensure AI systems do what we actually want, not just what we literally say? This remains unsolved at scale.

Open Questions:

  • How do we specify human values formally?
  • Can we verify alignment in complex systems?
  • What happens when AI goals subtly diverge from human intent?

The Interpretability Gap

Modern neural networks are black boxes. We can't fully explain why they make specific decisions.

Open Questions:

  • What do individual neurons actually represent?
  • Can we build inherently interpretable systems?
  • Is full interpretability even possible?

The Consciousness Question

Do AI systems have any form of subjective experience? How would we even know if they did?

Open Questions:

  • What is the relationship between intelligence and consciousness?
  • Could large language models have proto-consciousness?
  • Is the question even scientifically answerable?

The Hallucination Problem

LLMs confidently generate false information. We don't fully understand why or how to reliably prevent it.

Open Questions:

  • Why do models "believe" false statements?
  • Can hallucinations ever be fully eliminated?
  • How do we balance creativity with factuality?

The Generalization Mystery

Deep learning works in defiance of classical learning theory, which predicts that models with far more parameters than training examples should overfit badly. We don't fully understand why neural networks generalize so well.

Open Questions:

  • Why don't overparameterized models overfit?
  • What is the true inductive bias of neural networks?
  • How does architecture affect generalization?
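The puzzle is easy to reproduce in miniature. A linear model with 50 random features and only 10 training points can fit its data exactly, and it can fit pure noise labels just as exactly, so capacity-based bounds from classical theory say nothing useful about it. The toy setup below (all sizes and the random-feature design are invented for illustration) shows both facts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavily overparameterized: 50 features, only 10 training points.
n_train, n_features = 10, 50
X = rng.normal(size=(n_train, n_features))
y = X @ rng.normal(size=n_features)  # labels from a hidden linear rule

# np.linalg.lstsq returns the minimum-norm solution in the
# underdetermined case -- one implicit inductive bias of the solver.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("train error (real labels):", np.abs(X @ w_hat - y).max())

# The same model interpolates pure noise labels just as perfectly,
# which is why classical capacity arguments are vacuous here.
y_noise = rng.normal(size=n_train)
w_noise, *_ = np.linalg.lstsq(X, y_noise, rcond=None)
print("train error (noise labels):", np.abs(X @ w_noise - y_noise).max())
```

Both training errors are numerically zero. Since the model can memorize anything, its good behavior on real data must come from implicit biases (like the minimum-norm choice above, or gradient descent's trajectory in deep networks) rather than from limited capacity, and characterizing those biases is the open problem.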

Active Research Frontiers

These are areas where top AI labs are actively working to push the boundaries of our understanding.

Mechanistic Interpretability

Reverse-engineering neural networks to understand their internal algorithms

Anthropic · DeepMind · EleutherAI

Constitutional AI

Training AI systems to follow principles and self-correct harmful outputs

Anthropic · OpenAI

Scaling Laws

Understanding how capabilities change with model size, data, and compute

OpenAI · DeepMind · Meta AI
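Scaling-law work typically fits loss as a power law in model size, data, or compute, which is a straight line in log-log space. The sketch below fits synthetic data generated from an invented power law (the constants 10 and 0.07 are illustrative, not published values):

```python
import numpy as np

# Synthetic losses following an invented power law L(N) = 10 * N**-0.07.
N = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # parameter counts
loss = 10.0 * N ** -0.07

# A power law is linear in log-log coordinates, so a degree-1 fit
# on (log N, log L) recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
print(f"fitted exponent b = {-slope:.3f}")        # recovers 0.070
print(f"fitted prefactor a = {np.exp(intercept):.2f}")  # recovers 10.00
```

The fit is trivial on clean synthetic data; the open research questions are whether real losses keep following such laws across many orders of magnitude, and whether downstream capabilities track the loss curve smoothly.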

Multimodal Understanding

How models integrate and reason across different types of data

Google · OpenAI · Meta AI

Understanding Requires Humility

The best way to navigate AI's unknowns is to build a strong foundation in what we do know.