Language may be less about saying more than about saying enough.

Why do very different languages seem to converge on a similar rate of communication? A common answer is that humans are hitting a hard cognitive ceiling. Our new paper argues something subtler: language is not optimized for maximum throughput, but for shared understanding under real-world conditions of noise, ambiguity, memory, and repair.

"We Speak Through Shared Worlds: A Rate–Distortion View of Human Language" proposes that communication works inside a context-adaptive regime of collaborative compression. Shared history, expertise, culture, and common ground are not just background to communication; they are part of its compression machinery.
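The intuition behind "common ground as compression machinery" can be shown with a toy calculation (my own illustration, not from the paper): if shared context concentrates probability on a few likely referents, the Shannon entropy, and hence the number of bits a speaker must transmit, drops.

```python
import math

def entropy(p):
    # Shannon entropy in bits of a probability distribution
    return -sum(q * math.log2(q) for q in p if q > 0)

# A speaker must convey one of four referents.
# Without shared context, all are equally likely.
no_context = [0.25, 0.25, 0.25, 0.25]

# With common ground (shared history narrows the likely referents),
# the distribution is sharply peaked, so fewer bits are needed.
with_context = [0.85, 0.05, 0.05, 0.05]

print(f"bits needed, no shared context:  {entropy(no_context):.2f}")  # 2.00
print(f"bits needed, with common ground: {entropy(with_context):.2f}")
```

Under a rate–distortion view, the speaker can spend those saved bits elsewhere, or simply say less while preserving understanding.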

Read the paper.

Browse the Deck

Run the Simulation

Watch the video as you browse the deck
