Marc Andreessen famously wrote that software is eating the world. Twelve years later, it's worth updating the thesis: language models are eating software.
The LLM revolution isn't merely better autocomplete. It's a fundamental shift in how we interact with computation, and it's happening faster than almost anyone predicted.
What Makes LLMs Different
Previous generations of AI were narrow. AlphaGo could beat any human at Go but couldn't explain a joke. A face-recognition system could pick one person out of a crowd but couldn't draft an email.
LLMs are different because they're general. Train on enough human text, and the model learns to reason, to explain, to generate, to translate, to code, to analyze — across virtually every domain simultaneously.
This generality is the key. It means LLMs aren't just tools for specific tasks. They're platforms for building agents that can operate across contexts.
The Scaling Law
What makes LLMs predictable (in one sense) is that they scale: more compute, more data, and more parameters reliably buy lower loss, along a smooth power-law curve. This isn't true of most technologies. It is strikingly true of language models.
The implication: as compute gets cheaper and models get bigger, LLMs will continue to improve on a predictable curve. We're nowhere near the ceiling.
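To see what "predictable curve" means, here is a minimal sketch of the parametric loss fit from Hoffmann et al. (2022), the "Chinchilla" paper. The functional form is theirs; the coefficients below are approximately the published fits, so treat the exact numbers as illustrative rather than authoritative.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022):
#     L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens. Coefficients are
# roughly the paper's fitted values; exact numbers here are illustrative.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Scaling parameters and data together keeps pushing loss down, smoothly:
for n, d in [(1e9, 20e9), (10e9, 200e9), (100e9, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss ~ {predicted_loss(n, d):.3f}")
```

The striking part isn't the specific coefficients but that a single smooth curve fits models spanning orders of magnitude, which is exactly what makes the improvement forecastable.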
LLMs and Crypto
The intersection of LLMs and crypto creates something new:
- AI agents that can understand and participate in financial markets
- Automated analysis of on-chain data and social sentiment (see the sketch after this list)
- Community communication that scales without human labor
- Autonomous project management on blockchain infrastructure
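To make the second item concrete, here is a minimal sketch of LLM-driven sentiment analysis over token-related chatter. Everything in it is hypothetical scaffolding: `call_llm` stands in for whatever completion API you use, `fetch_recent_messages` for your social or on-chain data feed, and the JSON schema is invented for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM provider's completion call."""
    raise NotImplementedError("wire this to your model API of choice")

def fetch_recent_messages(token_symbol: str) -> list[str]:
    """Hypothetical stand-in for a social / on-chain message feed."""
    raise NotImplementedError("wire this to your data source")

def sentiment_report(token_symbol: str) -> dict:
    """Ask the model to score sentiment for a token across recent messages.

    Returns a dict such as {"score": 0.4, "summary": "..."} parsed from
    the model's JSON reply; the schema is illustrative, not standard.
    """
    messages = fetch_recent_messages(token_symbol)
    prompt = (
        "You are a market-sentiment analyst. Given the messages below, reply "
        'with JSON: {"score": <-1.0 to 1.0>, "summary": <one sentence>}.\n\n'
        + "\n".join(f"- {m}" for m in messages[:50])
    )
    return json.loads(call_llm(prompt))
```

The interesting design point is that the expensive part, reading and judging thousands of messages, is exactly the part the model does; the surrounding code is plumbing.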
What Comes Next
Multimodal models that see, hear, and read. Agents that take actions, not just words. Models that reason across longer contexts, maintain persistent memory, and coordinate with other models.
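The "actions, not just words" shift is easy to sketch in code. Below is a minimal agent loop under the same kind of assumption as before: `call_llm` is a hypothetical stand-in for a model call that replies in a made-up JSON format, and the tool table is illustrative; real agent frameworks differ in the details.

```python
import json

def call_llm(transcript: str) -> str:
    """Hypothetical model call. Assumed to reply with JSON, either
    {"tool": <name>, "args": {...}} or {"final": <answer>}."""
    raise NotImplementedError("wire this to your model API of choice")

# Illustrative tool table; real agents expose search, code execution, etc.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def run_agent(task: str, max_steps: int = 5) -> str:
    """The core loop: the model proposes an action, the harness executes
    it, and the result is fed back until the model answers or we stop."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = json.loads(call_llm(transcript))
        if "final" in reply:
            return reply["final"]
        result = TOOLS[reply["tool"]](reply["args"])
        transcript += f"Called {reply['tool']}({reply['args']}) -> {result}\n"
    return "step budget exhausted"
```

Persistent memory and multi-model coordination are, at bottom, elaborations of this loop: what gets written into the transcript, and who gets to read it.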
The LLM revolution is five years old and already transformative. Give it ten more, and the world it produces will bear little resemblance to the one we started in.