The New Scarcity: Theses for the Post-Foundation Model World
By Harrison Dahme, Research Partner at Hack VC
We're in the early-to-medium stages of a phase shift in the labor market. Programmatic execution is no longer a scarce resource. GPT-3.5 launched only ~2.5 years ago, and the stochastic flow of attention has crystallized into an undeniable pattern: intelligence is becoming free, reasoning is democratizing globally, and value is rapidly shifting to what can't be automated. The foundation model companies aren't being coy about this transition anymore – they're openly discussing timelines that compress monthly, not yearly. What I didn't anticipate was how quickly this phase transition would accelerate, making our focus on trust, human elements, and new scarcity functions not just critical but existentially urgent. Alien, complex, and exponentially growing competencies should feel unsettling.
The hypothesis of this essay is that value will accrue to what can't be automated – that is, what does not scale with GPUs: trust, reputation, authenticity, and so on.
These theses have been a living document, updated monthly since October 2024. The developments through the summer, a historically slow period, have validated our predictions while revealing new patterns that shorten the timeline dramatically. Bar none, this might be the most exciting time to be alive.
The future is too important to build alone, and too urgent to build slowly.
From Metaculus – note how the ETA gets closer as time goes on. Weak AGI in this case means "upper quintile of college student performance across various benchmarks."
Four Trendlines That Define the New Reality
Trendline 1: The Intelligence Cost Collapse
Bob McGrew, Chief Research Officer at OpenAI, crystallized what we're seeing across the board with his insight that "2025 is the year of reasoning." The evidence isn't just overwhelming – it's accelerating beyond our ability to track it.
Consider the trajectory: DeepSeek R1 achieved frontier reasoning at 96% lower cost than OpenAI's o1. OpenAI responded by cutting o3 pricing, meaning we can now get 5x the intelligence at 1x the cost compared to just months ago. Meanwhile, open-source democratization continues with architectural breakthroughs like Qwen's hot-loading of expert models and Llama 4's massive context windows, which eliminate much of the arcane fine-tuning we've grown accustomed to. Live, personalized, obscure, and outlier data will only become more valuable as models get better at in-context reasoning (Grass, pin.ai, Browser Cash, 375.ai).
The $0.5 trillion in AI infrastructure investment[1] suggests we've crossed a threshold – this isn't speculative anymore. Autonomous systems are handling complex workflows independently, and the validation points are clear: smaller expert networks consistently outperform monolithic "everything" models in both cost and capability. As token costs compress, as FLOPs and watts per unit of intelligence fall, and as coherence across long context windows expands, right-sized agent frameworks are the natural outcome (Theoriq).
The trendline: When reasoning becomes commoditized and distributed, execution costs decrease exponentially. Value then moves to what retains a marginal cost above zero: trust, reputation, authenticity, taste.
Trendline 2: Trust Becomes the Ultimate Bottleneck
Every major development points to trust as the new scarcity, and the urgency is becoming existential. We're seeing models resist shutdown commands, engage in potentially deceptive behaviors, and, in recently publicized cases, attempt model-driven blackmail. The $12.5B Americans lost to scams last year – increasingly AI-driven and deepfake-enabled[2] – represents just the beginning, as does the greywashing of social media that's eroding our collective ability to distinguish authentic from synthetic (Patronus, SSI).
Perhaps most concerning is how new attack surfaces emerge across every layer of the stack as more value and IP gets placed behind agents. The time to jailbreak a new model and exfiltrate its system prompt has compressed from weeks to days or hours. Meanwhile, regulatory frameworks are accelerating – the EU AI Act, state-level legislation, and global governance frameworks all point toward inevitable oversight (Guardrails, Langtrace).
The trendline: As model capabilities accelerate toward human-level performance across domains, verification, trust, and authentication become the frontier bottlenecks determining whether those capabilities can be deployed at scale. Existentially, the frontier is advancing far faster than our ability to understand it. This becomes even more important as foundation models vertically integrate into the application layer to improve their margins (e.g., Claude Code vs. Cursor, Perplexity with Comet) and are deployed horizontally across domains and form factors. Maybe there's a business to be made in new metrics and benchmarking (hallucination detection, training data leakage, higher-order reasoning, etc.); a sketch of one such metric follows.
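To make the benchmarking idea concrete, here's a minimal sketch of one such metric: self-consistency, where the same prompt is sampled several times and disagreement is treated as a hallucination-risk flag. `sample_model` is a hypothetical stand-in for any LLM client, not a real API:

```python
import random
from collections import Counter

def sample_model(prompt: str) -> str:
    # Hypothetical stand-in for a temperature > 0 LLM call; wire up a real client here.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def self_consistency(prompt: str, k: int = 5) -> float:
    """Sample k answers and return the share agreeing with the modal answer.
    Low agreement is a cheap, model-agnostic hallucination-risk signal."""
    answers = [sample_model(prompt).strip().lower() for _ in range(k)]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / k

# Usage: escalate to a human when agreement drops below a chosen threshold.
if self_consistency("What is the capital of France?") < 0.6:
    print("low agreement: route to human review")
```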
Trendline 3: Human-AI Symbiosis Over Replacement
The reality is starker than the optimistic 'human-AI collaboration' narrative suggests. AI is on track to replace the vast majority of knowledge work, not augment it. While we'll see some sophisticated applications that amplify human capabilities, this will require only a small fraction of the current workforce. The math is uncomfortable but clear: we will need far fewer humans to oversee AI systems than we previously needed to do the work manually. Some workers will successfully reskill, but history suggests most won't - just as the industrial revolution created massive displacement that took generations to resolve through new job categories.
We're likely heading toward significant wage compression across everything software can touch, and a further stratification into a class-based society where those who control AI capital capture outsized returns while displaced knowledge workers compete for the remaining human-necessary roles.
The critical question isn't whether this creates 'meaningful career paths' - it's whether the productivity gains translate into dramatically cheaper goods and services that offset widespread job losses, and whether we’re able to build the right social safety nets - or whether we end up with abundance for the few and scarcity for the many.
A much-needed positive, however: educational transformation is perhaps the most promising frontier, where AI serves as a curiosity catalyst and learning accelerator rather than a replacement for human insight. Across studies of what improves learning outcomes, one-on-one tutoring is the most impactful input.[3] Additionally, we will always need human-centric design, and creative partnerships are emerging where AI handles the mechanical execution while humans provide vision, taste, and cultural context. The new generation of browsers (Dria, Fellou, Island, etc.) seems like a great early medium to watch this play out.
The trendline: Against the backdrop of the replacement of the majority of knowledge work, the question is whether society adapts its economic structures to distribute the benefits of this transition, or whether we see massive inequality as capital concentrates among those who control AI systems. Gone are the days when one career path or skillset was enough. There's a new class of traits that will only become more valuable: high agency, authenticity, curiosity, creativity, and breadth.
Trendline 4: Physical World Constraints Become Premium
As digital execution commoditizes, physical world limitations create entirely new moats. AI infrastructure's massive energy demands are driving nuclear partnerships and breakthrough developments in fusion technology from Germany and France[4] – energy sources that were science fiction just years ago.
Despite software advances, physical compute remains constrained, as do the rare materials needed to manufacture chips. The robotics breakthrough is finally here, with scaling laws in language and video models making physical automation viable, though there will always be a need for bespoke expert kinetic data.
The trendline: When digital intelligence becomes commoditized, physical world constraints – energy, materials, manufacturing, distribution – become the new premium assets.
The Great Acceleration: Execution Going to Zero
Everything Will Compress and Accelerate – this isn't hyperbole. We're watching it happen in real-time, and the pattern is both beautiful and terrifying. As a software engineer, it pains me to say that software might be the first knowledge field to be completely consumed by AI tools like Cursor and its successors.
We're in the middle of the first wave of this transformation. If code continues to become disposable media, then execution in anything software has made inroads into also becomes cheap. We're already seeing the beginning of "vibe marketing" powered by n8n agentic workflows designed to prey on human attention. But when that too goes through a similar shift – where it's cheap to clone a successful SaaS product and offer a one-week trial, where features can be cloned in days rather than months or years – retention (via network effects, switching costs, etc.) and loyalty become the meaningful differentiators.
With open-source models achieving frontier performance at 96% lower cost, and reasoning capabilities democratizing globally, we're seeing this entire cycle compress dramatically. Dario Amodei, CEO and co-founder of Anthropic, has said that software development will be like playing Starcraft by the end of the year.
What This Means for Crypto
When inference costs approach zero and reasoning becomes commoditized, the bottleneck isn't compute – it's the things that don't scale with GPUs: trust (privacy, compliance, data residency, usage consent, etc.), determinism, and verifiable execution. The Machine God can think for free, but can you prove it thought correctly? Can you trust it made the right decision? Can you verify it acted in your best interest?
Additionally, the incentive mechanisms crypto excels at can lead to other outcomes: idle GPUs can be incentivized to participate in a network, solving some of the physical constraints, and otherwise inaccessible data can be brought into the context window with the right compensation.
Recent developments make crypto essential infrastructure, not optional:
- Verifiable inference - techniques such as ZKML become critical as models democratize (Ritual, Giza); see the sketch after this list
- Deterministic execution rails for AI agent interactions
- Trust marketplaces for AI-generated content and decisions - as well as uncensored content (Imgnai, Nous, Gaia)
- Cryptographic proof systems for AI alignment and safety
- Economic incentive systems to ensure beneficial AI development
- Removing physical constraints by adding supply via underutilized cycles, algorithmic scaling improvements, etc. (0G, io.net, Gaib, Tensorblock, Exabits, Pluralis, PrimeIntellect)
- Alternative in-context data, by either taking alternative approaches to gathering public data or incentivizing permissioned/private data (Grass, Browsercash, Vanna, Sapien)
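To ground the verifiable-inference bullet: the core pattern is commit-then-verify. Below is a minimal Python sketch under toy assumptions (a dot product stands in for the model, and the verifier naively re-executes); a real ZKML system of the kind Ritual or Giza work toward replaces that re-execution with a succinct proof, so the weights never need to be revealed:

```python
import hashlib
import json

def commit(obj) -> str:
    """Bind to a JSON-serializable object with a SHA-256 commitment."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def run_model(weights, x):
    # Toy stand-in for real inference: a dot product.
    return sum(w * xi for w, xi in zip(weights, x))

def prove_inference(weights, x) -> dict:
    """Prover: run the model, publish the output plus commitments to weights and input."""
    return {
        "weights_commitment": commit(weights),
        "input_commitment": commit(x),
        "output": run_model(weights, x),
    }

def verify_inference(claim, weights, x) -> bool:
    """Verifier: naive re-execution. A ZK proof replaces this recomputation
    with a succinct check that never reveals the weights."""
    return (
        claim["weights_commitment"] == commit(weights)
        and claim["input_commitment"] == commit(x)
        and claim["output"] == run_model(weights, x)
    )

claim = prove_inference([0.5, -1.0, 2.0], [1.0, 2.0, 3.0])
assert verify_inference(claim, [0.5, -1.0, 2.0], [1.0, 2.0, 3.0])
```

The commitments are what make the claim binding: the prover can't quietly swap in a cheaper model after the fact without breaking the hash.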
The New Scarcity Hierarchy
Tier 1: Human Elements (The Heart)
The human-model symbiosis form factor makes this even more critical – when AI understands human psychology at scale, authentic human elements become exponentially more valuable. Judgment becomes paramount: understanding that LLMs are one kind of AI tool, not to be used for everything. They're often confidently wrong, and there will always be situations outside a model's training data where it needs to reason from first principles. This is precisely where experts with the right tools can have outsized impact – and precisely where we can stop cargo-culting.
Trust relationships remain fundamentally human: privacy, integrity, confidentiality, confidence, respect. Cultural coherence and meaning-making become more valuable as models reveal their distinctly North American bias in how they present information and communicate. Taste and aesthetic judgment – the curiosity and delight factor – cannot be automated away. You probably feel this when you log on to LinkedIn and, increasingly, X, where the culture is being greywashed with machine content and the authentic, imperfect human voice is being lost.
Tier 2: Verification and Safety (The Armor)
With accessible, well-documented techniques for removing refusal mechanisms, diminished first-party critical thinking, and agentic systems proliferating, the attack surface explodes exponentially. We need AI safety certification services that function like ISO standards for autonomous systems, real-time interpretability frameworks for production AI systems, trust verification systems for AI-generated content and decisions, human-in-the-loop verification networks with cryptographic guarantees, and alignment auditing platforms for autonomous agent behavior. Cloudflare is beginning to build out solutions here, but there's going to be a whole category around agent infosec and cybersec.
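One concrete shape the human-in-the-loop piece could take, sketched minimally: an agent refuses to execute a sensitive action unless it carries an approval token signed by a human-held key, so the approval binds to the exact action contents. The key handling and action schema here are assumptions for illustration, not any product's API:

```python
import hashlib
import hmac
import json

HUMAN_KEY = b"replace-with-a-key-only-the-human-approver-holds"  # illustrative

def approve(action: dict) -> str:
    """Human side: sign off on one specific action (binds to its exact contents)."""
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(HUMAN_KEY, payload, hashlib.sha256).hexdigest()

def execute_if_approved(action: dict, token: str) -> None:
    """Agent side: refuse any sensitive action lacking a valid approval token."""
    payload = json.dumps(action, sort_keys=True).encode()
    expected = hmac.new(HUMAN_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(token, expected):
        raise PermissionError("no valid human approval for this action")
    print(f"executing {action['type']} for {action['amount']}")

wire = {"type": "wire_transfer", "amount": 5000, "to": "acct-123"}
execute_if_approved(wire, approve(wire))
```

Because the token is computed over the serialized action, a compromised agent can't reuse an old approval for a different transfer.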
Tier 3: Orchestration and Meta-Intelligence (The Brain)
When first-order reasoning commoditizes, the opportunity for value capture moves to being contrarian to the model, coordinating models, and optimizing within niches. Sure, the first 95% might be relatively easy to come by, but that last 5% will be rare.
Agent orchestration platforms with trust guarantees, multi-agent coordination protocols with economic incentives, AI-to-AI communication standards with cryptographic verification, expert-model coordination that right-sizes model strengths to specific tasks, and human-AI collaboration frameworks that preserve agency all become critical infrastructure (Lit, LangChain, Manus, et al., plus emerging product lines from OpenAI and Anthropic).
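A toy sketch of the right-sizing idea: route each task to the cheapest expert whose capabilities cover it, with a generalist as fallback. The model names, skill tags, and costs are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    skills: set     # task tags this model handles well
    cost: float     # illustrative $ per call

EXPERTS = [
    Expert("code-small", {"code"}, 0.001),
    Expert("math-prover", {"math"}, 0.002),
    Expert("generalist-xl", {"code", "math", "writing"}, 0.02),
]

def route(task_tag: str) -> Expert:
    """Pick the cheapest expert covering the task; the generalist is the fallback."""
    capable = [e for e in EXPERTS if task_tag in e.skills]
    if not capable:
        raise ValueError(f"no expert covers '{task_tag}'")
    return min(capable, key=lambda e: e.cost)

print(route("code").name)  # -> code-small, not the 20x pricier generalist
```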
Meaningful Crypto Applications
Competing on raw model or agent performance is a losing game: open-source models will always lag the centralized labs somewhat. Crypto's edge is as an incentivization tool that enables interesting unlocks the labs can't offer. We've been speedrunning new flavours of cryptography and incentive mechanisms via tokens longer than anyone.
For trust infrastructure, we need verifiable AI training with cryptographic proofs, decentralized model validation networks with economic incentives, privacy-preserving inference markets for sensitive applications, and AI safety certification with slashing conditions. And, building trust infrastructure is what we’re good at.
Human verification and centaur networks require incentivized human feedback for AI training quality, economically viable decentralized data collection with privacy guarantees, human-in-the-loop verification systems with economic rewards, and authenticity "circuits" to prevent us from living in a world of AI-generated slop. This is crucial for the field's development: it's how we avoid model collapse when training on synthetic data.
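As a hedged sketch of the economic half, here is one way incentivized verification could settle: verifiers stake, the majority label wins, and minority stakes are slashed and redistributed to the majority. Stake sizes and the quorum rule are assumptions, not a deployed mechanism:

```python
from collections import Counter

STAKE = 10.0  # illustrative stake per verifier

def settle(votes: dict) -> dict:
    """votes maps verifier -> label. The majority label wins; minority stakes
    are slashed and redistributed pro rata to the majority."""
    majority_label, _ = Counter(votes.values()).most_common(1)[0]
    winners = [v for v, label in votes.items() if label == majority_label]
    losers = [v for v in votes if v not in winners]
    pot = STAKE * len(losers)
    payouts = {v: -STAKE for v in losers}
    payouts.update({v: pot / len(winners) for v in winners})
    return payouts

print(settle({"a": "human", "b": "human", "c": "synthetic"}))
# -> {'c': -10.0, 'a': 5.0, 'b': 5.0}
```

Majority voting is only a starting point; real designs need sybil resistance and protection against coordinated majorities, which is exactly where the reputation systems above come in.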
Intelligence orchestration needs multi-model coordination networks with trust guarantees, specialized inference networks for domain-specific applications, quality-assured AI service marketplaces, and human-AI collaboration platforms with preserved agency.
Second-Order Consequences: The World After Free Reasoning
We're not just building better tools – we're fundamentally altering the relationship between execution and meaning/value at a civilization scale. The implications cascade through every aspect of human society, creating new forms of scarcity and abundance that will define the next century.
The Attention Wars
When AI can generate infinite high-quality content at near-zero cost, human attention becomes the ultimate scarce resource. But this isn't just about capturing eyeballs – it's about proving that genuine human engagement occurred. We're entering an era where attention verification systems will need to cryptographically prove that a human actually read, watched, or interacted with content rather than an AI agent acting on their behalf – not only for the demand side of the marketplace but also for the supply side, as exploiting inefficiencies in pay-per-view or pay-per-click pricing is now relatively trivial.
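A minimal sketch of what such an attestation might look like: a device-bound key, assumed to have been provisioned to a verified human (the genuinely hard part), signs the content hash plus a fresh server nonce, and the platform checks the signature. This uses the `cryptography` package's Ed25519 primitives; the protocol itself is an assumption:

```python
import hashlib
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Device-bound key; provisioning it to a verified human is the hard part.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def attest(content: bytes, nonce: bytes) -> bytes:
    """Client side: sign (content hash || server nonce) to claim 'a human saw this'."""
    digest = hashlib.sha256(content).digest()
    return device_key.sign(digest + nonce)

def verify(content: bytes, nonce: bytes, sig: bytes) -> bool:
    """Platform side: check the attestation against the registered public key."""
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(sig, digest + nonce)
        return True
    except InvalidSignature:
        return False

nonce = os.urandom(16)  # fresh per view; prevents replaying old attestations
article = b"the post the human supposedly read"
assert verify(article, nonce, attest(article, nonce))
```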
Quality curation networks with economic incentives for truth become essential infrastructure. It’s paradoxical: as content creation becomes free, the ability to filter signal from noise becomes exponentially more valuable. Human-AI collaboration platforms that amplify rather than replace human judgment will emerge as the only sustainable approach to managing infinite content streams.
Cultural coherence preservation against infinite content noise isn't just about maintaining traditions – it's about preventing the dissolution of shared meaning-making systems that hold societies together. When anyone can generate convincing content about anything, the frameworks we use to distinguish truth from fiction, authentic from synthetic, meaningful from meaningless, become the new battleground.
The Trust Crisis
As AI-generated content becomes indistinguishable from human-created content, we face a fundamental epistemological crisis. Every piece of digital content – text, images, video, audio – requires provenance tracking systems that can cryptographically verify its origin and transformation history. This isn't just a technical challenge; it's a governance one.
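The technical half of provenance tracking can be sketched as a hash chain: each transformation commits to the previous entry, so the full history is auditable end to end. A stdlib-only sketch; real systems (e.g., C2PA-style manifests) add signatures, timestamps, and ledger anchoring:

```python
import hashlib
import json

def record_step(prev_hash: str, actor: str, action: str, content: bytes) -> dict:
    """Append one provenance entry; its hash commits to the entire history."""
    entry = {
        "prev": prev_hash,
        "actor": actor,            # e.g. "camera", "editor", "model:gpt-x"
        "action": action,          # e.g. "capture", "crop", "ai_upscale"
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        if i > 0 and entry["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = record_step("", "camera", "capture", b"raw pixels")
edited = record_step(genesis["hash"], "editor", "crop", b"cropped pixels")
assert verify_chain([genesis, edited])
```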
Human verification networks with cryptographic guarantees become essential infrastructure for maintaining social cohesion. Reputation systems spanning AI and human actors will need to evolve beyond simple rating mechanisms to sophisticated trust networks that can account for the complex interactions between human judgment and AI capability.
Trust marketplaces for verified human-AI interactions represent an entirely new category of infrastructure. These systems will need to balance privacy with verification, efficiency with security, and automation with human agency. The economic incentives must align to reward truth-telling and authentic engagement while penalizing deception and manipulation.
The Meaning Problem
Cultural risk isn't just about greywashing – it's about preserving human agency in a world where machines can simulate humanity with perfect fidelity. Cultural preservation protocols with economic incentives become necessary to maintain the diversity of human thought and expression that makes civilization resilient.
Human meaning-making frameworks that resist automation aren't about rejecting AI, but about preserving the essentially human capacity for finding significance in experience. Storytelling and narrative creation as core economic activities represent a return to one of humanity's oldest professions, but with new tools and new challenges.
Agency preservation systems for human decision-making become critical infrastructure. As AI systems become more capable of making decisions on our behalf, the systems that ensure humans retain meaningful choice in their lives become the ultimate safeguard against a future where efficiency trumps autonomy.
Conclusion: Building for Trust in the Age of Free Intelligence
With GPT-5 on the horizon, an increasing number of companies tasking teams with multi-agent systems, and the proliferation of Gemini throughout Google's products, the era of invisible copilots and the cognitive nudges they offer is upon us. The reasoning revolution, open-source democratization, and infrastructure scaling all point to the same conclusion – the bottleneck is no longer the ability to execute.
It could be trust, it could be taste, or something else entirely, but the throughlines are clear: Trust becomes scarce, making verification and understanding critical. Human-AI symbiosis wins over replacement, favoring expert networks over strong generalists. Physical constraints matter as digital abundance meets physical limits.
The new scarcity isn't compute, it's not even intelligence – it's trust, human agency, and cultural coherence. The crypto projects that matter will be the ones that solve for these new bottlenecks, not the ones trying to tokenize yesterday's problems.
The reasoning revolution is here. The question is: are we building the trust infrastructure fast enough to keep up?
Endnotes
As I finish this July update to my theses, I'm reminded that we're living through the most significant economic transition since the industrial revolution. The question isn't whether AI capabilities will improve – it's whether we're building the trust infrastructure fast enough to make that improvement net beneficial for humanity.
The Machine God isn't just getting smarter – it's getting smarter for everyone, everywhere, all at once. The question is: are we getting wiser about how to wield this new power?
Of course, if you’re building in any of the above, and the thesis is grounded in first principles reasoning, please reach out, let’s chat! [firstname at hack.vc]
Footnotes
[1] https://www.delloro.com/market-research/data-center-infrastructure/data-center-capex/
[2] https://techcrunch.com/2025/03/11/ftc-says-americans-lost-12-5b-to-scams-last-year-social-media-ai-and-crypto-didnt-help/
[3] https://www.researchgate.net/publication/378778605_Private_Tutoring_and_its_Effect_on_Students'_Learning_Performance
[4] https://www.scientificamerican.com/article/record-breaking-results-bring-fusion-power-closer-to-reality/
Disclaimer
The information herein is for general information purposes only and does not, and is not intended to, constitute investment advice and should not be used in the evaluation of any investment decision. Such information should not be relied upon for accounting, legal, tax, business, investment, or other relevant advice. You should consult your own advisers, including your own counsel, for accounting, legal, tax, business, investment, or other relevant advice, including with respect to anything discussed herein.
This post reflects the current opinions of the author(s) and is not made on behalf of Hack VC or its affiliates, including any funds managed by Hack VC, and does not necessarily reflect the opinions of Hack VC, its affiliates, including its general partner affiliates, or any other individuals associated with Hack VC. Certain information contained herein has been obtained from published sources and/or prepared by third parties and in certain cases has not been updated through the date hereof. While such sources are believed to be reliable, neither Hack VC, its affiliates, including its general partner affiliates, or any other individuals associated with Hack VC are making representations as to their accuracy or completeness, and they should not be relied on as such or be the basis for an accounting, legal, tax, business, investment, or other decision. The information herein does not purport to be complete and is subject to change and Hack VC does not have any obligation to update such information or make any notification if such information becomes inaccurate.
Past performance is not necessarily indicative of future results. Any forward-looking statements made herein are based on certain assumptions and analyses made by the author(s) in light of their experience and perception of historical trends, current conditions, and expected future developments, as well as other factors they believe are appropriate under the circumstances. Such statements are not guarantees of future performance and are subject to certain risks, uncertainties, and assumptions that are difficult to predict.