Software reliability has improved dramatically thanks to a popular trend called shifting left. By pushing testing, security checks, and quality assurance earlier in the development cycle, teams catch risks before they become costly.
AI, however, seems to be doing the opposite. Large language models are increasingly being applied not to well-defined, measurable problems—but to fuzzier, harder-to-quantify domains where success is ambiguous and failure is slow to surface. In other words: instead of shifting left, AI is shifting right.
AI Eats the (Early-Stage Funding) World
The release of ChatGPT in late 2022 reshaped venture capital overnight. We can see this by inspecting the portfolio of Y Combinator (YC) startups. I collected YC company descriptions across cohorts and used the Sturdy Statistics API to automatically structure the data.
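The structuring itself runs through the Sturdy Statistics API (see the footnotes), so I won't reproduce that pipeline here. As a rough, simplified stand-in, the sketch below fits a flat LDA topic model with scikit-learn to a few placeholder descriptions; it illustrates the general idea of discovering themes from text, not the hierarchical mixture model the actual analysis uses.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder input: one entry per YC company description.
descriptions = [
    "AI copilot that automates invoice processing for finance teams",
    "Telehealth platform connecting patients with specialists",
    "Developer tool that generates unit tests from pull requests",
]

# Bag-of-words representation of the descriptions.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(descriptions)

# A flat LDA stands in for the hierarchical mixture model used in the post.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # rows: documents, columns: topic weights

# Show the top words for each discovered theme.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = terms[weights.argsort()[::-1][:6]]
    print(f"topic {k}: {', '.join(top_words)}")
```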
The hierarchical “sunburst plot” below shows the data from 2020–2023. The inner ring contains high-level themes, which branch into more granular themes in the outer ring. Clicking on a high-level theme expands it, making the second-level themes easier to read.
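The chart itself comes from the interactive dashboard. If you want to build a similar sunburst from your own theme/subtheme counts, a minimal plotly sketch looks like this; the rows below are invented placeholders, not the YC data.

```python
import pandas as pd
import plotly.express as px

# Placeholder counts for illustration; not the actual YC topic weights.
df = pd.DataFrame({
    "theme":    ["Finance", "Finance", "Healthcare", "Healthcare", "Data Analysis"],
    "subtheme": ["Payments", "Risk", "Telehealth", "Diagnostics", "Business Intelligence"],
    "count":    [12, 8, 9, 6, 15],
})

# Inner ring: high-level themes; outer ring: granular subthemes.
fig = px.sunburst(df, path=["theme", "subtheme"], values="count")
fig.show()
```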
Before ChatGPT, we can see that YC investments spanned a wide range: finance, healthcare, logistics, supply chain, and general business tools. And it’s not all software: Space Technologies and Autonomous Vehicles also appear in the portfolio. The AI-related fields Data Analysis and Automation are on the list, but only as two categories among many.
After 2023, the landscape changed completely. The following figure shows the themes in YC company descriptions since 2023:
We can see that virtually all prominent categories are now AI-related: we have automation, data analysis, and developer tools… The diversification of the early 2020s vanished. AI really did eat the early-stage software world.
(If you like, you can explore the data yourself in the interactive dashboard.)
Waves of Investment
Thus far, we’ve looked at aggregate data before and after the release of ChatGPT. If we zoom in and inspect the year-to-year shifts in investment, a surprising twist in the story emerges.
Take a look at this slope plot, which shows the change in rank for topics between 2023 and 2024. Each side of the plot lists topics by rank, and a line connects their 2023 position to their 2024 position; topics which decreased in prominence have negative slopes, shown in red. Topics which surged in prominence have positive slopes, shown in blue.
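For readers who want to recreate this kind of chart, here is a minimal matplotlib sketch; the topic ranks are invented placeholders rather than the values plotted in the figure.

```python
import matplotlib.pyplot as plt

# Placeholder ranks for illustration; not the values shown in the figure.
ranks_2023 = {"Finance": 1, "Logistics": 2, "Sales": 3, "Telehealth": 6}
ranks_2024 = {"Finance": 5, "Logistics": 7, "Sales": 6, "Telehealth": 1}

fig, ax = plt.subplots(figsize=(4, 6))
for topic, r23 in ranks_2023.items():
    r24 = ranks_2024[topic]
    # A lower rank number means greater prominence, so a drop in rank number is a surge.
    color = "tab:blue" if r24 < r23 else "tab:red"
    ax.plot([0, 1], [r23, r24], color=color, marker="o")
    ax.annotate(topic, (0, r23), xytext=(-8, 0), textcoords="offset points",
                ha="right", va="center")
    ax.annotate(topic, (1, r24), xytext=(8, 0), textcoords="offset points",
                ha="left", va="center")

ax.invert_yaxis()                 # rank 1 at the top
ax.set_xticks([0, 1])
ax.set_xticklabels(["2023", "2024"])
ax.set_xlim(-0.5, 1.5)
ax.set_yticks([])
plt.show()
```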
The First Wave
In 2023, the first wave of AI startups targeted high-leverage analytical fields: finance, insurance, logistics, and sales. These sectors prize marginal advantages, are quick to adopt new tools, and have historically been at the cutting edge of machine learning adoption.
This investment mix makes perfect sense, given the initial promises of AI.
The Second Wave
In 2024, however, many of these analytical use cases disappeared from the YC portfolio: we find them in the bottom-right corner of the slope plot. In their place, we see fuzzier domains such as telehealth, manufacturing problem detection, and therapeutic bioengineering.
Is it possible that the 2023 cohort was so successful, and became so dominant within analytical fields, that entrepreneurs needed to look elsewhere for opportunity? Or instead has investment moved to fields where success is harder to measure—potentially making them more forgiving of AI’s flaws?
The Third Wave
Let’s take a look at this slope plot to see how things changed from 2024 to 2025:
In 2025, even the fuzzier domains give way. The newest batch instead emphasizes meta-level solutions: context management, simulation environments, data access control, and AI security. In other words, we see AI startups servicing other AI startups. There are also technical moonshots, like AI-accelerated clinical trials, but these will take years to evaluate.
The pattern is striking: it looks like the more data-driven the field, the faster it rejects wholesale LLM automation. Fields with clear success metrics and immediate feedback (such as financial analysis, logistics optimization, or risk management) saw rapid drop-offs once practitioners realized the tools weren’t reliable enough to perform meaningful data-driven analysis. Since then, each successive wave of investment has retreated into domains with vaguer outcomes.
Parting Thoughts
The data show a clear trajectory: LLM applications are shifting right, away from analytical fields with hard metrics, to fuzzier domains with ambiguous outcomes, to meta-level tools and moonshots whose impact may only be measurable years into the future.
To be clear, there are valuable near-term uses for LLMs: translation, copyediting, summarizing content, presenting structured data in narrative or conversational form, and writing certain types of code all come to mind.
But the trillion-dollar bets being placed on full-scale automation across the economy risk overlooking a sobering truth: the more measurable the field, the faster it has already rejected wholesale LLM automation.
My concern is that this recognition will eventually spread to fuzzier fields too—just on a longer timescale. If so, the scale of investment by that time, along with the attendant costs of failure, could be enormous.
Footnotes
[1] This data was organized by leveraging hierarchical mixture models.
[2] The slope plots are accessible here.
[3] The annotated dataset is publicly available at index_c50d731139f74898b4c004937ae77a83; it is most easily accessed using sturdy-stats-sdk. Additional documentation is available.