Most people are starting to ask themselves a single question: will I lose my job if AI is able to do it better than me? But after months of research, I can say that it seems to be the wrong question. Or rather, it’s one-fifth of the right question. And it’s not even the most important fifth.
I often chat with people who’ve convinced themselves they’re either safe or doomed based entirely on this one dimension. A radiologist sees that AI can read medical images and panics. An electrician knows that AI can’t fish wires through a basement crawl space and relaxes. Both are making the same mistake: judging their future on a single factor.
What actually determines whether AI displaces you at work comes down to five specific, easy-to-understand factors. Two of them push toward job loss or displacement. Three push against it. And the three that push against it are actually quite a bit more interesting, partly because they’re less obvious, and partly because they’re the ones you can actually do something about.
The first variable is the one everyone focuses on: technical exposure. Can AI do your actual daily tasks? Not your job title. Your tasks. A “financial analyst” might spend 70% of their time on work an LLM can handle and 30% on judgment calls that require a human in the room. That 70% is real exposure, and ignoring it would be foolish. But treating it as the whole story is equally foolish, and that’s what most analysis does.
The second variable is adoption speed: how fast companies will actually deploy AI into your specific workflow. This is where people consistently get the timeline wrong. A tech startup can plug in a new API over a weekend. A hospital system has to navigate FDA clearance, liability questions, union contracts, IT integration across dozens of legacy systems, and probably a two-year pilot program. A hospital receptionist and a startup’s customer support agent might have identical technical exposure, but one will feel the effects years before the other. We track real-time signals of how fast firms are actually moving, and the variation is enormous.
And here’s something most analysis misses entirely: even within fast-moving companies, adoption is uneven. At firms that rolled out GitHub Copilot, roughly half of the developers never used it. Think about that. The tool is right there, free, integrated into their editor. And half of them ignore it. The speed at which a company buys a tool and the speed at which its workers actually adopt it are two very different things.
So those are the two pressure variables. If you stopped here, you’d conclude that any job with high exposure in a fast-adopting industry is toast. And that’s exactly the conclusion most AI-and-jobs commentary reaches. But it’s wrong, because it ignores three forces pushing in the other direction. These are the interesting ones.
The first buffer is worker adaptability. When your role changes, how well positioned are you to move into a new one? This sounds soft, but it turns out to depend on surprisingly measurable things: how transferable your skills are across occupations, whether you have savings to weather a transition, whether there are other employers nearby. And, uncomfortably, your age. Not because older workers can’t learn, but because the economics of retraining look different at 55 than at 25.
Manning and Aguirre built a composite index of these factors, and the variation is striking. Software developers score very high. Their skills transfer widely, they have financial cushions, employers are everywhere. Food preparation workers score low. Not because they’re less capable, but because their skills are narrow, their savings are thin, and their geographic options are limited. The system creates the vulnerability, not the individual. (Our productivity data shows that workers who can adapt are already seeing real gains.)
The second buffer is the one I find most underrated, maybe because it requires you to think one step beyond the obvious. It’s demand elasticity: when AI makes the output of an occupation cheaper, do people buy more of it?
Consider what happened with ATMs. When they were introduced in the late 1960s, everyone assumed bank tellers were finished. The opposite happened. ATMs made it cheaper to operate a branch, so banks opened more branches, and the total number of tellers went up for roughly three decades. The cost per unit of banking fell, demand for banking rose, and that demand increase more than offset the automation. (Teller employment did eventually decline once online banking eliminated the need for branches altogether. The buffer was powerful, but it wasn’t permanent.)
The same dynamic played out with textile looms, accounting software, and electronic trading. So the question for any occupation is: when AI makes this cheaper, will people want more of it? Legal research is a good candidate. Most people who need legal help can’t afford it, so making it cheaper could dramatically expand the market. Payroll processing is not. Companies have exactly as many paychecks to process as they have employees, no matter how cheap it gets. (We write more about this on our demand elasticity explainer.)
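The arithmetic behind the ATM story is worth making explicit. Under two simplifying assumptions that are mine, not the article’s methodology (constant-elasticity demand, and prices that fall in proportion to labor per unit), a one-line formula tells you whether automation grows or shrinks total employment in an occupation:

```python
def employment_ratio(labor_share_cut: float, elasticity: float) -> float:
    """Toy model: employment after automation / employment before.

    labor_share_cut: fraction of labor still needed per unit of output
        (0.5 means AI halves the labor per unit).
    elasticity: price elasticity of demand (a positive number).

    Assumes price falls in proportion to unit labor (P_new = f * P_old)
    and constant-elasticity demand (Q proportional to P**(-elasticity)).
    Then L = f * Q, so L_new / L_old = f**(1 - elasticity).
    """
    return labor_share_cut ** (1 - elasticity)

# Elastic demand (the legal-research case): halving labor per unit
# GROWS employment, because the market expands more than enough.
print(employment_ratio(0.5, 1.5))  # ~1.41, i.e. ~41% more jobs

# Inelastic demand (the payroll-processing case): the same automation
# SHRINKS employment, because demand barely moves.
print(employment_ratio(0.5, 0.2))  # ~0.57, i.e. ~43% fewer jobs
```

The crossover is exactly elasticity = 1: above it, cheaper output means more total work; below it, automation is a pure headcount cut. That is the legal-research versus payroll-processing distinction in one number.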
The third buffer is the one that flips the entire narrative for some occupations. It’s the question of whether AI replaces you or makes you better.
Think about the difference between a call center agent and a management consultant. A call center agent handles mostly routine conversations about returns or billing issues from start to finish. An AI chatbot can often be a direct substitute for that whole workflow. A management consultant uses judgment, builds relationships, synthesizes messy inputs from many stakeholders. AI makes them faster at the analysis parts, but the human is still driving the engagement. One is a replacement pattern. The other is augmentation. You can see this distinction clearly in our customer service automation forecast versus the high-skill wage premium, which is rising precisely because AI complements those workers.
A large-scale CFO survey asked executives directly: for each role, is AI primarily enhancing your workers or replacing them? The pattern is clear. Jobs heavy on interpersonal interaction and physical presence tend to be augmented. Jobs that are mostly information processing tend toward replacement.
So: two forces push toward displacement, three push against it. The net risk score is the balance between them, on a 1–10 scale. And this is where the picture gets genuinely surprising.
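To make the balancing concrete, here is a deliberately toy sketch of how two pressure factors and three buffers could combine into a 1–10 score. The equal weights and the multiplicative discount are my illustrative assumptions, not the actual scoring methodology:

```python
def net_risk_score(exposure: float, adoption_speed: float,
                   adaptability: float, demand_elasticity: float,
                   augmentation: float) -> float:
    """Illustrative only, not the article's real formula.

    All five inputs are on a 0-1 scale. The two pressure factors
    push risk up; the three buffers discount that pressure. The
    result is mapped onto the 1-10 scale the article uses.
    """
    pressure = (exposure + adoption_speed) / 2
    buffer = (adaptability + demand_elasticity + augmentation) / 3
    raw = pressure * (1 - buffer)  # strong buffers shrink net risk
    return round(1 + 9 * raw, 1)

# Radiologist-like profile: very high exposure, glacial adoption,
# transferable skills, strong complementarity.
print(net_risk_score(0.9, 0.2, 0.8, 0.5, 0.9))  # -> 2.3

# Moderate exposure but every buffer weak: higher net risk.
print(net_risk_score(0.7, 0.9, 0.2, 0.1, 0.1))  # -> 7.2
```

The point of the sketch is the shape, not the numbers: exposure alone can be high while net risk stays low, and the reverse, which is exactly the radiologist result discussed next.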
Take radiologists. Their technical exposure is very high. Image analysis is one of AI’s strongest capabilities. If you only looked at that number, you’d assume they’re in serious trouble. But adoption in hospitals is glacially slow. Their specialized skills transfer across medicine. And AI makes radiologists faster at reading scans without replacing the diagnosis itself. Strong complementarity. Their net risk is much lower than their exposure alone would suggest.
Conversely, some jobs with moderate exposure end up with higher net risk because every buffer is weak. Companies in their sector adopt fast, workers have limited transferable skills, demand is fixed, and AI is a direct substitute.
An important caveat: as of early 2026, aggregate labor data from Yale and the Dallas Fed show no detectable AI-driven displacement or job loss. These are not forward-looking projections; they are observations of what has already happened, and so far that is essentially nothing. The displacement everyone is worried about hasn’t shown up in the data yet. That doesn’t mean it won’t. But it’s worth keeping in mind when the headlines get loud.
The chart above lets you explore these dynamics across 342 occupations. Click any block to see all five scores and how they combine. For the aggregate picture, what happens when you add up all 342 occupations, see our overall US displacement forecast. And if you’re wondering why the short-term picture might look worse before it gets better, the J-Curve explainer covers that.
There’s one more thing this framework can’t capture, and it might be the most important thing of all. These five variables score the risks to existing jobs. But technology doesn’t just destroy tasks. It creates new ones. Roughly 60% of workers today are employed in occupations that didn’t exist in 1940. That’s not a minor footnote. It has been, historically, the dominant response to technological change. Not shuffling people between existing jobs, but the emergence of entirely new work that nobody predicted. There’s no reason to think AI will be different.
None of this is destiny. Read that again. Every one of these five variables can be changed. By policy, by companies, by individuals. But you can’t change what you don’t understand. And understanding means asking five questions instead of one.
For the full scoring methodology, including data sources, weights, and academic citations for each variable, see the methodology section below. To explore how AI affects the specific tasks in your job, try the task-level visualizer. Or browse all 300+ sources behind this analysis.