About This Project

Society is trying to figure out what AI means for work — and the answers keep changing.

For years I’ve wanted a single place that synthesizes what we actually know about AI’s impact on economic opportunity — not the hype, not the doom, but the evidence. This dashboard makes that process visible by tracking how predictions about displacement, wages, adoption, and corporate behavior evolve as new research, data, and real-world evidence emerge. The goal is to help the people who need it most — leaders in workforce development, education, philanthropy, and policy — have a more thoughtful, evidence-grounded response to what’s coming.

It’s a weekend project, built in the open, and very much a work in progress.

If you have ideas on how to make it better, I’d love to hear from you: LinkedIn / X

Who is behind this?

Matt Zieger is building this as a personal project, both to learn and to help others navigate this new and uncertain world. While this is not formally affiliated with his day job, he's the Chief Program & Partnership Officer at the GitLab Foundation, where he leads the AI for Economic Opportunity Fund and is the co-founder and chair of OpportunityAI.

Methodology & Sources

How this dashboard collects, classifies, and scores the research behind every prediction

Society is trying to figure out what AI means for work — and the answers keep changing. This dashboard makes that process visible by tracking how predictions about displacement, wages, adoption, and corporate behavior evolve as new research, data, and real-world evidence emerge. Every prediction is backed by individually cited sources across four evidence tiers, from peer-reviewed RCTs to earnings call mentions. You'll see the estimates shift, the ranges widen or narrow, and the consensus form (or fracture) over time. This isn't a forecast — it's a live map of what we know, what we don't, and where the evidence is pointing.

How We Calculate the Headline Number

The large number shown on every tile and prediction page is a weighted average of all historical data points from the selected evidence tiers. It is not a single study's finding or a simple arithmetic mean — it accounts for both the quality of each source and how recently it was published.

Evidence Tier Weighting

Each data point receives a base weight determined by its evidence tier. Higher-quality sources have proportionally more influence on the headline number.

Tier 1: 4× weight
Tier 2: 2× weight
Tier 3: 1× weight
Tier 4: 0.5× weight

A single Tier 1 peer-reviewed study carries eight times the influence of a Tier 4 social media post. This ensures that the headline number is anchored to the strongest available evidence.

Recency Bias

Within each tier, more recent data points are weighted more heavily. The recency multiplier scales linearly from 1.0× for the oldest data point to 2.0× for the newest. This means the most recent source of any tier has double the recency weight of the oldest, reflecting the fast-moving nature of AI research, where newer studies incorporate better data and methods. If all data points share the same date, no recency adjustment is applied.
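The linear recency scaling described above can be sketched in a few lines. This is an illustration only; the function and variable names are hypothetical, not taken from the dashboard's codebase:

```python
from datetime import date

def recency_weights(dates: list[date]) -> list[float]:
    """Scale weights linearly from 1.0 (oldest) to 2.0 (newest).

    If all data points share the same date, no recency adjustment
    is applied (every weight stays at 1.0).
    """
    oldest, newest = min(dates), max(dates)
    span = (newest - oldest).days
    if span == 0:
        return [1.0] * len(dates)
    return [1.0 + (d - oldest).days / span for d in dates]
```

For example, with data points dated one year apart, the oldest receives exactly 1.0 and the newest exactly 2.0, with everything in between interpolated by publication date.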

Combined Formula

Each data point's total weight is:
weight = tierWeight × recencyWeight

The headline number is then the weighted mean:
mean = ∑(value × weight) / ∑(weight), rounded to one decimal place.

For example, a Tier 1 study published recently (weight = 4 × 2.0 = 8.0) will have 16× the influence of an older Tier 4 post (weight = 0.5 × 1.0 = 0.5).
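Putting the tier weights and recency multiplier together, the headline calculation can be sketched as follows. This is an illustrative reimplementation of the formula above, not the dashboard's actual code; all names are hypothetical:

```python
from datetime import date

TIER_WEIGHTS = {1: 4.0, 2: 2.0, 3: 1.0, 4: 0.5}

def headline_number(points: list[tuple[float, int, date]]) -> float:
    """Weighted mean of (value, tier, published) data points.

    Each point's total weight is tierWeight x recencyWeight, where
    the recency multiplier scales linearly from 1.0 (oldest) to
    2.0 (newest). Result is rounded to one decimal place.
    """
    dates = [d for _, _, d in points]
    oldest, newest = min(dates), max(dates)
    span = (newest - oldest).days

    def recency(d: date) -> float:
        return 1.0 if span == 0 else 1.0 + (d - oldest).days / span

    weights = [TIER_WEIGHTS[tier] * recency(d) for _, tier, d in points]
    total = sum(v * w for (v, _, _), w in zip(points, weights))
    return round(total / sum(weights), 1)
```

With a recent Tier 1 point (weight 4 × 2.0 = 8.0) and an old Tier 4 point (weight 0.5 × 1.0 = 0.5), the Tier 1 value dominates the mean by the 16× ratio described above.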

Source Validity & Filtering

Only data points from your currently selected evidence tiers are included in the calculation. Toggling tiers on or off immediately recalculates the headline number. This lets you see how the estimate changes when restricted to, say, only Tier 1 peer-reviewed research vs. the full range of sources. When no data points remain after filtering, the number falls back to the prediction's baseline value.
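The filter-and-fallback behavior can be sketched as below. For brevity this sketch uses a plain mean where the dashboard uses the tier/recency-weighted mean, and all names are hypothetical:

```python
def filtered_headline(points: list[dict], selected_tiers: set[int],
                      baseline: float) -> float:
    """Recompute the headline using only the selected evidence tiers.

    Falls back to the prediction's baseline value when no data
    points survive the filter.
    """
    kept = [p for p in points if p["tier"] in selected_tiers]
    if not kept:
        return baseline
    # Plain mean stands in here for the weighted mean described above.
    return round(sum(p["value"] for p in kept) / len(kept), 1)
```

Toggling a tier off simply shrinks `selected_tiers`, so the recalculation is immediate.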

Trend Arrow

The trend arrow (▲/▼) compares the chronologically first and last data points across selected tiers. A red arrow indicates a “bad” direction — rising displacement or falling wages — while a green arrow indicates a “good” direction. The arrow reflects the raw trend in the data, not the weighted average.
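The arrow logic amounts to a first-versus-last comparison plus a direction flag per metric. A minimal sketch, with hypothetical names and ISO-format date strings standing in for real timestamps:

```python
def trend_arrow(points: list[dict], higher_is_bad: bool) -> str:
    """Compare the chronologically first and last data points.

    Returns a red arrow when the raw trend moves in the "bad"
    direction for this metric (e.g. rising displacement, falling
    wages) and a green arrow otherwise. The weighted average
    plays no part in this comparison.
    """
    ordered = sorted(points, key=lambda p: p["date"])
    first, last = ordered[0]["value"], ordered[-1]["value"]
    rising = last > first
    arrow = "▲" if rising else "▼"
    bad = rising == higher_is_bad
    return f"{arrow} ({'red' if bad else 'green'})"
```

A rising displacement series (`higher_is_bad=True`) gets a red up-arrow; the same rise in a wage series gets a green one.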

Evidence Tier Framework

Not all evidence is created equal. Every source is classified into one of four tiers so readers can immediately judge the strength of the evidence behind any claim. The tier filter on the dashboard lets you strip away lower-quality sources to see how the picture changes.

Tier 1 — Verified Data & Research

The highest-confidence evidence. These sources have undergone peer review, carry legal liability for accuracy, or are produced by government statistical agencies with established methodologies. The majority of the prediction data on this dashboard comes from Tier 1 sources.

Randomized controlled trials (RCTs) — Experimental studies where workers or firms are randomly assigned to use AI tools vs. a control group. These provide the cleanest causal estimates of productivity and displacement effects. Key examples in this dashboard include the Noy & Zhang MIT writing experiment and the Brynjolfsson et al. customer service study.

Peer-reviewed journal articles and NBER working papers — Published in top economics and computer science journals (American Economic Review, Quarterly Journal of Economics, PNAS, Science). NBER working papers undergo internal review and are widely treated as near-publication quality in economics. The research pipeline automatically discovers new NBER papers via their Labor Studies and Productivity RSS feeds.

Government statistical data — Bureau of Labor Statistics employment and wage series, Census Bureau surveys (including the Business Trends and Impact Survey), OECD labor force statistics, Federal Reserve economic data (including Dallas Fed CPS analysis), and Yale Budget Lab research. These provide the baseline macro indicators against which AI-specific effects are measured.

SEC filings and earnings transcripts — 10-K annual reports, 10-Q quarterly reports, and 8-K current reports filed with the Securities and Exchange Commission. The pipeline automatically searches SEC EDGAR for AI and workforce language across all public filings. Companies face legal liability for material misstatements, making these disclosures about AI investment, workforce restructuring, and productivity gains unusually reliable.

Tier 2 — Institutional Analysis

Expert analysis from credible institutions. These sources apply rigorous methodology but may not have undergone full peer review, or they synthesize existing research rather than producing new empirical data.

Think tank and policy research — Brookings Institution (automatically tracked via RSS), McKinsey Global Institute, Goldman Sachs Research, and similar organizations that employ PhD-level researchers and publish with editorial oversight. Their work often bridges academic research and policy audiences.

International organization research — IMF staff discussion notes and working papers (automatically tracked via OpenAlex), ILO Global Employment Trends, World Bank development reports, and OECD Employment Outlook. The pipeline filters IMF and IZA publications specifically for AI-labor relevance.

Working papers and preprints — Papers on arXiv, SSRN, and the IZA Discussion Paper Series (all automatically tracked). The AI-labor field moves fast enough that waiting for journal publication would mean ignoring 6–18 months of current work. These are included but flagged as pre-review.

Industry and consulting research — McKinsey, Deloitte, PwC, J.P. Morgan, and similar firms that survey enterprise AI adoption at scale. These appear as curated sources in specific predictions rather than being programmatically discovered. They offer practitioner perspectives but may carry selection bias toward firms already investing in AI.

Job posting data — Aggregate posting volumes from the Adzuna API, supplemented with published research from Indeed Hiring Lab, Lightcast, and LinkedIn Economic Graph. Year-over-year changes in posting volume across AI-exposed occupations serve as an early signal of labor demand shifts.

Tier 3 — Journalism & Commentary

Professional reporting and informed analysis. The scrolling news ticker at the top of the dashboard automatically pulls AI-labor headlines from Google News RSS feeds, classified by sentiment. A smaller number of news articles also appear as curated sources within specific predictions.

Major news outlets — Wall Street Journal, Financial Times, Fortune, CNBC, Forbes, and The Atlantic. These provide real-time coverage of layoffs, hiring freezes, AI deployment announcements, and policy developments that often precede formal data by months. Cited in predictions when they report original data or on-the-record corporate disclosures.

News ticker — The dashboard's live ticker aggregates from four Google News RSS keyword feeds (AI jobs, AI layoffs, AI hiring, AI employment), deduplicates headlines, and classifies each as displacement, advancement, or neutral using ~80 curated sentiment terms. This refreshes hourly and provides context for how the topic is being covered, but ticker headlines are not used as evidence for predictions.
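The deduplicate-and-classify step can be sketched as follows. The term lists here are a tiny hypothetical stand-in for the ~80 curated terms, and the function names are illustrative, not from the dashboard's code:

```python
# Hypothetical, heavily abbreviated stand-ins for the curated term lists.
DISPLACEMENT_TERMS = {"layoff", "job cuts", "replaced by ai"}
ADVANCEMENT_TERMS = {"hiring", "new roles", "upskilling", "wage gains"}

def classify_headline(headline: str) -> str:
    """Keyword-based sentiment: displacement, advancement, or neutral."""
    text = headline.lower()
    displacement = sum(term in text for term in DISPLACEMENT_TERMS)
    advancement = sum(term in text for term in ADVANCEMENT_TERMS)
    if displacement > advancement:
        return "displacement"
    if advancement > displacement:
        return "advancement"
    return "neutral"

def dedupe(headlines: list[str]) -> list[str]:
    """Drop repeated headlines, preserving first-seen order."""
    seen: set[str] = set()
    out = []
    for h in headlines:
        key = h.strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(h)
    return out
```

Ties between the two term counts fall through to neutral, which keeps ambiguous headlines out of both buckets.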

Tier 4 — Informal & Social

Unvetted, anecdotal, and crowd-sourced signals. These are used sparingly — only a handful of Tier 4 sources currently appear in the prediction data. They are included when they capture ground-level workforce sentiment that hasn't yet surfaced in formal data, but they carry significant noise and selection bias.

Social media and blogs — Occasional Twitter/X threads, Substack essays, and Medium posts from researchers or practitioners. Currently a very small portion of total sources. Included only when the author has relevant domain expertise or when the analysis cites primary data that can be independently verified.

Weighted lowest in scoring — Tier 4 sources receive the lowest composite score weight, meaning they appear at the bottom of ranked lists and can be filtered out entirely using the evidence tier controls. The dashboard is designed to work without them — filtering to Tiers 1–2 still covers the vast majority of the evidence base.

Research Discovery Pipeline

The Research Feed is powered by an automated pipeline that queries 11 academic and institutional sources, deduplicates results, links papers to specific predictions, and ranks them by a composite score. New papers are discovered weekly and scored on tier, relevance, citation velocity, and author significance.
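Because the same paper often appears in several of these sources (e.g. a preprint indexed by both arXiv and Semantic Scholar), deduplication is the first step after collection. The exact matching strategy is an assumption here; this sketch merges on DOI where available, falling back to a normalized title:

```python
def dedupe_papers(papers: list[dict]) -> list[dict]:
    """Merge results from multiple sources into one unique list.

    Papers are treated as duplicates when they share a DOI or,
    lacking one, a normalized title. First occurrence wins.
    """
    seen: set[str] = set()
    unique = []
    for p in papers:
        key = p.get("doi") or p["title"].lower().strip()
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique
```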

Semantic Scholar

AI-powered academic search engine covering 200M+ papers across all disciplines. Primary source for citation counts and full metadata.

OpenAlex

Open catalog of the world's research, used for concept-based discovery, institution filtering (IMF, IZA), and author tracking.

arXiv

Preprint server for economics, CS, and quantitative research. Captures cutting-edge work before journal publication.

Scopus (Elsevier)

Largest abstract and citation database for peer-reviewed literature. Provides high-quality venue and citation metadata.

NBER

National Bureau of Economic Research working papers. Tier 1 source for labor economics and macroeconomic research.

Brookings Institution

Policy-oriented research on labor markets, technology, and AI. Tracked via their labor-markets, technology, and AI RSS feeds.

IMF Working Papers

International Monetary Fund research on AI and labor policy, accessed via OpenAlex institutional filtering.

IZA Discussion Papers

Institute of Labor Economics (Bonn). One of the largest labor economics working paper series in the world.

CORE

Aggregator of open access research from 10,000+ repositories. Catches papers that keyword-in-title searches miss.

SEC EDGAR

U.S. Securities and Exchange Commission filings. Tracked for AI mentions in 10-K reports and earnings call transcripts.

Job Postings (Adzuna)

Aggregate job posting data used for tracking real-time labor demand shifts in AI-exposed occupations.

News & Headlines

The scrolling news ticker at the top of the dashboard pulls from four Google News RSS feeds covering AI jobs, layoffs, hiring, and employment. Headlines from the past 7 days are deduplicated and classified by sentiment — displacement, advancement, or neutral — using keyword-based scoring against ~80 curated terms. This provides real-time context for how AI-labor topics are being covered in mainstream and trade media. Headlines refresh hourly.

Corporate & Labor Market Signals

Beyond academic research, the dashboard tracks two real-world data streams that provide leading indicators of how AI is actually affecting employers and workers.

SEC EDGAR Filings

Full-text search of 10-K, 10-Q, and 8-K filings for AI and workforce language. When public companies disclose workforce restructuring, automation plans, or AI-driven productivity gains in their regulatory filings, those disclosures appear here. Classified as Tier 1 evidence because SEC filings carry legal liability for accuracy.

Job Posting Data

Aggregate job posting volumes tracked across AI-exposed occupation categories — including AI/ML roles, customer service, data entry, AI-augmented engineering, and prompt engineering. Sourced from the Adzuna API and supplemented with published data from Indeed Hiring Lab, Lightcast, and LinkedIn Economic Graph. Year-over-year changes in posting volume serve as an early signal of labor demand shifts.
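The year-over-year signal described above is a simple percentage change computed per occupation category. A minimal sketch, with hypothetical names and made-up volumes:

```python
def yoy_changes(volumes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Year-over-year % change in posting volume per category.

    `volumes` maps category -> (postings_now, postings_year_ago).
    """
    return {
        cat: round((now - prior) / prior * 100, 1)
        for cat, (now, prior) in volumes.items()
    }
```

A negative value signals contracting demand (e.g. data entry), a positive one expanding demand (e.g. AI/ML roles).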

Tracked Researchers

The pipeline monitors 18 leading researchers in AI economics and labor markets. When any of these authors publish new work, their papers are automatically surfaced regardless of keyword match. These researchers were selected for their sustained contributions to the empirical study of AI's workforce effects.

Erik Brynjolfsson · Stanford
Daron Acemoglu · MIT
David Autor · MIT
Tyna Eloundou · OpenAI
Daniel Rock · Wharton
Shakked Noy · MIT
Whitney Zhang · MIT
R. Maria del Rio-Chanona · UCL / ILO
Andrea Filippetti · CNR Italy
Anna Salomons · Utrecht
Pascual Restrepo · Boston University
Carl Benedikt Frey · Oxford
Michael Osborne · Oxford
Ed Felten · Princeton
Manav Raj · Wharton
Lindsey Raymond · MIT
Bharat Chandar · Stanford
Molly Kinder · Brookings

Scoring & Ranking

Papers in the weekly digest are ranked by a composite score that balances evidence quality, topical relevance, scholarly impact, and researcher significance.

Evidence Tier

Tier 1 papers receive the highest weight, scaling down through Tier 4. This ensures peer-reviewed research and government data dominate the rankings.

Keyword Relevance

Papers are scored on how many prediction-relevant keywords appear in the title and abstract. Higher specificity to AI-labor topics means a higher score.

Citation Velocity

Citations per year since publication, capped to prevent older seminal papers from dominating. Rewards work gaining traction in the field.

Tracked Author

Papers from the 18 monitored researchers receive a significant bonus, surfacing new work from leading experts regardless of keyword match.
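The four factors combine into a single composite score. The actual coefficients are not published on this page, so every number below is an illustrative placeholder, not the dashboard's real weighting:

```python
TIER_SCORE = {1: 4.0, 2: 2.0, 3: 1.0, 4: 0.5}
VELOCITY_CAP = 50.0          # placeholder cap on citations/year
TRACKED_AUTHOR_BONUS = 10.0  # placeholder bonus

def composite_score(paper: dict) -> float:
    """Rank a paper by evidence tier, keyword relevance,
    capped citation velocity, and tracked-author bonus.

    All coefficients are illustrative placeholders.
    """
    tier = TIER_SCORE[paper["tier"]]
    relevance = paper["keyword_hits"]  # matched terms in title/abstract
    years = max(paper["years_since_pub"], 1)
    velocity = min(paper["citations"] / years, VELOCITY_CAP)  # capped
    bonus = TRACKED_AUTHOR_BONUS if paper["tracked_author"] else 0.0
    return tier * 10 + relevance * 2 + velocity + bonus
```

The cap keeps an older seminal paper with thousands of citations from permanently outranking fresh, highly relevant Tier 1 work.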

Prediction Linking

Each paper is automatically analyzed against 15 prediction categories using keyword matching on the title and abstract. Categories span displacement (overall, white collar, tech, creative, education, healthcare admin, customer service, total jobs lost), wage impacts (median, geographic, entry-level, high-skill premium, freelancer rates), AI adoption rates, and corporate signals (earnings call mentions). When a paper matches one or more predictions, it appears as supporting evidence on that prediction's detail page.
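The linking step can be sketched as keyword lookups against per-category term sets. The two categories and their keyword sets below are hypothetical examples, not the dashboard's actual 15-category taxonomy:

```python
# Hypothetical keyword sets for two illustrative prediction categories.
PREDICTION_KEYWORDS = {
    "wage_median": {"wage", "wages", "earnings", "pay"},
    "displacement_customer_service": {
        "customer service", "call center", "support agent",
    },
}

def link_predictions(title: str, abstract: str) -> list[str]:
    """Match a paper to prediction categories via title/abstract keywords."""
    text = f"{title} {abstract}".lower()
    return [
        category
        for category, keywords in PREDICTION_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]
```

A paper can match several categories at once, in which case it appears as supporting evidence on each matching prediction's detail page.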

Limitations & Caveats

This is not a forecast model. Predictions reflect the range of estimates found in published research, not outputs from a proprietary model. When researchers disagree, that disagreement is shown.

Coverage is not exhaustive. The pipeline queries English-language sources only and may miss relevant work published in other languages, behind paywalls not indexed by our sources, or in disciplines outside economics and computer science.

Keyword matching has limits. Automated classification relies on keyword matching in titles and abstracts. Some relevant papers may not use expected terminology, and some irrelevant papers may use similar language in a different context.

Citation data lags publication. Newly published papers have few or no citations. The scoring system accounts for this but recent work may be underranked relative to its eventual impact.

Predictions are point-in-time snapshots. The research landscape is evolving rapidly. Estimates that appear stable today may shift significantly as new large-scale studies are completed.

Update schedule: The research digest runs weekly (Mondays) and scans all 11 academic and institutional sources for new publications from the previous 14 days. The news ticker refreshes hourly with a 7-day lookback. SEC filing searches and job posting data update on each page load. Prediction data points are updated when new evidence materially changes an estimate. Source counts and confidence intervals are recalculated on each build.