Taha Merghani

The Ratchet

I published an analysis of what it means to work retail with a research background. I called myself a "human API in the aisle." The argument was specific: stores are cyber-physical systems that drift, and workers like me are the error-correction layer that resolves the gap between the digital model and the physical reality. The inventory system says there are twelve units. There are nine, and three are damaged. I handle that residual. The robot cannot.

I called it autophenomenology. You analyze a system by becoming one of its components and observing what the system forces the component to learn.

Today I am applying the same lens to the other side of the labor market. The side I was trained to occupy.

I have a master's degree from Georgia Tech. I published at NeurIPS and NAACL workshops. I did internships at Apple Siri and MIT CSAIL. I studied under Jacob Eisenstein. And I stock shelves at Walmart while Harvard MBAs face their worst job market in a decade.

These facts are not unrelated.

The headline numbers are stark. Nearly a quarter of job-seeking Harvard MBA graduates from the class of 2024 were unemployed three months after graduation, the highest share in a decade. Stanford reported 22%. Wharton reported 20%. Across the top 15 American business schools, the share of graduates accepting offers within three months dropped from a five-year average of 92% to 84% in a single year.

The broader pattern is worse. The Bureau of Labor Statistics shows 248,000 white-collar jobs cut since May 2024. Revelio Labs analysis finds white-collar job postings declined 35.8% from Q1 2023 to Q1 2025. The Federal Reserve Bank of New York reports that unemployment among recent college graduates now exceeds the national average. This is a historic reversal. The underemployment rate for new graduates sits at 41.8%, the highest since 2020.

Meanwhile, the World Travel & Tourism Council projects a hospitality workforce shortfall of 8.6 million workers by 2035. Skilled trades face structural shortages due to aging workforces and anemic training pipelines. Amazon's "Just Walk Out" technology, once slated for 3,000 stores, was pulled from most locations after requiring 700 human interventions per 1,000 checkouts.

Something is inverting.

The instinct is to blame AI. ChatGPT landed in November 2022. The layoffs accelerated in 2023. The correlation is visible. But correlation is not causation, and the actual mechanism is more specific.

The trigger was interest rates.

From 2009 to 2022, capital was effectively free. Companies hired aggressively because the cost of being wrong was low. Tech firms in particular staffed up on the assumption that growth would continue indefinitely. When the Federal Reserve raised rates to fight inflation, the cost of capital spiked. Suddenly, headcount became expensive. The layoffs followed.

AI is not the trigger. AI is the ratchet.

The distinction matters. A trigger causes the initial displacement. A ratchet prevents the return. Companies laid people off because money got expensive. They are not rehiring because they discovered that a team of twelve can do what a team of twenty did, if you give them the right tools. The jobs are not coming back, not because AI destroyed them in some dramatic sense, but because AI made their absence survivable.

This is why the Yale Budget Lab can report that 33 months after ChatGPT's release, "the broader labor market has not experienced a discernible disruption." They are measuring the wrong thing. The disruption is not in aggregate employment. It is in the composition of work and the expectations of what will be rebuilt. The jobs that disappeared in the rate shock are not returning in the rate normalization. That is the ratchet.

There is a paradox named after Hans Moravec. It observes that high-level reasoning is computationally cheap while low-level sensorimotor skills are computationally expensive. Evolution spent billions of years optimizing our hands and eyes; it spent an instant, by comparison, on our ability to do algebra. AI inverts that ordering. A computer can beat a grandmaster at chess. The same computer cannot fold a towel.

Moravec's Paradox explains why Amazon pulled Just Walk Out. It explains why warehouse automation sits at roughly 90% of human capability and cannot close the remaining gap. It explains why only 15% of warehouses are currently automated despite decades of investment. Robots lack the dexterity for varied product types, irregular shapes, fragile items. The edge cases are not edge cases. They are the job.

I experience this daily. The store is not a controlled environment. Shelves are stochastic. Items migrate. Customers introduce noise. The planogram is an intent, not a guarantee. The app tells me an item is in aisle 12. It is in aisle 14, behind something else, partially crushed. I find it anyway because I have learned to predict where things drift. That prediction is not in the system. It is in me.

In my previous piece, I concluded that the fastest way to automate retail is not to build a smarter robot. It is to build a simpler room. Amazon's fulfillment centers work because they redesigned the environment. Standardized bins. Fixed locations. Controlled lighting. They removed the variance that makes physical work hard.

Here is the chilling implication: offices are already the simpler room.

Documents do not migrate. Spreadsheets do not hide behind other spreadsheets. Email threads do not get crushed in transit. The variance that protects physical labor from automation does not exist in knowledge work. The environment is already legible to machines. That is why the ratchet works.

The discourse focuses on extremes. Harvard MBAs make headlines. Hospitality shortages make policy briefs. But the actual volume of displacement is in the middle.

Paralegals who summarized documents. Junior analysts who built first-pass models. Administrative assistants who scheduled and followed up. Mid-level managers whose job was to synthesize information across teams. These roles are not glamorous enough to generate think pieces. They are also where AI substitution is most direct.

Goldman Sachs estimates that AI can automate 60 to 70 percent of office tasks. Not jobs. Tasks. The distinction matters but not in the reassuring way people assume. A job composed of many tasks does not disappear when some tasks are automated. It shrinks. Ten people become six. Six become four. The remaining workers are more productive and less numerous. This is not unemployment in the traditional sense. It is a slow compression that never reverses.
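Here is a toy back-of-the-envelope version of that compression, under assumptions of my own for illustration (equal-sized tasks, automated tasks dropping out of the human workload entirely, constant total work), not the methodology of any study cited here:

```python
# Toy arithmetic for headcount compression under partial task automation.
# Assumptions (mine, for illustration only, not any cited study's method):
#   - a team's work is a bundle of tasks that each take equal human time
#   - an automated task drops out of the human workload entirely
#   - total work volume and per-worker hours stay constant

def people_still_needed(team_size: int, share_automated: float) -> float:
    """Headcount required once a given share of the task load is automated."""
    return team_size * (1.0 - share_automated)

if __name__ == "__main__":
    team = 10
    for share in (0.4, 0.6):
        print(f"{share:.0%} of tasks automated -> "
              f"about {people_still_needed(team, share):.0f} people needed")
    # 40% of tasks automated -> about 6 people needed
    # 60% of tasks automated -> about 4 people needed
```

The arithmetic is trivial by design. The point is that each increment of automation lowers the headcount, and nothing in the mechanism runs it back up.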

The NBER working paper by Bloom, Prettner, and colleagues models this directly. Their simulation shows the skill premium dropping from 2.0 to 1.62 as AI adoption reaches levels comparable to industrial robots. The mechanism is simple: "AI tends to reduce high-skill wages and to raise low-skill wages" because AI is more substitutable for high-skill workers than low-skill workers are for each other.

The wage premium for a college degree stopped growing after decades of increase. The unemployment gap between college and high school graduates is at its lowest level since the late 1970s. The inversion is already happening. It is just not evenly distributed yet.

Policy has not caught up. This is not a complaint. It is a diagnosis.

We subsidize four-year cognitive degrees through federal loan programs while underfunding trade schools and apprenticeships. We tax the labor I provide on the shelf through payroll taxes, but we offer depreciation write-offs for the software that replaces the analyst. The tax code itself is a ratchet, making it cheaper to deploy capital than to re-employ humans. Our unemployment insurance is designed for cyclical layoffs, not structural obsolescence. It assumes people will be rehired into similar roles. That assumption is breaking.

Ford CEO Jim Farley has been unusually direct. At the Aspen Ideas Festival in 2025, he declared that AI will replace half of all white-collar workers in the United States. He organized a summit in Detroit bringing together CEOs from Penske, U.S. Steel, AT&T, FedEx, Siemens USA, and U-Haul to discuss what he calls the "essential economy." The premise is that physical work is undervalued and underprepared for. He is not wrong.

David Autor, whose task framework is foundational to labor economics, has argued that AI could potentially reverse four decades of wage polarization by democratizing expertise. The optimistic version of this story is that AI makes middle-skill workers more capable, not less necessary. The pessimistic version is that the transition period will be brutal, and we have no institutions designed to manage it.

The dockworkers of the International Longshoremen's Association secured a 62% wage increase over six years and automation limits in their recent contract. Culinary workers in Las Vegas won severance of $2,000 per year of service for workers displaced by AI. These are early signals of what organized labor looks like when it takes automation seriously. Most workers have no such leverage.

I am not predicting catastrophe. The Yale data is real. Thirty-three months of AI deployment have not collapsed the labor market. Displacement tends to happen over decades, not years. Erik Brynjolfsson's research shows AI complementing workers as much as replacing them, with the largest productivity gains going to novice workers who get "leveled up" by AI assistance.

But I am also not predicting continuity.

The jobs I was trained for are contracting. The skills I was taught to signal are becoming less distinctive. The credential I earned is worth less than it was when I earned it, not because I failed, but because the market that valued it is restructuring.

I work at Walmart. I am the error-correction layer. I handle the residuals the system cannot represent.

The question is what happens when the residuals in knowledge work get small enough that the system stops needing a correction layer at all.

I do not have a clean answer. What I have is a vantage point. I am standing in both worlds, the one that is supposedly safe from automation and the one that is supposedly threatened by it. From where I stand, the threat is misallocated. The robot is not coming for my job on the floor. The language model is coming for the job I was trained to do.

The room was already simple. It never needed a human API. We just did not notice.