Tag Archives: Large Language Models

The AI Layoff Trap –Brett Hemenway Falk, Gerry Tsoukalas 2026-03-02

After years of labor unions advocating for an 8-hour day and a 5-day week, Henry Ford finally saw his own self-interest, and on September 25, 1926, Ford Motor Company made it company policy.

Why? Workers with free time and money to spend bought cars: long-term profit!

A century later, many companies are doing the opposite: laying off workers and replacing them with so-called AI: short-term profiteering. This trend keeps growing, because if competitors are doing it, every company has an incentive to do it.

But companies are sabotaging themselves. Fired workers cannot easily find new jobs, so they can’t afford to buy. An economy with no purchasing is in trouble.

The AI Layoff Trap 2026-03-02 –Brett Hemenway Falk, Gerry Tsoukalas, No jobs means no buying, One policy works to stop it

There are other issues. Firing experienced people means companies lose the ability to do new things or to deal with unexpected challenges. And fewer jobs mean people trying to join the job market find nothing, so there's little new talent incoming and few people left to train them. But the chase for short-term profits overrides all that.

Plus there is the proliferation of hyper-scale datacenters catering to this so-called Artificial Intelligence (AI), which use much cooling water, either directly or through new power plants. See:

https://wwals.net/issues/datacenters

New research models this corporate behavior and finds that most proposed solutions do not stop it. Continue reading

So-called AI hallucinates no matter how good its training data –OpenAI 2025-09-18

Update 2026-02-17: Sen. Carden Summers tried to amend GA SB 34, which would require datacenters to pay their own electric bills, so as to weaken it, @ GA Sen. Comm. on Regulated Industries and Utilities 2026-02-12.

This is according to research by the creator of ChatGPT, the bot that started the “AI” boom.

Is this what we want in datacenters sucking up our water?

If not, see a previous post for some bills in the Georgia legislature.

https://wwals.net/?p=69394

So-called AI hallucinates, no matter how good its training data –OpenAI 2025-09-18

Gyana Swain, Computerworld, September 18, 2025, OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws,

In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech’s Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.

“Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty,” the researchers wrote in the paper. “Such ‘hallucinations’ persist even in state-of-the-art systems and undermine trust.”

The admission carried particular weight given OpenAI’s position as the creator of ChatGPT, which sparked the current AI boom and convinced millions of users and enterprises to adopt generative AI technology.

Continue reading