Authored by Mike Shedlock via MishTalk.com,
The consensus, as many readers will quickly surmise, is no and no. So how much of the AI hype is actually grounded in reality?
AI Can’t Teach AI New Tricks
Wall Street Journal writer Andy Kessler makes a pointed observation today: AI Can’t Teach AI New Tricks
OpenAI has just secured $6.6 billion, the largest venture-capital round ever, at a valuation of $157 billion. However, the company is projected to lose $5 billion this year and anticipates cumulative losses of $44 billion through 2029.
We are inundated with sensational press releases. Anthropic CEO Dario Amodei predicts that “powerful AI” will surpass human intelligence in various domains by 2026. OpenAI asserts that its latest models are “designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems.” Thinking? Reasoning? Will AI evolve human-like traits? Even consciousness?
I must be the bearer of bad news, but here’s a reality check on the AI hype cycle:
Moravec’s paradox: Babies are more intelligent than AI. In 1988, robotics researcher Hans Moravec observed that “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Most innate abilities are ingrained in our DNA, and many of them are subconscious.
AI has a considerable distance to cover. Just last week, Apple AI researchers seemed to concur, noting that “current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.”
Linguistic apocalypse paradox: As I’ve pointed out before, AI intelligence stems from human logic embedded in words and sentences. Large language models need human-written words as input to advance further. Some researchers argue that we may run out of written words to train models on sometime between 2026 and 2032.
Remember, AI models cannot be trained on AI-generated text. This results in what is known as model collapse. Output becomes nonsensical.
Current models are trained on 30 trillion human words. To maintain a Moore’s Law-like trajectory, will this scale 1000 times over a decade to 30 quadrillion tokens? Are there even that many words in existence? Writers, you better get started.
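For perspective on what a Moore’s-Law-like trajectory implies, here is a back-of-the-envelope sketch. The 30-trillion-token starting point comes from the paragraph above; the assumption that “Moore’s-Law-like” means roughly doubling every year (since 2^10 ≈ 1000 over a decade) is mine, for illustration only:

```python
# Back-of-the-envelope: what 1000x token growth over a decade requires.
# Assumption (illustrative): "Moore's-Law-like" means roughly doubling
# each year, since 2**10 = 1024, i.e. about 1000x over ten years.

start_tokens = 30e12   # ~30 trillion tokens in current training sets
years = 10
growth = 2.0           # assumed annual doubling

end_tokens = start_tokens * growth ** years
print(f"{end_tokens:.2e} tokens after {years} years")  # ~3.07e16, ~30 quadrillion
```

In other words, the training corpus would have to double every single year for a decade, while model collapse rules out padding it with machine-generated text.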
Scaling paradox: Initial indications suggest that large language models may follow power-law curves. Google researcher Dagang Wei posits that “increasing model size, dataset size, or computation can lead to significant performance boosts, but with diminishing returns as you scale up.” Indeed, large language models could encounter obstacles.
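To see what “diminishing returns” on a power-law curve look like, here is a minimal sketch. The functional form (loss falling as a power of scale) matches the kind of curve Wei describes, but the constants below are made up purely to show the shape; they are not fitted values from any published scaling law:

```python
# Illustrative power law: loss ~ a * N**(-alpha).
# a and alpha are arbitrary constants chosen only to show the shape.

a, alpha = 10.0, 0.1

for n in [1e9, 1e10, 1e11, 1e12]:   # scale, e.g. parameter or token count
    loss = a * n ** (-alpha)
    print(f"N = {n:.0e}  ->  loss = {loss:.3f}")

# Output: 1.259, 1.000, 0.794, 0.631.
# Each 10x in scale cuts loss by the same factor (10**-0.1 ~ 0.79),
# so every successive 10x buys a smaller absolute improvement.
```

That is the trap: each step down the curve costs roughly ten times as much compute as the last one.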
Expenditure paradox: Data centers currently exhibit a nearly insatiable demand for graphics processing units to fuel AI training. Nvidia garnered $30 billion in revenue last quarter, with expectations of $177 billion in revenue in 2025 and $207 billion in 2026. However, venture capitalist David Cahn of Sequoia Capital questions the sustainability of this trend. He believes that the AI industry needs to generate $600 billion in revenue to recoup all the AI infrastructure investments made thus far. Industry leader OpenAI anticipates $3.7 billion in revenue this year, $12 billion next year, and projects $100 billion, but not until 2029. It may take a decade of expansion to justify the current outlay on GPU chips.
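The arithmetic behind that paragraph is worth making explicit. A rough sketch, using only the figures quoted above plus my own simplifying assumption of a smooth exponential path between them:

```python
# Implied growth if OpenAI revenue goes from $3.7B (this year) to $100B (2029).
rev_now, rev_2029, years = 3.7, 100.0, 5
cagr = (rev_2029 / rev_now) ** (1 / years) - 1
print(f"Implied growth: {cagr:.0%} per year")   # ~93% per year, five years running

# Cumulative revenue along that (assumed smooth) path, vs. the $600B of
# AI infrastructure spend David Cahn says must be recouped.
cumulative = sum(rev_now * (1 + cagr) ** t for t in range(years + 1))
print(f"Cumulative through 2029: ~${cumulative:.0f}B vs. $600B")   # ~$203B
```

Even if OpenAI hits every target, cumulative revenue through 2029 covers roughly a third of Cahn’s figure, and that is revenue, not profit.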
Goldman Sachs’s head of research penned a report, “GenAI: Too much spend, too little benefit?” He was being diplomatic with the question mark. Nobel laureate and Massachusetts Institute of Technology economist Daron Acemoglu believes AI can only perform 5% of jobs and tells Bloomberg, “A lot of money is going to get wasted.” Add to that the cost of power: a ChatGPT query consumes nearly 10 times the electricity of a Google search. Microsoft is paying to recommission one of the nuclear reactors at Three Mile Island to handle escalating power demands. Yikes.
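On the power point, a quick sketch of the arithmetic. The roughly 0.3 watt-hours per Google search is a commonly cited outside estimate, not a figure from this article, and the query volume below is purely hypothetical:

```python
# Per-query energy, taking the ~10x-a-Google-search claim at face value.
google_wh = 0.3               # commonly cited estimate, Wh per Google search
chatgpt_wh = 10 * google_wh   # ~3 Wh per ChatGPT query

queries_per_day = 1e9         # hypothetical volume, for illustration only
annual_mwh = chatgpt_wh * queries_per_day * 365 / 1e6
print(f"~{annual_mwh:,.0f} MWh/year")   # ~1.1 million MWh, i.e. ~1.1 TWh
```

At that hypothetical volume the draw is on the order of a terawatt-hour a year, which makes the scramble for dedicated reactors easier to understand.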
I am certain that AI will revolutionize our lives for the better, but it won’t be a smooth upward trajectory.
DotCom Bust Comparison
This might just be one of the most meticulously researched and linked articles in recent memory.
It evokes memories of the hype surrounding click counts and ad revenue during the DotCom crash.
Ad revenue from clicks eventually materialized, not from Gemstar in 2000 but from Google in 2004. Does anyone still recall Gemstar (GMST)?
During the late 1990s technology boom, investors were enamored with Gemstar-TV Guide International Inc. The Los Angeles-based company appeared to possess the key to the futuristic realm of interactive television, akin to how Internet portals like Yahoo served as the gateway to the Web.
Gemstar’s patented on-screen program-guide technology was anticipated to be indispensable for viewers navigating an expanding array of TV channels and cable services.
The stock took a nosedive after the dot-com crash, plummeting from a peak of $107.43 in March 2000 to a low of $2.36 in September 2002.
We also witnessed the downfall of Excite@Home, Lycos, Global Crossing, Enron, and numerous other entities that have faded from memory.
BottomLineLaw delves into Silicon Valley After the Dot-Com Crash.
The Excite@Home headquarters remained vacant for five years until Stanford eventually acquired it in 2007 and repurposed it as an extension of its outpatient medical facility.
Irrational Exuberance
In 1996, then-Federal Reserve Chairman Alan Greenspan sounded the alarm, rightly cautioning against “Irrational Exuberance” in a televised address.
“Clearly, sustained low inflation implies less uncertainty about the future, and lower risk premiums imply higher prices of stocks and other earning assets. We can see that in the inverse relationship exhibited by price/earnings ratios and the rate of inflation in the past. But how do we know when irrational exuberance has unduly escalated asset values, which then become subject to unexpected and prolonged contractions as they have in Japan over the past decade?”
Greenspan Becomes a True Believer
However, by 2000, Greenspan had become a fervent believer.
Fed minutes reveal that, precisely at the peak of the DotCom bubble and amid a substantial stock market downturn, Greenspan’s primary concern was that the economy was overheating thanks to the productivity miracle.
In a panic about a potential Y2K catastrophe, the Fed injected copious amounts of liquidity into the economy, exacerbating the bubble.
The Fed’s Role in the DotCom Bubble
Fueled by misguided fears of a Y2K disaster, the Fed flooded the system with unnecessary liquidity, having previously done the same to rescue Long Term Capital Management in 1998.
After issuing warnings about irrational exuberance in 1996, Greenspan embraced the “productivity miracle” and “dotcom revolution” in 1999. By mid-2000, just as the DotCom bubble began to burst, Greenspan, now buying into his own rhetoric, started fretting about inflation risks.
The May 16, 2000 FOMC minutes provide evidence of this.
The members perceived significant risks of mounting pressures on labor and other resources, as well as of heightened inflation. They concurred that the tightening measures would help align the growth of aggregate demand more effectively with the sustainable expansion of aggregate supply. They also acknowledged that even with these additional measures, the risks still leaned predominantly towards rising inflationary pressures, suggesting that further tightening might be necessary.
Looking ahead, a surge in spending on business equipment and software was anticipated. … Even after the tightening action today, the members believed that the risks would continue to lean towards inflationary pressures.
How could Greenspan have been more off the mark? In the subsequent 18 months, the CPI plunged from 3.1% to 1.1%, the US entered a recession, and capital expenditure nosedived.
Alan Greenspan Right on Time
On November 2, 2019, I wrote “Good Reason to Expect Recession: Greenspan Doesn’t.”
We all know what ensued three months later.
On August 19, I commented “Zero Has No Meaning” Says Greenspan: I Disagree, So Does Gold
Former Federal Reserve Chairman Alan Greenspan suggests that he wouldn’t be surprised if US bond yields turn negative. And if they do, it’s not a significant issue.
No Greenspan, Conditions are NOT Like 1998
Flashback to September 11, 2007: No Greenspan, Conditions are NOT Like 1998
WSJ: Bubbles cannot be quelled through minor adjustments in interest rates, Mr. Greenspan suggested. The Fed doubled interest rates in 1994-95 and “halted the nascent stock-market boom,” but it reignited once rates were lowered. “We attempted to do the same in 1997,” when the Fed raised rates by a quarter percentage point, and “the same phenomenon occurred.” “The human race has never found a way to address bubbles,” he remarked.
Mish: The truth is that the Fed (especially Greenspan) has embraced every bubble in history, stoking each one further. Let’s examine the last two bubbles…
This post comprises 26 links, likely setting a new record.
Returning to the beginning…
“OpenAI has just secured $6.6 billion, the largest venture-capital round ever, at a valuation of $157 billion. However, the company is projected to lose $5 billion this year and anticipates cumulative losses of $44 billion through 2029.”
What portion of the hype is genuine, and how much is merely speculation?
You don’t get it do you… pic.twitter.com/qzNKErBfIu
— ₕₐₘₚₜₒₙ — e/acc (@hamptonism) October 21, 2024