0:00
The $600 Million Mobile Miscalculation
In 2006, the executive team at Intel made a decision that, well, in hindsight, might actually be the single most expensive miscalculation in the history of modern business.
0:11
Speaker 2
Oh, without a doubt, it's one for the textbooks.
0:14
Speaker 1
Right.
So they owned this highly specialized, mobile-focused chip division called XScale.
And this wasn't some tiny side project.
It had what, 1400 engineers?
0:26
Speaker 2
Yeah, about 1400 and it was generating real revenue too.
0:29
Speaker 1
Exactly.
But Intel's core business, their bread and butter, was massive, power-hungry PC processors, and the executives, you know, they felt this mobile division was just a distraction.
0:40
Speaker 2
They looked at it and thought, well, the profit margins are too thin, it's not worth our time.
0:44
Speaker 1
So they sold it.
They literally packaged up XScale and sold it to a company called Marvell for $600 million.
0:51
Speaker 2
Which, I mean, sounds like a decent chunk of change, right?
Until you look at the calendar.
Because just a few months after that paperwork was signed, Steve Jobs walked onto a stage in San Francisco and unveiled the original iPhone.
1:03
Speaker 1
Wow, so Intel had literally just liquidated the exact technology and the exact engineering team that was perfectly suited to power the mobile revolution?
1:13
Speaker 2
Yep, they essentially sold the key to the next two decades of computing for pocket change right before the lock was even revealed.
1:20
Speaker 1
It's just a staggering historical irony, and I think it highlights this brutal truth about the semiconductor industry: having the deepest pockets, the most brilliant physicists, or the most advanced factories on earth.
None of that guarantees your survival.
1:33
Speaker 2
No, it really doesn't, and that's exactly what we are digging into today.
1:37
Speaker 1
Right.
Because the device in your hand right now as you're listening to this, or the laptop sitting on your desk, the silicon brain doing millions of calculations a second just to push this audio to your ears, is the product of a ruthless 50-year corporate war.
1:51
Speaker 2
A very messy, very expensive war.
1:54
Speaker 1
And in this deep dive, we're going to trace the history of that microscopic battlefield.
We're looking at Intel's absolutely terrifying dominance, AMD's stubborn refusal to die, ARM's stealthy conquest of the globe, Apple's unprecedented hardware agility, and even Nvidia's massive pivot.
2:14
Speaker 2
And you know, the connective tissue through all this isn't just the silicon itself, it's the corporate psychology.
So time and time again, across all these companies, we see that the exact strategies that turn a company into an untouchable monopoly, the things that make them great, are the precise things that blind them to the next structural shift in the industry.
2:35
Speaker 1
So it's like the gravity of their own past success.
2:38
Speaker 2
Precisely.
They just can't escape it.
2:39
The Intel Trinity and IBM's X86 Mandate
Well, let's start at the epicenter of that gravity.
We have to go way back to 1968, to the founding of Intel, because the culture that made that XScale blunder in 2006, it was forged by three very specific personalities, right?
2:51
Speaker 2
Gordon Moore, Robert Noyce and Andy Grove.
People often call them the Intel Trinity.
2:55
Speaker 1
The Intel Trinity.
I like that.
2:57
Speaker 2
You really cannot separate Intel's corporate DNA from those three men.
They left Fairchild Semiconductor to build something entirely new, and the dynamic between them was fascinating.
3:08
Speaker 1
Break that down for us.
Who did what?
3:11
Speaker 2
Well, Robert Noyce was the visionary.
He was actually the co-inventor of the integrated circuit itself.
Very charismatic.
He was the public face of the operation.
Then you had Gordon Moore.
He was the quiet, deeply analytical intellect.
3:26
He was a chemist, actually, who famously observed that the number of transistors on a microchip doubles roughly every two years.
3:33
Speaker 1
Which we all know today as Moore's law.
Exactly.
And Moore's law, I mean, it isn't actually a physical law like gravity, right?
It's an economic and technological observation.
But it basically became a self-fulfilling prophecy for them.
3:44
Speaker 2
Right.
It became the metronome for the entire semiconductor industry.
Everyone marched to that beat.
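To put rough numbers on that metronome, here's a quick back-of-the-envelope sketch in Python. The 1971 baseline, the Intel 4004's roughly 2,300 transistors, is a real figure; the perfectly clean two-year doubling is the idealized assumption.

```python
# Moore's law as an idealized doubling model: count = baseline * 2^(years / 2).
# Baseline: Intel's 4004 microprocessor (1971) had roughly 2,300 transistors.
BASELINE_YEAR, BASELINE_COUNT = 1971, 2_300

def projected_transistors(year: int) -> float:
    """Idealized Moore's-law projection from the 4004 baseline."""
    return BASELINE_COUNT * 2 ** ((year - BASELINE_YEAR) / 2)

for year in (1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Real chips never tracked the curve exactly, but as a planning target it told everyone in the supply chain what density to build for, years in advance.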
3:49
Speaker 1
But a metronome or a road map is kind of useless without someone forcing everyone to keep the pace.
3:55
Speaker 2
Which brings us perfectly to Andy Grove.
Grove was the operational enforcer.
He was a refugee from communist Hungary, fiercely intense, and he authored this famous management philosophy: only the paranoid survive.
4:08
Speaker 1
That really sets a tone for the workplace.
4:10
Speaker 2
Oh, absolutely.
He instituted a culture of constructive confrontation.
Intel wasn't a warm, fuzzy place to work, you know.
It was a brutal execution machine.
4:20
Speaker 1
And that machine built an absolute empire on an instruction set architecture known as X86.
4:26
Speaker 2
X86, yes. The foundation of the PC.
4:30
Speaker 1
Actually, I want to pause right here, because we throw the term architecture around a lot in tech. To ground this for everyone:
an instruction set architecture is basically the fundamental vocabulary that a processor understands, right?
4:42
Speaker 2
That's a great way to put it.
It's the raw list of commands, like "add these two numbers" or "move this piece of data from register A to register B."
4:49
Speaker 1
And Intel defines this vocabulary with X86, and so all the software developers out there have to write their operating systems and programs using that exact vocabulary.
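To make the "vocabulary" idea concrete, here's a minimal toy interpreter in Python. The two-instruction ISA below (MOV and ADD) is invented purely for illustration, it isn't real X86, but it shows why software and hardware have to agree on the exact same command list.

```python
# A toy instruction set: the processor's "vocabulary" is just MOV and ADD.
# Any program written in this vocabulary runs on any chip that implements it.
registers = {"A": 0, "B": 0}

def execute(program):
    for op, dst, src in program:
        value = registers.get(src, src)   # src is a register name or a literal
        if op == "MOV":                   # move a value into dst
            registers[dst] = value
        elif op == "ADD":                 # add a value into dst
            registers[dst] += value
        else:                             # a word outside the vocabulary
            raise ValueError(f"unknown instruction: {op}")

execute([("MOV", "A", 2), ("MOV", "B", 3), ("ADD", "A", "B")])
print(registers)  # {'A': 5, 'B': 3}
```

A chip with a different vocabulary would choke on that same program, which is exactly the lock-in the rest of this story turns on.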
4:59
Speaker 2
Exactly.
And that specific vocabulary became the bedrock of the entire personal computer revolution.
But the critical inflection point, the moment that permanently altered the trajectory of this whole industry, happened in 1981.
5:13
Speaker 1
And it involved a company that was, at the time, vastly more powerful than Intel.
5:18
Speaker 2
Yes, IBM, the giant.
So what happened in '81?
5:20
Speaker 1
IBM was gearing up to release the IBM personal computer, internally called Project Chess.
It was a massive, highly accelerated undertaking, and they decided they wanted to use the Intel 8088 microprocessor as the brain of the machine.
5:35
Speaker 2
I mean, this should have been the greatest day in Intel's history.
You land the IBM contract in 1981?
You're essentially set for life.
5:42
Speaker 1
You would think so, but IBM had a fiercely strict corporate policy regarding its supply chain.
They looked at Intel, which was, you know, still a relatively young company back then, and said, look, we love the chip, but we are terrified of relying on a single supplier.
5:55
Speaker 2
Terrified of what, exactly? Intel going bankrupt, or just simple logistics?
IBM was a behemoth.
Their nightmare scenario was that Intel's single fabrication plant catches fire, or Intel's manufacturing yields drop, and suddenly IBM's entire multimillion-dollar PC assembly line grinds to a halt just because they can't get the chips.
6:17
Speaker 1
So a single point of failure.
6:18
Speaker 2
Exactly.
So IBM issued a non negotiable ultimatum.
Intel had to secure a second source.
6:24
Speaker 1
It's completely insane when you think about it in modern business terms. IBM essentially told Intel: we will give you this massive contract, but only if you hand over the complete blueprints of your proprietary technology to another company so they can manufacture exact copies of your chip.
6:40
Yep.
6:40
Speaker 2
It's like discovering the most lucrative gold mine in the world, and the bank tells you they'll only finance mining equipment if you give the maps and the keys to a rival digging crew.
6:49
Speaker 1
That is wild.
But Intel, desperate for the validation, swallowed their pride and agreed.
6:54
Speaker 2
They had to.
So they signed a technology exchange agreement with a scrappy secondary semiconductor company called Advanced Micro Devices.
7:01
Speaker 1
AMD. Founded by Jerry Sanders, who, conveniently enough, used to work right alongside the Intel founders back at Fairchild Semiconductor.
7:10
Speaker 2
It's a very small world.
Sanders was Fairchild's worldwide marketing director before he was kind of pushed out.
He was flashy, very ambitious and he saw this IBM requirement as the ultimate Trojan horse.
7:23
Speaker 1
So how did that arrangement actually work early on?
7:26
Speaker 2
Well, for the first few years, AMD was strictly a second source supplier.
They were taking Intel's masks, the actual physical microscopic stencils used to print the chips, and manufacturing perfect licensed clones.
7:39
Speaker 1
So AMD is basically getting a free ride on Intel's R&D.
They're building up their own fabrication plants, building out their own sales networks, all on the back of Intel's designs.
7:48
Speaker 2
Exactly.
And as the PC market absolutely explodes throughout the 1980s, x86 becomes the undisputed de facto standard for all computing.
7:57
Speaker 1
But eventually, I imagine Intel looks around and realizes they've created a monster.
They have this parasite that is eating their own market share.
8:04
Weaponizing Market Share: The Intel Inside Campaign
Right, which triggers a fundamental shift in Intel's strategy under Andy Grove.
They realized that just designing the fastest chips wasn't going to be enough to kill off AMD.
They had to weaponize their market position.
8:17
Speaker 1
Weaponize it how?
8:18
Speaker 2
They had to turn a piece of silicon, which remember, most consumers didn't understand and couldn't even see inside the box, into a premium lifestyle brand.
8:27
Speaker 1
Enter the Intel Inside campaign.
8:30
Speaker 2
You know it.
8:31
Speaker 1
This is one of the most brilliant and honestly ruthless marketing strategies in corporate history, because it wasn't just those little shiny stickers on the palm rest of your laptop or that famous four-note audio chime on the TV commercials.
8:44
Speaker 2
No, those were just the visible parts.
The real genius was that it was a highly engineered financial lock-in mechanism.
8:50
Speaker 1
Walk us through the mechanics of that.
How did the lock-in actually work?
8:53
Speaker 2
It was a master class in market control, tied to something called Market Development Funds, or MDF.
So Intel approached the massive PC manufacturers, companies like Dell, HP, Compaq, Gateway.
9:04
Speaker 1
The big guys.
9:05
Speaker 2
Right.
And Intel offered to subsidize their television and print advertising.
We are talking about billions of dollars in free marketing money.
9:13
Speaker 1
I mean, if you're Dell and Intel offers to pay for half of your national TV spots just for slapping a sticker on the box and playing a little chime at the end, you take that deal every single time.
It dramatically lowers your customer acquisition cost.
9:26
Speaker 2
Of course you take it, but the contract had a deeply predatory hook. To receive those top-tier rebates, which eventually ballooned to over $7 billion by the late 90s, the PC manufacturer had to agree to purchase roughly 95% of all their microprocessors exclusively from Intel.
9:45
Boom.
9:45
Speaker 1
The door slams shut.
9:46
Speaker 2
Completely shut.
9:47
Speaker 1
Because if Dell suddenly decides, hey, AMD just released a really great cheap processor, let's put it in our budget line of desktops, they risk violating that 95% threshold.
9:57
Speaker 2
And if they violate it, Intel pulls the plug on millions of dollars in marketing subsidies.
Dell's profit margins would instantly evaporate.
10:04
Speaker 1
So Intel basically financially starved AMD of distribution.
10:08
Speaker 2
Exactly. They locked up over 2,000 PC firms with these agreements.
AMD could build a perfectly good processor, but they couldn't get it onto the shelves of Best Buy or CompUSA because the manufacturers were absolutely terrified of Intel's financial retaliation.
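The coercive math here is easy to sketch. Every dollar figure below is invented just to show the shape of the incentive; only the 95% exclusivity threshold comes from the story above.

```python
# Hypothetical OEM decision: does adding a cheaper AMD line ever pay off
# once it forfeits Intel's marketing rebate? All figures are invented.
intel_rebate      = 100_000_000  # annual MDF subsidy for staying >= 95% Intel
amd_saving_per_pc = 30           # hypothetical cost saving per AMD-based PC
amd_units         = 2_000_000    # PCs the OEM might ship with AMD inside

amd_upside = amd_saving_per_pc * amd_units   # $60M saved on cheaper chips...
net_change = amd_upside - intel_rebate       # ...versus $100M in rebates lost

print(f"AMD upside:  ${amd_upside:,}")
print(f"Net change:  ${net_change:,}")       # negative: defecting loses money
```

As long as the rebate is larger than any plausible saving from second-sourcing, the rational OEM stays locked in, which is the whole point of the mechanism.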
10:25
Speaker 1
Which obviously led to massive, bitter legal warfare.
10:29
Speaker 2
Years of it.
Intel tried to revoke the original licensing agreement, claiming AMD only had rights to older chips.
AMD fought back, arguing the contract covered derivatives, and it dragged through the courts forever.
10:41
Speaker 1
But eventually, Jerry Sanders realized something crucial, didn't he?
You can't just litigate your way to market dominance.
10:47
Speaker 2
No, you can't.
And they also couldn't just keep reverse engineering Intel's chips because Intel was starting to move too fast.
AMD had to actually out-innovate them.
10:54
Speaker 1
Which is a monumental task.
I mean, designing an X86 processor from scratch without violating any of Intel's specific implementation patents is incredibly complex.
11:03
Speaker 2
It's remarkably difficult, but in 1996 they did it.
AMD released the K5.
It was their first fully in house designed X86 architecture.
11:12
Speaker 1
How did it perform?
11:14
Speaker 2
Well, it was mostly a proof of concept.
The K5 had some issues with clock speeds, but it showed the world that AMD wasn't just a Xerox machine for Intel chips anymore.
They had real engineering chops.
11:24
Speaker 1
And they doubled down after that.
11:26
Speaker 2
They did.
They brought in some brilliant architectural minds, including engineers from Digital Equipment Corporation, and they started working on a completely new architecture, codenamed the K7.
11:35
Athlon, Itanium, and AMD's 64-bit Server Victory
And that leads to, honestly, a psychological earthquake in Silicon Valley, March 6th, 2000.
11:40
Speaker 2
Yes, AMD releases the Athlon 1000 processor based on that K7 architecture.
11:46
Speaker 1
One gigahertz.
They broke the one gigahertz barrier before Intel.
11:49
Speaker 2
It really cannot be overstated how humiliating that was for Intel.
Intel defined itself as the undisputed king of performance.
They threw massive amounts of money and manpower at maximizing clock speed, and yet this scrappy underdog, operating on a fraction of Intel's R&D budget, beat them to the ultimate magical round number.
12:09
Speaker 1
How did they do it mechanically?
Because my understanding is it wasn't just that the Athlon ran at a faster clock speed, it was fundamentally a smarter design.
12:17
Speaker 2
You're exactly right.
It comes down to a metric called IPC: instructions per clock.
12:22
Speaker 1
Break that down for us.
12:23
Speaker 2
OK, think of a processor's clock speed, the gigahertz, like the RPM of a car engine.
The IPC is how much torque the engine delivers with every single rotation.
12:32
Speaker 1
OK, I follow.
12:33
Speaker 2
Intel was building engines that spun incredibly fast, but they didn't do a lot of actual work per spin.
AMD designed the Athlon to have a much higher IPC.
It could execute more floating point math, more logic operations in a single cycle.
It had better cache memory management.
12:50
It simply did more work more efficiently.
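That engine analogy is literally just multiplication: useful work per second is clock speed times IPC. The numbers below are illustrative, not the actual Athlon and Pentium III figures.

```python
# Performance = clock (cycles/second) x IPC (instructions/cycle).
# Illustrative values only: a slower-clocked chip winning on IPC.
def throughput(clock_ghz: float, ipc: float) -> float:
    return clock_ghz * 1e9 * ipc  # instructions per second

high_clock = throughput(clock_ghz=1.13, ipc=0.9)  # spins fast, less work per spin
high_ipc   = throughput(clock_ghz=1.00, ipc=1.2)  # spins slower, more "torque"

print(f"{high_clock / 1e9:.2f} vs {high_ipc / 1e9:.2f} billion instructions/sec")
# 1.02 vs 1.20: the 1 GHz design does more real work despite the lower clock.
```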
12:52
Speaker 1
So AMD finally proves they can go toe to toe with the giant.
They have the engineering talent.
But right as AMD is celebrating this massive victory, the entire computing landscape hits a structural roadblock.
13:04
Speaker 2
A massive one.
13:05
Speaker 1
And this leads us directly into a battle that fundamentally reshaped the server market.
We are moving from the 32 bit era into the 64 bit era.
13:13
Speaker 2
Right.
And to understand why this was such a crisis, we have to look at the physical limitations of the math.
The X86 architecture that had powered everything since the 80s was a 32-bit architecture, which means what, in simple terms?
13:24
Speaker 1
In simple terms, a 32 bit processor uses memory addresses that are exactly 32 ones and zeros long.
If you calculate the maximum number of unique addresses you can create with 32 bits, it comes out to roughly 4.2 billion.
13:37
Speaker 2
Which translates directly to 4 gigabytes of physical RAM.
13:40
Speaker 1
Exactly.
A 32 bit processor simply cannot see or use more than 4 gigabytes of memory.
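The 4-gigabyte wall falls straight out of the arithmetic:

```python
# Each memory address points at one byte; n address bits give 2^n addresses.
for bits in (32, 64):
    addresses = 2 ** bits
    print(f"{bits}-bit: {addresses:,} addresses = "
          f"{addresses / 2**30:,.0f} GiB addressable")
# 32-bit: 4,294,967,296 addresses = 4 GiB
# 64-bit: 18,446,744,073,709,551,616 addresses = 17,179,869,184 GiB (16 EiB)
```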
13:47
Speaker 2
And in 1995, four gigabytes probably sounded like infinite space.
13:50
Speaker 1
It did.
But by the early 2000s, enterprise databases, massive scientific simulations, and rapidly growing web servers, they were choking on that limit.
They desperately needed more memory.
14:02
Speaker 2
They needed 64 bit processors which can theoretically address exabytes of RAM.
14:06
Speaker 1
The entire industry agreed that a transition to 64 bit was mandatory.
The question was how to do it.
And this is where Intel, blinded by decades of monopolistic power, makes a breathtakingly arrogant strategic error.
14:18
Speaker 2
They decide to build the Itanium.
14:20
Speaker 1
Yes, they did.
Why was that an error?
14:21
Speaker 2
Intel looked at the aging X86 architecture and decided it was just too messy, too bloated with decades of legacy instructions to stretch to 64 bits cleanly.
So they partnered with Hewlett-Packard to design an entirely new architecture from a blank sheet of paper.
14:38
It was called IA-64, and the flagship processor was the Itanium.
14:43
Speaker 1
And Itanium utilized a completely different paradigm for processing, right?
Something called VLIW.
14:48
Speaker 2
Very long instruction word, yes.
14:50
Speaker 1
How does that differ from normal processing?
14:53
Speaker 2
The concept of VLIW is theoretically beautiful.
In a traditional processor, the physical hardware spends a massive amount of time and energy looking at the incoming stream of software instructions, trying to figure out which ones can be executed simultaneously to save time.
15:09
Speaker 1
It's like a traffic cop frantically waving cars through an intersection on the fly.
15:12
Speaker 2
Exactly.
VLIW flips that.
It says let's remove the traffic cop from the hardware entirely.
Instead, let's force the software compiler, the program that translates human code into machine code, to perfectly bundle the instructions into these very long words before it even reaches the processor.
15:28
Speaker 1
So the hardware just blindly executes these prepackaged bundles.
15:32
Speaker 2
Right.
In theory, it makes the silicon much simpler and incredibly fast.
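Here is a heavily simplified sketch of the VLIW compiler's job: statically pack independent instructions into fixed-width bundles so the hardware never reorders anything at runtime. The three-slot bundle width and the naive dependency rule are invented for illustration; a real IA-64 compiler faced this across millions of instructions, with branches and memory stalls it couldn't predict.

```python
# Toy VLIW scheduling: pack instructions into 3-wide bundles, never placing
# an instruction in the same bundle as one it depends on. Purely illustrative.
BUNDLE_WIDTH = 3

def bundle(instructions):
    bundles, current, written = [], [], set()
    for dest, reads in instructions:
        # New bundle if this one is full, or if we'd read a value written here.
        if len(current) == BUNDLE_WIDTH or any(r in written for r in reads):
            bundles.append(current)
            current, written = [], set()
        current.append(dest)
        written.add(dest)  # treat each instruction's name as the register it writes
    if current:
        bundles.append(current)
    return bundles

prog = [("r1", []), ("r2", []), ("r3", ["r1"]), ("r4", ["r2", "r3"])]
print(bundle(prog))  # [['r1', 'r2'], ['r3'], ['r4']]
```

Notice the second and third bundles run mostly empty. That's the Itanium story in miniature: whenever the compiler can't prove independence, slots go idle.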
15:36
Speaker 1
OK, that sounds highly logical.
What's the catch?
15:39
Speaker 2
The catch is that writing a software compiler capable of predicting the exact behavior of a complex program in advance and packaging those instructions perfectly turned out to be a computer science nightmare.
The compiler simply couldn't do it efficiently.
15:54
Speaker 1
So what happened?
15:55
Speaker 2
The Itanium processors ended up sitting idle waiting for data, performing terribly.
16:00
Speaker 1
But the performance issues weren't even the fatal flaw, were they?
The fatal flaw was compatibility, or rather, the total lack of it.
16:07
Speaker 2
That was the nail in the coffin.
Because IA-64 was a fundamentally different architecture, it could not natively run any existing X86 software.
16:16
Speaker 1
None of it?
Really, none?
16:18
Speaker 2
None.
If a massive bank wanted to upgrade their servers to Itanium to get more memory, they couldn't just install their old software.
They had to hire developers to completely rewrite and recompile their custom databases and operating systems for this new architecture.
16:32
Speaker 1
I am still mind blown by this decision.
I mean, Intel spent 30 years building an impenetrable fortress around the X86 software ecosystem.
It's the entire reason people bought Intel chips.
And then they willingly walked outside the fortress, locked the gate behind them, and asked all their customers to join them in an empty field.
16:51
Why?
16:51
Speaker 2
It is the ultimate manifestation of the Andy Grove era hubris.
Intel controlled 90% of the market.
They truly believed, we are the sun, and the software industry orbits around us.
17:03
Speaker 1
They just assumed everyone would follow them.
17:06
Speaker 2
They assumed that if Intel dictated a new standard, Microsoft, Linux developers and enterprise IT departments would simply fall in line and do the massive amount of work required to migrate.
17:16
Speaker 1
But the market hates friction.
Enterprise customers loathe rewriting legacy code.
It's expensive, it's risky, and things break.
And AMD saw this massive gaping blind spot.
17:26
Speaker 2
They absolutely did.
17:27
Speaker 1
While Intel is trying to force everyone onto this complex, incompatible VLIW alien technology, AMD takes a much more pragmatic approach.
17:35
Speaker 2
Right.
AMD looks at the existing X86 architecture and says instead of throwing the baby out with the bathwater, why don't we just widen the bathtub?
They created an architecture called X86-64, later known as AMD 64.
17:50
Speaker 1
They just extended the existing 32 bit registers to 64 bits.
17:54
Speaker 2
Exactly.
They took the standard 8 general-purpose registers, expanded them to 16, and widened them to handle 64-bit data.
It was an evolutionary step, not a revolutionary cliff.
18:04
Speaker 1
And the absolute brilliance of AMD64 is backwards compatibility.
18:08
Speaker 2
That was the killing blow.
With an AMD64 processor, you could install a new 64-bit operating system and access hundreds of gigabytes of RAM.
But, and this is the crucial part, you could also run your old, dusty 32-bit applications natively, right out of the box, with absolutely zero performance penalty.
18:27
Speaker 1
You didn't have to rewrite a single line of code until you were ready.
18:29
Speaker 2
It was an elegant, customer-first solution, and AMD deployed this architecture in 2003 with the Opteron server processor.
18:36
Speaker 1
And Opteron wasn't just a 64 bit hack, was it?
It fundamentally fixed a massive bottleneck in how servers handled memory.
18:43
Speaker 2
Right.
Historically processors had to communicate with a separate memory controller chip on the motherboard just to fetch data from the RAM.
It was a slow multi step trip.
18:52
Speaker 1
So what did Opteron do differently?
18:54
Speaker 2
Opteron took that memory controller and integrated it directly onto the processor die itself.
It drastically reduced latency.
The CPU could talk to the RAM almost instantly.
19:04
Speaker 1
OK, so meanwhile, what is Intel offering in the server space if customers are refusing to buy Itanium?
Intel tries to sell them Xeon chips based on their NetBurst architecture, right?
19:16
Speaker 2
And NetBurst was a total disaster for the data center.
NetBurst was designed purely to chase high clock speeds, to hit 3 gigahertz, 4 gigahertz.
To do that, Intel built incredibly long deep instruction pipelines.
19:28
Speaker 1
But you can't cheat physics forever.
19:29
Speaker 2
No, you can't.
Pushing silicon to those frequencies caused massive electrical leakage.
The chips consumed terrifying amounts of power and ran incredibly hot.
19:38
Speaker 1
So imagine you are a data center manager.
You are running thousands of servers.
Electricity and cooling are your absolute highest costs.
Intel comes to you and offers a NetBurst Xeon that runs so hot it might melt your server rack, or an Itanium chip that requires you to fire your IT staff and hire new ones to rewrite your software.
19:56
Speaker 2
Not great options.
19:57
Speaker 1
And then AMD walks in with the Opteron.
It runs cooler, it accesses memory faster, it natively supports 64-bit addressing, and it flawlessly runs every piece of software you already own.
20:07
Speaker 2
It was a slaughter.
Between 2003 and 2006, AMD captured roughly 25% of the lucrative X86 server market.
20:15
Speaker 1
That is unprecedented territory for AMD.
20:18
Speaker 2
Unprecedented.
They were stripping high-margin enterprise revenue directly out of Intel's pockets.
20:23
Speaker 1
And it forces Intel into a humiliating retreat.
The Itanic sinks, as they called it.
The market completely rejects IA-64.
Yep.
20:31
Speaker 2
Intel realizes that if they don't offer an X86-compatible 64-bit chip, AMD is going to take the entire server market.
20:38
Speaker 1
So the company that invented X86, the monopoly that dictated the standard for 30 years, is forced across the aisle to quietly license the 64-bit extensions from AMD.
20:49
Speaker 2
They implemented AMD's exact instruction set and called it Intel 64.
Architecturally, AMD had become the leader and Intel had become the clone maker.
21:00
The Rise of ARM and Intel's Mobile Miss
It's a stunning reversal, but as fascinating as this X86 server war is, we have to pull the camera back.
Because while Intel and AMD were bludgeoning each other over these massive, multi-hundred-watt desktop and server chips, a quiet, almost invisible revolution was happening in a completely different domain.
21:20
Speaker 2
We are entering the mobile era.
21:21
Speaker 1
Exactly.
When you are building a server, you plug it into a wall and blast it with industrial air conditioning.
You can afford to be inefficient, but if you are building a cell phone or a PDA, or eventually a smartphone, the fundamental physics of the device completely change.
21:34
Speaker 2
You are severely limited by a lithium ion battery.
21:37
Speaker 1
The absolute most critical metric is no longer raw speed, it is performance per Watt.
21:41
Speaker 2
And this paradigm shift perfectly aligned with a company that had been quietly cultivating a completely different philosophy of computing since 1990, a British company called ARM.
21:53
Speaker 1
Let's dive deep into ARM, because their business model is arguably the most disruptive innovation in the history of semiconductors.
22:00
Speaker 2
It really is.
22:01
Speaker 1
ARM stands for Advanced RISC Machines.
It was originally a joint venture spun out of Acorn Computers with backing from Apple and VLSI Technology, and under their CEO Robin Saxby, they made a decision that sounded absolutely crazy at the time.
22:17
Speaker 2
They decided not to manufacture chips.
22:19
Speaker 1
Which goes against everything we just talked about.
If you go back to Jerry Sanders at AMD, his famous quote was real men have fabs.
22:25
Speaker 2
Right.
The entire industry believed that to be a serious silicon company, you had to own the multibillion dollar fabrication plants that printed the silicon wafers.
It was a massive barrier to entry.
22:36
Speaker 1
So what did Saxby do?
22:37
Speaker 2
Robin Saxby looked at the economics of fabs and realized it was a CapEx nightmare.
The cost of lithography machines, the clean rooms, the chemical processing, it was skyrocketing.
So Saxby pioneered the intellectual property, or IP, licensing model.
22:53
Speaker 1
Meaning they just sell the ideas.
22:55
Speaker 2
Basically, yes.
ARM essentially became an architecture and design firm.
They created the blueprints, the instruction set and the logical layout of the processor, and then they stopped.
They licensed those blueprints to other companies.
23:08
Speaker 1
So if I'm Qualcomm or Texas Instruments or Samsung, I don't have to spend a billion dollars inventing a processor architecture from scratch.
23:16
Speaker 2
No, you just pay ARM an upfront licensing fee to use their blueprints.
You customize the design for your specific phone, modem or digital camera, and then you pay ARM a tiny royalty, maybe a few cents for every chip you physically manufacture and sell.
23:30
Speaker 1
That IP model unleashed a tidal wave of innovation.
Because ARM wasn't burdened by the massive capital expenses of running factories, they could focus entirely on perfecting their architecture.
23:41
Speaker 2
And because multiple companies were licensing ARM designs, you had a vibrant, highly competitive ecosystem.
Everyone was iterating rapidly trying to make the most power efficient ARM chip possible to win contracts for Nokia flip phones or Blackberries.
23:55
Speaker 1
And so ARM becomes the undisputed gold standard for low power devices.
And this leads us right back to the story we opened the show with.
24:02
Speaker 2
2005.
24:03
Speaker 1
It's 2005.
Steve Jobs is developing the iPhone in total secrecy.
He needs a processor that can run a desktop-class operating system, OS X, but run it on a tiny battery without melting the plastic chassis.
24:17
Speaker 2
So naturally, Jobs approaches Intel.
Apple and Intel were actually in the middle of a massive, highly successful transition, moving the Mac computer lineup away from PowerPC chips over to Intel's Core architecture.
They had a great relationship.
24:30
Speaker 1
Right.
So Paul Otellini, the CEO of Intel, has this historic opportunity to put Intel silicon into what will become the most successful consumer product of all time.
24:38
Speaker 2
You know, Otellini just...
24:39
Speaker 1
Says no.
Now, for years, the narrative around this was based on Otellini's own defense.
He claimed it was a purely economic decision.
He said Apple was demanding a fixed low price per chip, and Otellini looked at his spreadsheets and couldn't see how Intel would make a profit.
24:56
Speaker 2
Especially since nobody knew if this touchscreen phone would actually sell.
24:59
Speaker 1
Exactly.
25:00
Speaker 2
But it's the classic innovator's dilemma excuse.
The margins were too low, so we protected our high margin PC business.
But when you dig into the history, that excuse really falls apart.
It masks a massive organizational failure.
25:14
Speaker 1
Because as we discussed in the intro, Intel had the exact technology jobs needed.
25:19
Speaker 2
They owned the XScale division.
It was an ARM-based architecture that Intel acquired when they bought Digital Equipment Corporation's semiconductor assets.
Intel had 1400 engineers building high performance, low power ARM chips specifically for mobile devices.
25:33
Speaker 1
So why didn't Otellini just offer Jobs the XScale chip?
They already owned it.
25:37
Speaker 2
Because of internal corporate friction. Intel's culture was violently, obsessively dedicated to X86.
The X86 division generated the massive profits that paid for the fabs and all the executive bonuses.
25:50
Speaker 1
So XScale was like the unwanted stepchild.
25:52
Speaker 2
Totally.
The XScale team was viewed internally as an anomaly, almost an annoyance.
They were building chips based on a rival architecture, ARM, and selling into markets with much lower margins.
26:03
Speaker 1
And Otellini was trying to streamline the company.
He wanted to force X86 into every single device, from servers all the way down to cell phones.
26:11
Speaker 2
He fundamentally couldn't tolerate an architecture he didn't own, and he couldn't stomach the lower margins even if the volume was going to be astronomical.
26:19
Speaker 1
So he sells the XScale division to Marvell for $600 million in June of 2006.
Six months later, Jobs announces the iPhone.
26:28
Speaker 2
Powered, of course, by an ARM-based processor.
26:30
Speaker 1
Intel completely misses the mobile revolution because they were literally addicted to their own X86 monopoly.
26:37
Speaker 2
And that miss gave ARM the runway to conquer the world.
Today, essentially 100% of the world's smartphones run on ARM architecture.
And ARM didn't just sit still either.
They aggressively advanced the technology.
26:53
Speaker 1
Let's talk about big.LITTLE, because this was a huge leap forward in power management.
26:57
Speaker 2
Yeah, big.LITTLE was introduced around 2011, and it solved the fundamental paradox of mobile computing.
27:04
Speaker 1
Which is.
27:04
Speaker 2
Sometimes the phone needs to be incredibly fast, like when you are rendering a 3D game or loading a heavy web page.
But 90% of the time the phone is just sitting in your pocket checking for background emails or maybe just playing an audio file.
27:18
Speaker 1
And historically, you had one type of core doing all that work.
If it was fast, it burned battery.
If it was efficient, it was slow.
27:23
Speaker 2
Right, so ARM's big.LITTLE architecture put two entirely different sets of processing cores on the exact same piece of silicon.
27:32
Speaker 1
It's like having a V8 engine and an electric golf cart motor in the exact same car.
27:36
Speaker 2
That's a perfect analogy.
You have the big cores, massive, complex, high-performance designs, and you have the little cores, tiny, simple, highly power-efficient designs, and the brilliance was in the operating system scheduler.
27:50
Speaker 1
How?
27:51
Speaker 2
So the hardware and software work together to instantly route tasks to the appropriate core.
27:56
Speaker 1
So if I tap an app icon, the big cores wake up, blast the app open in milliseconds, and then immediately go to sleep.
28:03
Speaker 2
Exactly.
Then, as you're just scrolling through text, the little cores take over, sipping microscopic amounts of electricity.
28:10
Speaker 1
It maximized both peak performance and battery life.
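Conceptually, the scheduler's job reduces to one routing decision per task. This is a toy heuristic in Python, not how a real kernel scheduler (such as Linux's energy-aware scheduling) is actually implemented; the threshold and task fields are invented.

```python
# Toy big.LITTLE routing: interactive or heavy work goes to the big cores,
# background work goes to the LITTLE cores. A caricature of a real scheduler.
def route(task: dict) -> str:
    if task["interactive"] or task["load"] > 0.6:
        return "big"      # burst performance, high power draw
    return "LITTLE"       # modest throughput, sips power

tasks = [
    {"name": "launch app",      "interactive": True,  "load": 0.90},
    {"name": "sync email",      "interactive": False, "load": 0.10},
    {"name": "play audio",      "interactive": False, "load": 0.05},
    {"name": "render 3D frame", "interactive": False, "load": 0.80},
]
for t in tasks:
    print(f"{t['name']:>15} -> {route(t)} core")
```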
28:13
Apple's PowerPC to Intel Transition Challenges
And this relentless focus on architectural efficiency didn't just keep ARM dominant in mobile, it gave them the foundation to eventually attack Intel's last remaining stronghold.
28:23
Speaker 2
The data center.
28:24
Speaker 1
This is wild to me.
The phone chips are coming for the servers.
28:27
Speaker 2
Because the math changed. The massive hyperscalers, Amazon AWS, Google Cloud, Microsoft Azure, they realized that for highly parallel cloud workloads, like serving web pages or running microservices, having thousands of highly efficient ARM cores was actually vastly more cost-effective than using traditional power-hungry X86 Intel Xeons.
28:49
Speaker 1
The performance per Watt was just better.
28:51
Speaker 2
That's why you see Amazon designing their own custom ARM based server chips like the Graviton line.
ARM is basically eating the world from the bottom up.
29:00
Speaker 1
And the ultimate proof of ARM's capabilities brings us to Act 4 of this massive saga.
We have to talk about Apple.
29:07
Speaker 2
Apple is a unique beast in this story.
29:09
Speaker 1
Because Apple is the only major computing company on earth to successfully survive 3 completely separate existential CPU architecture transitions.
29:18
Speaker 2
Most companies would collapse attempting just one.
I mean, changing the fundamental instruction set of your computer means, by definition, breaking every single piece of third party software written for your platform.
29:29
Speaker 1
It's like performing a brain transplant on a patient while they are running a marathon.
29:34
Speaker 2
That's exactly what it's like, but Apple has always viewed the processor architecture merely as a means to an end.
They absolutely refuse to let their product vision be dictated by someone else's failing road map.
29:46
Speaker 1
If we trace it back, Apple launched the Macintosh in 1984 on Motorola 68K processors.
By the early 90s they realized Motorola was stalling, so Apple formed the AIM alliance, Apple, IBM, and Motorola, and transitioned to the PowerPC architecture.
30:01
Speaker 2
And for a while PowerPC was amazing.
The RISC-based design was incredibly fast.
But by the early 2000s, Apple hits a wall, and once again it all comes back to power consumption and heat.
30:12
Speaker 1
Because the computing market was shifting rapidly from desktop towers to laptops, people wanted thin, light portables.
30:18
Speaker 2
And Apple was relying on IBM to produce the PowerPC G5 chip.
The G5 was a monster in desktop machines, but it was a thermal nightmare.
It consumed so much power and generated so much heat that Apple practically had to engineer a liquid cooling system for their Power Mac towers just to stop them from melting.
30:35
Speaker 1
I remember Steve Jobs at the Worldwide Developers Conference in 2005.
He was brutally honest on stage.
He basically said we have tried everything.
We cannot figure out how to put a G5 processor into a laptop without burning your legs off.
30:50
Speaker 2
And IBM, because they didn't have a massive volume of other customers buying these chips, couldn't justify the billions in R&D required to shrink the G5 and make it mobile friendly.
31:00
Speaker 1
The physics didn't work and the economics didn't work, so Apple made the agonizing decision to abandon PowerPC and switch to their historic rival, Intel.
31:09
Speaker 2
Intel's new Core architecture at the time was highly focused on performance per watt.
31:14
Speaker 1
And that transition was brilliant.
It birthed the MacBook Air, the MacBook Pro.
But let's fast-forward a bit to the late 2010s.
31:21
M1: Apple's Vertical Integration and Rosetta 2
The honeymoon between Apple and Intel turns incredibly toxic.
31:25
Speaker 2
It broke down because Apple's entire business model relies on a metronomic, predictable cadence of yearly hardware improvements.
They need a faster, more efficient chip every single year.
But Intel's legendary manufacturing machine suddenly derailed.
31:38
Speaker 1
They hit a massive wall trying to shrink their transistor nodes down to 10 nanometers, right?
31:45
Speaker 2
We need to explain this because nanometers gets thrown around as a marketing term all the time, but it represents brutal physical manufacturing challenges.
31:53
Speaker 1
Go ahead, break it down.
31:54
Speaker 2
Historically, a node name like 10 nanometer roughly correlated to the physical size of the transistor gate.
The smaller the transistor, the more you can pack onto a piece of silicon.
More transistors means lower power consumption and higher performance. Makes sense?
But as you get down to 10 nanometers, the physical features are so unimaginably small that traditional lithography, the process of using light to etch patterns onto the silicon, stops working cleanly.
32:21
Speaker 1
Wait, the light itself is the problem.
32:23
Speaker 2
Exactly.
The light waves themselves are literally too wide to paint the tiny lines, so Intel tried to solve this using a technique called multi-patterning, where you expose the wafer multiple times to create complex overlapping patterns.
32:36
Speaker 1
But the defect rate was catastrophic, wasn't it?
32:38
Speaker 2
It was awful.
If a single alignment in the multi-patterning process was off by a fraction of a nanometer, the entire chip was ruined.
Intel's 10 nanometer yields were abysmal.
They delayed the architecture for years.
32:50
Speaker 1
And to make matters worse, the chips they were shipping to Apple, based on the aging 14 nanometer Skylake architecture, were notoriously buggy.
32:59
Speaker 2
The quality assurance was dropping fast.
Former Intel engineers have openly stated that Apple became the number one filer of bug reports for the Skylake architecture.
33:08
Speaker 1
Imagine you are Apple's hardware design team.
You spend years designing a gorgeous, ultra-thin, fanless laptop chassis, and you're relying on Intel to deliver a chip that runs cool enough to live inside that chassis.
33:22
Speaker 2
And Intel promises it.
33:23
Speaker 1
Right.
And then at the last minute, Intel says, sorry, the 10 nanometer chip is delayed again.
You have to use this older, hotter chip.
33:31
Speaker 2
Apple literally had to completely redesign the thermal architecture of their MacBooks late in the product cycle just to compensate for Intel's failures.
It resulted in laptops that ran hot, spun their fans constantly, and had terrible battery life.
33:44
Speaker 1
Steve Jobs once called Intel a steamship.
They were slow, rigid, and impossible to turn.
Apple decided they could no longer tie the future of the Mac to a steamship.
33:53
Speaker 2
And so in 2020, Apple drops the hammer.
They announced the transition to Apple Silicon.
They are dumping Intel X86 entirely and moving the Mac to custom chips designed completely in-house, utilizing their ARM architectural license.
34:09
Speaker 1
And this wasn't an overnight pivot.
This was a decade in the making.
Apple had been quietly building the most elite silicon design team on the planet, honing their craft by designing the incredibly powerful A series chips for the iPhone and iPad.
34:23
Speaker 2
They took all of that knowledge regarding high performance, low power mobile computing and scaled it up.
34:28
Speaker 1
The result was the M1 chip.
And the M1 wasn't just a different instruction set, it was a fundamental shift in system architecture.
Apple moved away from the traditional PC motherboard layout and built a system on a chip, or SoC.
34:41
Speaker 2
They took the CPU cores, the graphics processing unit, the neural engine for AI, and crucially the system memory and integrated all of it onto a single piece of silicon manufactured by TSMC on a cutting edge 5 nanometer process.
34:53
Speaker 1
The unified memory architecture is what really broke the brains of PC enthusiasts.
In a normal Intel PC, if the CPU needs to render graphics, it has to copy the data from its own system RAM, send it across the motherboard, and copy it into the discrete graphics card's VRAM.
35:10
Speaker 2
Which is slow and it burns a ton of power.
35:12
Speaker 1
Exactly. With the M1, the CPU and the GPU share the exact same pool of memory on the chip.
There is no copying.
The CPU computes the data, points to the memory address and the GPU instantly renders it.
35:25
Speaker 2
The performance and efficiency gains were staggering.
Suddenly you had a fanless MacBook Air that was outperforming heavy, power-hungry Intel desktop workstations, all while delivering 18 hours of battery life.
35:38
Speaker 1
It proved that deep vertical integration, where the same company designs the silicon hardware, the compiler and the operating system, is an insurmountable advantage.
35:47
Speaker 2
But you know, the hardware was only half the battle.
This brings up the multi billion dollar software question.
35:52
Speaker 1
Right, because when you fundamentally change the instruction set from Intel X86 to ARM, every single Mac application on Earth, Photoshop, Microsoft Word, Google Chrome instantly becomes incompatible.
36:05
Speaker 2
It's all broken.
36:06
Speaker 1
How did Apple execute this transition without causing a massive revolt from their users?
36:11
Speaker 2
They solved it with a piece of software engineering magic called Rosetta 2.
36:15
Speaker 1
We really need to dive deep into how this actually works, because it's not just a standard emulator.
36:20
Speaker 2
No, not at all.
A traditional software emulator reads an old instruction, say an X86 command, figures out what it means, translates it into an ARM command, and then executes it.
Doing this dynamically on the fly consumes a massive amount of processor overhead.
36:35
Speaker 1
Which makes the software feel incredibly sluggish.
36:37
Speaker 2
Rosetta 2 is an ahead-of-time translator.
When you download an old Intel app and double click the icon for the first time, Rosetta 2 intercepts it before the app even runs.
Rosetta reads the entire application code, translates all the X86 instructions into native ARM instructions, and saves a brand new optimized ARM version of the app on your hard drive.
36:58
Speaker 1
So it does all the heavy lifting of translation up front.
37:00
Speaker 2
Exactly. Once it's translated, the next time you open the app, it just runs as a native ARM binary.
That explains why it's so fast.
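Stripped to its bones, the ahead-of-time idea is: translate the whole binary once at first launch, cache the result, and run the native version forever after. This sketch fakes the actual binary translation with a trivial string rewrite; Rosetta 2's real translator is vastly more sophisticated.

```python
# Ahead-of-time translation, caricatured: pay the translation cost once at
# first launch, then reuse the cached native version on every later launch.
translated_cache = {}

def translate_x86_to_arm(x86_code):
    # Stand-in for real binary translation: just relabel each instruction.
    return [insn.replace("x86:", "arm:") for insn in x86_code]

def launch(app, x86_code):
    if app not in translated_cache:              # first launch only
        print(f"translating {app} ahead of time...")
        translated_cache[app] = translate_x86_to_arm(x86_code)
    return translated_cache[app]                 # later launches: cached, native

binary = ["x86:mov", "x86:add", "x86:ret"]
launch("OldIntelApp", binary)  # pays the one-time translation cost
launch("OldIntelApp", binary)  # instant: runs the cached ARM version
```

Contrast that with a classic emulator, which would redo the instruction-by-instruction translation on every single run.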
37:06
Speaker 1
But how do they handle the deeply complex underlying differences in the architectures?
37:11
Speaker 2
This is where Apple's absolute control over the hardware stack shines.
X86 and ARM handle basic math and logic differently.
Specifically, there is a concept in processor architecture called flags.
37:25
Speaker 1
OK, what are flags?
37:26
Speaker 2
Flags are tiny bits of memory that record the outcome of an operation. If a calculation results in a zero, a specific zero flag flips to true.
The X86 architecture has two very specific, deeply ingrained flags: the parity flag and the adjust flag.
37:43
Speaker 1
And the standard ARM architecture does not have direct equivalents for those.
37:46
Speaker 2
Correct, it doesn't have those specific behaviors.
37:48
Speaker 1
And if an old Intel program expects to see a parity flag after a math operation and it doesn't find one.
37:54
Speaker 2
The program will instantly crash.
37:55
Speaker 1
So what's the workaround?
37:57
Speaker 2
Now, Rosetta 2 could calculate the exact state of those missing parity and adjust flags purely in software every single time an instruction runs.
But doing that math in software would be computationally expensive.
It would cripple the performance of the translated app.
38:12
Speaker 1
So what did Apple do?
38:14
Speaker 2
Because they designed the M1 silicon themselves, they literally hardwired custom, non-standard instructions directly into the ARM processor specifically to calculate and store those X86 parity and adjust flags.
38:26
Speaker 1
They dedicated actual silicon to it.
38:28
Speaker 2
They dedicated bits 26 and 27 of the M1's flag register strictly to act like an Intel chip.
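To see why those two flags are painful to fake in software, here's what they actually compute, written out in Python. The definitions are real x86 behavior (parity of the low byte; carry out of bit 3); the overhead comes from needing this after essentially every translated arithmetic instruction.

```python
# x86's parity flag: set when the LOW BYTE of a result contains an even
# number of 1 bits. x86's adjust flag: set when an addition carries out of
# bit 3 (a relic of binary-coded-decimal math).
def parity_flag(result: int) -> bool:
    return bin(result & 0xFF).count("1") % 2 == 0

def adjust_flag(a: int, b: int) -> bool:
    return ((a & 0xF) + (b & 0xF)) > 0xF

print(parity_flag(0b0011))    # True:  two set bits -> even parity
print(parity_flag(0b0111))    # False: three set bits -> odd parity
print(adjust_flag(0x0F, 1))   # True:  the low nibble carries into bit 4
```

Cheap enough once, but multiplied across billions of translated instructions it adds up, which is exactly the cost Apple moved into dedicated silicon.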
38:34
Speaker 1
That is the ultimate flex.
They built physical microscopic circuitry into their brand new ARM chip just to help their software translator pretend to be an Intel chip faster.
38:43
Speaker 2
Yes, Rosetta 2 uses those hardware pathways to bypass the software translation penalty entirely.
This level of architectural agility, modifying the physical silicon to solve a software friction point, is something a company like Dell or HP, which just buy off-the-shelf chips from Intel, simply cannot do.
38:59
Speaker 1
And it worked flawlessly.
The transition felt like magic.
Users bought M1 Macs, downloaded their old Intel apps, and they ran perfectly, often faster than they ran on native Intel hardware.
39:11
Lisa Su's Turnaround: Zen and the Chiplet Revolution
Apple completely escaped the X86 gravity well.
39:14
Speaker 2
They really did.
39:16
Speaker 1
But as Apple ascends to these new heights, leaving Intel scrambling, we have to look back at the other historic player on the board.
We left AMD back in 2006 riding high on the success of the Opteron server chip, forcing Intel to adopt their 64 bit standard.
39:32
Speaker 2
But the semiconductor industry is unforgiving.
If the early 2000s were AM DS Golden age, the late 2000s and early 20 tens were their absolute dark age.
39:40
Speaker 1
They came perilously close to complete bankruptcy.
39:43
Speaker 2
Very close, and it all stems from a massive, highly controversial gamble in 2006.
Flush with cash from their server victories, AMD decides to acquire ATI Technologies.
39:54
Speaker 1
One of the world's leading designers of graphics processing units, or GPUs.
39:58
Speaker 2
Right.
And they paid $5.4 billion for it.
40:01
Speaker 1
Wow.
Now, the strategic vision behind the acquisition was actually brilliant, right?
Highly prophetic.
40:06
Speaker 2
It was Andy's management, led by CEO Hector Ruiz, realized that the future of computing wasn't just going to be standard CPU's.
They saw that graphics rendering and parallel processing were becoming deeply intertwined with standard computing tasks.
40:20
Speaker 1
So they envisioned creating what they called Accelerated Processing Units, or APUs.
The goal was to fuse a high-performance X86 CPU and a high-performance Radeon GPU onto the exact same piece of silicon.
40:34
Speaker 2
Right, which would reduce latency, lower power consumption and dominate the laptop and low cost desktop markets.
40:40
Speaker 1
I mean, the vision was completely correct.
We just talked about how Apple's M1 basically perfected that exact SoC concept.
The problem for AMD wasn't the vision, it was the timing and the execution.
40:50
Speaker 2
The timing was disastrous. Taking on $5.4 billion in debt to buy ATI left AMD highly leveraged right as the global financial crisis of 2008 was about to detonate, and simultaneously, the sleeping giant finally woke up.
Intel had finally abandoned the disastrous NetBurst architecture.
They brought in a new engineering team, actually the team from Israel that had designed their efficient laptop chips, and they scaled that architecture up.
They released the Core architecture.
41:18
Speaker 1
The Core 2 Duo and the subsequent Core i series processors.
41:21
Speaker 2
Right, and the Core architecture violently snatched the performance crown back from AMD.
It was vastly more efficient, vastly faster, and Intel's manufacturing fabs were churning them out on advanced nodes that AMD couldn't match.
41:35
Speaker 1
So AMD is suddenly fighting a horrific two-front war.
They are losing market share to Intel in the CPU space, they are fighting NVIDIA tooth and nail in the standalone GPU space, and they are drowning in billions of dollars of debt from the ATI acquisition.
41:50
Speaker 2
Quarter after quarter, AMD posted massive financial losses.
Their stock price plummeted to near penny stock levels, hovering around $2.00 a share.
They were functionally on the verge of death.
They had to make radical, painful amputations just to survive.
42:04
Speaker 1
Which leads to the end of Jerry Sanders's old mantra.
Real men might have fabs, but starving companies can't afford them.
42:11
Speaker 2
Exactly.
In 2009, AMD made the agonizing decision to sell off the very heart of their operations, their silicon manufacturing plants.
42:20
Speaker 1
Because keeping up with Moore's Law, upgrading fabrication facilities to print smaller and smaller transistors, had become a multibillion-dollar capital expenditure arms race.
42:30
Speaker 2
Only a massive, vertically integrated monopoly like Intel or a dedicated, high-volume, pure-play foundry like TSMC in Taiwan could afford the research and development required for extreme ultraviolet lithography and advanced node shrinks.
42:44
Speaker 1
So AMD spins off its fabs into an independent company called GlobalFoundries.
They essentially sell their factories to Abu Dhabi investors.
AMD becomes a fabless semiconductor company.
They...
42:55
Speaker 2
Still employ the architects who designed the chips, but they pay someone else to physically print the silicon wafers.
43:01
Speaker 1
Going fabless was a painful admission of defeat on the manufacturing front, but it was a necessary survival mechanism.
It stopped the bleeding.
43:07
Speaker 2
It freed up massive amounts of capital that they desperately needed to pay down debt and fund R&D.
43:13
Speaker 1
But even after going fabless, the processor designs were floundering throughout the early 2010s.
They released an architecture codenamed Bulldozer.
43:21
Speaker 2
And Bulldozer was a catastrophic failure.
It was power hungry, it ran hot, and single core performance was terrible.
Intel's market share in the lucrative server market crept back up to over 99%.
Intel essentially had a total monopoly again.
AMD needed a miracle.
43:37
Speaker 1
And in 2014, that miracle arrived in the form of Doctor Lisa Su.
43:41
Speaker 2
Taking over as CEO, she delivers what has to be studied in business schools as the greatest corporate turnaround in tech history.
43:48
Speaker 1
Doctor Su is a fundamentally different type of CEO, isn't she?
43:51
Speaker 2
Absolutely.
She has a PhD in electrical engineering from MIT.
She actually helped pioneer the use of copper interconnects in semiconductor manufacturing while at IBM.
She isn't a marketing executive.
She is a deeply technical, incredibly disciplined engineer.
44:05
Speaker 1
So what was her strategy?
44:07
Speaker 2
When she took over AMD, the company was fragmented, chasing too many shiny objects to try and find revenue.
She brought a brutal clarity to the road map.
She looked at the company and said, what is our actual DNA?
What are we uniquely good at?
44:19
Speaker 1
And the answer was high-performance computing: building complex CPUs and GPUs.
44:24
Speaker 2
She stopped wasting resources trying to fight ARM in the low margin smartphone space.
She refocused the entire company on desktops, enterprise servers and gaming.
44:34
Speaker 1
And crucially, she secured strategic partnerships that provided a lifeline.
AMD won the contracts to design the custom APU silicon that powered both the Sony PlayStation 4 and the Microsoft Xbox One.
44:47
Speaker 2
Those console deals were masterstrokes.
They didn't have massive profit margins, but they provided a massive, predictable, steady stream of revenue.
That console money kept the lights on at AMD while Doctor Su sent her best engineers, including the legendary architect Jim Keller, who returned to the company, into the bunker to design a completely new architecture from scratch.
45:06
Speaker 1
And in 2016, they unveiled it: the Zen microarchitecture.
45:09
Speaker 2
The industry was deeply skeptical.
AMD had promised turnarounds before and failed, but AMD claimed Zen would deliver a massive 40% leap in instructions per clock over the disastrous Bulldozer chips.
45:22
Speaker 1
Did they hit the target?
45:23
Speaker 2
When the first Zen processors, branded as Ryzen, actually hit the market in 2017, they over-delivered.
They achieved a staggering 52% IPC leap.
Suddenly AMD was offering highly competitive multi-core performance at prices that severely undercut Intel.
45:39
Speaker 1
Zen put AMD back in the game, but the real genius of the Lisa Su era, the innovation that fundamentally disrupted the economics of chip manufacturing and allowed AMD to completely blindside Intel in the data center, was their shift to a chiplet strategy.
45:54
Speaker 2
We really need to unpack chiplets because it is an elegant solution to a terrifying physics and economics problem.
45:59
Speaker 1
Yes, to understand chiplets you have to understand how Intel was building their massive Xeon server processors.
Intel used a monolithic design.
46:07
Speaker 2
Meaning they tried to etch all 28 or 32 processor cores, all the cache memory, and all the memory controllers onto one single massive continuous square of silicon.
46:16
Speaker 1
Now, silicon wafers, the giant disks they print the chips on, always have microscopic, unavoidable defects.
A tiny speck of dust.
A slight chemical imperfection.
46:27
Speaker 2
And the larger your monolithic chip is, the higher the mathematical probability that a random defect will land somewhere on that chip during manufacturing.
And if a critical defect lands on your massive 32 core die, the entire chip is ruined.
You have to throw it in the trash.
46:43
Speaker 1
So as Intel tried to add more and more cores to compete, their die sizes grew and their yields, the percentage of good chips they got from a single wafer, plummeted.
Their manufacturing costs skyrocketed exponentially.
46:55
Speaker 2
The analogy I always use is ice sculpting.
46:57
Speaker 1
I love this one.
46:58
Speaker 2
Imagine Intel is an artist trying to carve a massive, incredibly detailed, life-size horse out of a single giant block of ice.
It takes weeks of painstaking work, but if you hit one internal crack, one weak spot in that giant block, the whole sculpture shatters, it's ruined, and you lose all that time and money.
47:14
Speaker 1
That is the monolithic trap.
So what did AMD do with the chiplet strategy?
47:18
Speaker 2
They fundamentally changed the art form.
Instead of trying to carve one massive horse out of a single block, AMD started carving smaller, perfectly formed identical blocks of ice.
And then they fused those smaller blocks together to build the final shape.
47:32
Speaker 1
Specifically, instead of manufacturing one giant 32-core die, AMD manufactured small, simple 8-core chiplets.
47:41
Speaker 2
Exactly. Because these 8-core chiplets were physically small, the chances of a random wafer defect ruining one were incredibly low.
AMD's yields were massive.
They were getting almost entirely usable silicon from every wafer, which made manufacturing them incredibly cheap.
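You can see the yield advantage with the standard back-of-the-envelope defect model, yield = e^(-area x defect density). The die areas and defect density below are round, invented numbers, not AMD's or Intel's actual figures.

```python
import math

# Classic Poisson yield model: yield = exp(-die_area * defect_density).
# Illustrative round numbers: one huge monolithic die vs. small chiplets.
DEFECT_DENSITY = 0.2  # defects per square centimeter (invented)

def yield_rate(area_cm2: float) -> float:
    return math.exp(-area_cm2 * DEFECT_DENSITY)

monolithic    = yield_rate(7.0)   # a huge ~700 mm^2 32-core die
chiplet       = yield_rate(0.8)   # a small ~80 mm^2 8-core chiplet
four_chiplets = chiplet ** 4      # a 32-core part needs 4 good chiplets

print(f"monolithic 32-core die:   {monolithic:.1%}")     # ~24.7%
print(f"single 8-core chiplet:    {chiplet:.1%}")        # ~85.2%
print(f"four good chiplets (32c): {four_chiplets:.1%}")  # ~52.7%
```

Even needing four good chiplets per 32-core part, the modular approach roughly doubles the effective yield here, and reality is kinder still, because a defective chiplet wastes only its own tiny patch of wafer instead of a giant die.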
47:56
Speaker 1
But chopping a processor into tiny pieces creates a massive engineering hurdle.
How do you make 4 separate pieces of silicon talk to each other fast enough that the operating system thinks it's dealing with one giant processor?
If the latency between the pieces is too high, the performance collapses.
48:11
Speaker 2
That was AMD's secret weapon.
They developed a proprietary high-speed interconnect technology called Infinity Fabric.
48:18
Speaker 1
Infinity Fabric.
48:19
Speaker 2
It is a remarkable high-bandwidth communication protocol that binds these separate chiplets together across the processor package.
It allows data to flow between the separate silicon dies so quickly and seamlessly that it essentially mimics the performance of a monolithic chip.
48:34
Speaker 1
The economic and scaling advantages of this were absolutely devastating to Intel.
Because AMD had this modular Lego block system, they could scale up to massive core counts rapidly.
By 2019, they released the second generation of their EPYC server chips based on the Zen 2 architecture.
48:50
Speaker 2
They took eight of those 8-core chiplets, glued them together with Infinity Fabric around a central input/output die, and dropped a 64-core processor onto the market.
48:59
Speaker 1
64 cores and 128 threads on a single processor package, achieved in chips like the EPYC 7742.
Intel simply could not mathematically manufacture a monolithic chip that large.
Their costs would have been astronomical.
49:12
Speaker 2
So Intel was stuck offering 28 core Xeons that cost an absolute fortune and consumed massive amounts of power.
Meanwhile, AMD walked into the data centers offering more than double the core count, vastly superior multi threaded performance, and lower power consumption, all at a highly disruptive price point.
49:31
Speaker 1
It fundamentally broke the server market.
It was so disruptive that enterprise software companies actually had to rewrite their billing models.
49:38
Speaker 2
Oh, this is a fascinating detail.
Virtualization companies like VMware historically charged enterprise customers per physical processor socket on the server board.
The assumption was that one socket equaled a predictable, reasonable amount of computing power.
49:53
Speaker 1
Right, but suddenly a customer could buy a single socket server with an AMD EPYC processor that had 64 cores.
That one chip had enough compute power to run the entire IT infrastructure of a mid sized business.
50:05
Speaker 2
VMware and other software vendors were terrified they were going to lose millions of dollars in licensing fees because customers only needed one socket instead of four.
They literally had to change their licensing terms to charge per core just to compensate for the sheer density of AMD's chiplet architecture.
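To see the arithmetic that scared the software vendors, here's a minimal sketch with purely hypothetical license prices (not VMware's actual rates):

```python
# Hypothetical license prices, purely illustrative -- not any vendor's real rates.
PRICE_PER_SOCKET = 4000  # old model: flat fee per physical CPU socket
PRICE_PER_CORE = 100     # new model: fee scales with core count

def license_cost(sockets, cores_per_socket, per_core=False):
    """Total virtualization license cost for one server under either model."""
    if per_core:
        return sockets * cores_per_socket * PRICE_PER_CORE
    return sockets * PRICE_PER_SOCKET

# Old world: 64 cores meant a 4-socket server full of 16-core Xeons.
print(license_cost(sockets=4, cores_per_socket=16))                  # 16000
# Chiplet world: the same 64 cores fit in a single EPYC socket...
print(license_cost(sockets=1, cores_per_socket=64))                  # 4000
# ...so vendors moved to per-core billing to recover the lost revenue.
print(license_cost(sockets=1, cores_per_socket=64, per_core=True))   # 6400
```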
50:22
That is the definition of industry shaking disruption.
50:25
Speaker 1
Driven by the relentless execution of Dr. Su's roadmap, the high yields of the chiplet strategy, and TSMC's superior manufacturing nodes, AMD executed a flawless comeback.
By late 2025, they'd captured over 35% of the desktop market and were approaching 30% of the highly lucrative server market.
50:44
They stripped Intel of its technological halo.
50:46
CUDA: NVIDIA's Bet on Parallel Computing and AI
It's an incredible story of survival.
50:48
Speaker 1
Now before we wrap up, we have to weave in one more critical thread.
Because while Intel and AMD were fighting over X86 and ARM was dominating mobile, another company was charting a completely parallel path that would eventually eclipse all of them in market value.
We have to talk about NVIDIA.
51:04
Speaker 2
NVIDIA is a fascinating contrast because their journey is also defined by massive early failures and an eventual incredible strategic pivot.
51:12
Speaker 1
Founded by Jensen Huang, Chris Malachowsky, and Curtis Priem in 1993, NVIDIA didn't start with CPUs, they started with graphics, and their very first product, the NV1, was a total disaster.
51:24
Speaker 2
It was a classic case of betting on the wrong architectural standard.
In the mid 90s, the industry was trying to figure out how to render 3D graphics on PCs.
NVIDIA designed the NV1 based on rendering quadratic surfaces, essentially curved shapes.
51:39
Speaker 1
But Microsoft, realizing the need for standardization, released the DirectX API, which relied entirely on rendering flat triangles, or polygons.
NVIDIA's chip was fundamentally incompatible with the direction the entire software industry was moving.
51:53
Speaker 2
They nearly went bankrupt right out of the gate.
Jensen Huang had to lay off half the staff.
But he learned a brutal lesson about the importance of aligning with software ecosystems.
They pivoted hard, embraced polygons, and released the RIVA 128, which saved the company.
52:07
Speaker 1
And that willingness to pivot came to define NVIDIA.
Fast forward to the late 2000s.
During the mobile boom we discussed earlier, NVIDIA tried to get into the smartphone processor game with their Tegra chips.
They wanted a piece of the mobile pie.
52:19
Speaker 2
But they hit a wall.
To sell a mobile chip, you don't just need a good CPU and GPU, you need an integrated cellular modem.
And Qualcomm had a total stranglehold on CDMA modem patents.
NVIDIA realized they couldn't win a protracted war in mobile against Qualcomm and Apple.
52:36
Speaker 1
So Jensen Huang made a very difficult but ultimately prescient decision.
He exited the smartphone market entirely.
He took the Tegra architecture and pivoted it toward markets where high performance visual computing was required, but the cellular modem wasn't the bottleneck.
52:51
Specifically, high end automotive infotainment systems and the Nintendo Switch console.
52:58
Speaker 2
But the truly historic pivot, the gamble that changed the world, happened in 2006, when NVIDIA released an architecture called CUDA: Compute Unified Device Architecture.
53:07
Speaker 1
To understand why CUDA is so important, we have to look at the difference between a CPU and a GPU.
A traditional CPU, like an Intel X86 chip, is a serial processor: it has a few very powerful, complex cores designed to execute 1 instruction at a time incredibly fast.
It's like having four genius mathematicians in a room.
53:24
Speaker 2
Right.
A GPU is a parallel processor.
It has thousands of very simple, small cores.
It's like having 10,000 high school students in a room.
53:33
Speaker 1
If you ask the four geniuses to solve a complex calculus equation, they will do it instantly.
But if you ask them to simultaneously paint 10,000 pixels on a screen, they will struggle because they have to do it 1 by 1.
53:46
Speaker 2
The 10,000 high school students, however, can each grab a paintbrush and paint one pixel simultaneously.
The screen is rendered instantly.
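The analogy maps directly onto code. In this minimal sketch, the serial loop plays the four geniuses painting one pixel at a time, while the single vectorized NumPy operation stands in for the 10,000 students painting at once; on a real GPU, each pixel would literally get its own hardware thread.

```python
import numpy as np

WIDTH, HEIGHT = 100, 100
COLOR = 255  # fill the whole frame with one brightness value

# "Four geniuses" style: one pixel at a time, strictly in sequence.
frame_serial = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
for y in range(HEIGHT):
    for x in range(WIDTH):
        frame_serial[y, x] = COLOR  # 10,000 sequential steps

# "10,000 students" style: one bulk operation; on a GPU every pixel
# would be handled by its own thread simultaneously.
frame_parallel = np.full((HEIGHT, WIDTH), COLOR, dtype=np.uint8)

assert np.array_equal(frame_serial, frame_parallel)  # same result, very different path
```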
53:54
Speaker 1
Exactly.
GPUs were designed purely to push pixels for video games, but Jensen Huang realized that thousands of parallel cores could be used for other types of math, specifically the massive matrix multiplications required for scientific simulations, weather modeling, and eventually, artificial intelligence.
54:11
Speaker 2
But scientists couldn't easily program a graphics card to do math because the hardware only understood graphics commands.
54:17
Speaker 1
So NVIDIA spent billions of dollars developing CUDA.
CUDA is a software layer that allows standard programmers using languages like C++ to speak directly to the thousands of cores on a GPU and assign them general purpose mathematical tasks.
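To give a flavor of what that looks like in practice, here's a minimal sketch of a CUDA-style kernel written with Numba's Python CUDA bindings rather than the C++ interface mentioned above; it assumes a CUDA-capable GPU and the numba package installed.

```python
from numba import cuda
import numpy as np

@cuda.jit
def add_vectors(a, b, out):
    # Each GPU thread computes exactly one element -- thousands run at once.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_vectors[blocks, threads_per_block](a, b, out)  # Numba copies the arrays to the GPU and back

assert np.allclose(out, a + b)
```

The key idea is the inversion of control: instead of a loop walking through elements one by one, you describe what a single thread does and launch a million of them.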
54:31
Speaker 2
Wall Street thought Jensen was crazy.
NVIDIA's profits took a hit as they poured money into CUDA research for years without obvious returns.
They were basically building a massive software moat around their hardware, waiting for an industry that didn't exist yet to catch up.
54:46
Speaker 1
And then the deep learning revolution happened.
Researchers realized that training neural networks, the foundation of modern AI, required exactly the kind of massive parallel matrix multiplication that NVIDIA's GPUs running on the CUDA software stack were perfectly designed to execute.
55:03
Speaker 2
NVIDIA had essentially spent a decade building the exact shovels needed for the AI gold rush before anyone knew there was gold in the hills.
They became the undisputed hardware monopoly for artificial intelligence, vastly eclipsing the market caps of Intel and AMD.
55:18
Speaker 1
It perfectly illustrates our overarching theme.
While Intel was obsessively defending the X86 CPU moat, NVIDIA realized that the future of computing was going to require fundamentally different, highly specialized architectures.
They leaned into parallel processing, suffered the short term financial pain to build the software ecosystem, and reaped the ultimate reward.
55:37
Monopolies, Agility, and the Future of Silicon
So as we pull back and look at the entirety of this 50 year history, from Gordon Moore to Lisa Su, from the IBM PC to the Apple M1 to the massive AI data centers, what is the core truth we can extract?
55:50
Speaker 1
The inescapable truth is that technological superiority is fragile, and monopolies breed fatal blind spots.
Intel possessed the most dominant, lucrative franchise in hardware history with the X86 architecture.
They had billions in the bank and the best manufacturing facilities on the planet.
56:08
But their addiction to their own profit margins and their arrogance in assuming they could dictate the future to the software industry repeatedly caused them to stumble.
56:16
Speaker 2
It blinded them to the 64 bit transition, allowing AMD to disrupt the server market.
It blinded them to the mobile revolution, allowing ARM to conquer the globe.
56:24
Speaker 1
Survival belongs to the agile.
It belongs to the companies that view their architectures as tools, not religions.
Apple survived 3 architecture transitions because they ruthlessly prioritized the user experience and power efficiency over legacy allegiances.
When Power PC ran too hot, they cut it.
56:41
When Intel stumbled on 10 nanometer manufacturing, Apple cut them and leveraged their own ARM expertise to vertically integrate.
56:48
Speaker 2
And AMD survived near extinction by constantly reinventing their business model.
They transitioned from a mere second source clone maker into a true architectural innovator.
When the financial burden of owning fabs became lethal, they abandoned the old dogma and went fabless.
They revolutionized processor economics with the chiplet strategy and Infinity Fabric, recognized their core strengths, and executed a flawless comeback.
57:11
Speaker 1
It is an incredibly dynamic, brutal landscape, but I want to leave you with a final thought to consider.
For half a century, these corporate wars have been fought fiercely over instruction sets, the foundational languages of the chips.
It was X86 versus PowerPC, Intel versus AMD, X86 versus ARM.
57:29
But as we accelerate into a future defined by the cloud and massive artificial intelligence workloads, we have to ask, is the underlying CPU architecture even going to matter to the end user?
57:40
Speaker 2
It's a profound shift.
With software virtualization becoming so advanced and heavy workloads being offloaded to distant data centers or specialized AI accelerators like NVIDIA GPUs, the specific instruction set of the local CPU is becoming increasingly abstracted.
57:55
Speaker 1
Are we rapidly approaching an era where the software entirely hides the silicon?
An era where you simply request compute power and the cloud dynamically routes your task to whatever piece of silicon is most efficient at that exact millisecond, whether it's an ARM core, an X86 chiplet, or a custom tensor core, without you ever knowing or caring.
58:14
Speaker 2
It makes you wonder if this brutal 50 year war over instruction sets is destined to become a relic of the past, operating invisibly beneath the surface of our digital lives.
58:24
Speaker 1
Take another look at that device sitting on your desk.
The battles that forged its silicon brain shaped the modern world.
But the next war might render that brain completely invisible.
Keep diving deep.