Posted by scrlk 19 hours ago
Apple will gain increasingly needed diversification.
US supply chain gets a boost.
Should be fine for TSMC in the short to medium term. Apple is not going to risk the actual mainline iPhone SoC on Intel any time soon, so the lion's share of TSMC's Apple revenue will be fine.
It's not really realistic to make Mac, Watch, iPad chips on TSMC's best node in the next 3-4 years - assuming there is no collapse in AI. Unfortunately, this might mean we will get inferior Intel chips for our Macs. Intel nodes, as it stands, are far more power hungry, less dense, and lower yielding. Intel's own Panther Lake CPU tile is on 18A and it's extremely disappointing in terms of perf/watt and raw perf.
I still expect iPhone chips to be made on the best TSMC nodes though. I'm assuming Apple will design every future core for both TSMC and Intel, sort of like how they dual sourced with TSMC and Samsung in the past for the same generation.
Panther Lake does not have great raw performance because, for now, Intel has not managed to reach clock frequencies in its new 18A CMOS process as high as those achieved in the older TSMC 3 nm process used for the previous Arrow Lake H CPU generation. The CPU cores of Panther Lake also have only minor changes that could affect performance in comparison with Arrow Lake/Lunar Lake.
On the other hand, from the published reviews that I have seen, Panther Lake has significantly better performance per watt than Arrow Lake H, which can be attributed only to the Intel 18A process when compared with the TSMC 3 nm process.
The energy efficiency i.e. performance per watt ratio of CPUs is mainly determined by the fabrication process and not by the CPU design, as long as the CPU designers are competent enough (unlike the single-thread performance, which is determined mainly by the CPU design).
So there is no doubt that Apple CPUs made with the Intel 18A process will have better performance per watt than those made with a TSMC 3-nm process. Moreover, because Apple CPUs can reach a given level of performance at lower clock frequencies, they should be much less affected by the lower clock frequencies attainable with Intel 18A than the Intel CPUs.
We also do not know whether Apple intends to use the Intel 18A process (currently used for Panther Lake laptop CPUs and Clearwater Forest server CPUs), or only its successor, Intel 14A.
The most important one for efficiency is ST perf/watt. MT perf/watt is largely determined by how many cores there are: you can achieve better MT perf/watt simply by having more cores (more transistors) and running them at lower clocks. Panther Lake also has an entirely new MT config with 3 tiers of cores vs 2 for Arrow Lake.
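The "more cores at lower clocks" point can be sketched with a toy model (all numbers below are invented for illustration, including the voltage-frequency curve): dynamic power scales roughly with f·V², and voltage has to rise with frequency, so per-core power grows much faster than per-core throughput.

```python
# Toy model: dynamic power ~ f * V^2, and voltage must rise
# roughly with frequency, so power grows super-linearly with
# clock while per-core throughput grows only linearly.
# All numbers here are illustrative, not measured.

def core_power(freq_ghz: float) -> float:
    """Relative dynamic power of one core at a given clock."""
    volts = 0.6 + 0.1 * freq_ghz   # crude assumed V-f curve
    return freq_ghz * volts ** 2

def perf_per_watt(cores: int, freq_ghz: float) -> float:
    throughput = cores * freq_ghz          # relative MT perf
    power = cores * core_power(freq_ghz)   # relative watts
    return throughput / power

fast = perf_per_watt(cores=8, freq_ghz=5.0)    # few cores, high clock
wide = perf_per_watt(cores=16, freq_ghz=2.5)   # 2x cores, half the clock

print(f"8c @ 5.0 GHz : {fast:.3f} perf/W")
print(f"16c @ 2.5 GHz: {wide:.3f} perf/W")
# Same total throughput in this model, but the wide config wins
# on perf/W because each core runs at a lower voltage.
```

This is why comparing MT perf/watt across chips with different core counts and configs says little about the node itself.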
For ST perf/watt, it loses to LNL.
Keep in mind that LNL and Arrow Lake used N3B, and future N3 nodes have been much more efficient. Panther Lake CPU is also a new design which should have improved perf/watt automatically regardless of node.
Based on this, one can deduce that Intel 18A is likely a bit worse than N3B and perhaps equivalent to N4P. Keep in mind that N3B went into production in late 2022 and N4P was a 2021 node.
I don’t think Nvidia even has an N2 chip announced; could be wrong, though.
Nvidia’s chips aren’t usually on the latest nodes.
Not yet. The primary reason is that most AI chips are full reticle sized, which means first-year yields likely won't be very cost effective. It takes a new node a few years to fully mature in terms of yield. Little iPhone A-series and server CPU chiplets are perfect for new nodes. That said, Nvidia will certainly try to move smaller and lower volume chips in future generations to the most cutting edge node, such as their CPUs and networking chips. Vera Rubin has 7 unique chips. They don't need to be all on the same node, and they're not.
AMD is taking up much of the N2 supply with their Epyc CPUs this year. There is no doubt in my mind that Nvidia, ARM, Graviton will try to book as much of the most cutting edge node as possible for their future enterprise CPUs given that AMD has done it for N2. I can see enterprise CPUs becoming equal launch partners to TSMC nodes as Apple. Agentic AI is going to cause a huge demand increase in CPUs.
So far Apple has been aggressive about stopping production of older processors, but as we see with MacBook Neo, even the lowest chips are increasingly overpowered for many users.
I expect even if Intel can’t match TSMC’s latest, they’ll be able to produce one or two generations behind at low cost and high volume. (I worked at Intel for 5 years or so, that phrase would have gotten me fired back then)
I do expect personal AI machines to take off in a few years once local models and local hardware hit an inflection point. M5 Max is a major improvement for local inference due to the added matmul accelerators, but the RAM capacity and bandwidth bottleneck is huge.
That said, enterprise AI chips will still take the cake in terms of margins.
But Apple is about B2C, and customers buy services. All of us know, deep down, that Apple Intelligence will be a subscription service offered as part of Apple One. The Mac won't become the magical backbone for your personal inference network, it's a product used to consume Apple Services first and everything else comes second.
Apple was reported to have locked up half of the initial year's 2nm production, which is lower than their share of 3nm, but hardly a sign of being squeezed out of the market.
Apple was actually told by TSMC to move off of N3 asap because Nvidia with its Vera Rubin and Google TPUs will take over.
Semianalysis had a great and detailed article about TSMC & Apple and how the future might play out: https://newsletter.semianalysis.com/p/apple-tsmc-the-partner...
https://www.macrumors.com/2025/08/28/apple-tsmc-2nm-producti...
The MacRumors article definitely isn't from when Apple decided to buy half of N2 capacity for the first year. That decision would have happened years earlier.
It doesn't even beat Lunar Lake in efficiency (made on TSMC N3B) released in 2024.
[0] https://www.notebookcheck.net/Intel-Panther-Lake-Core-Ultra-...
Both the absolute performance and the performance per watt in single-thread benchmarks are determined mainly by the CPU design and they are only slightly constrained by the CPU fabrication process.
Only the multithreaded benchmarks are useful for comparing CMOS fabrication processes. The performance in multithreaded benchmarks (with a given cooling system) is limited mainly by the energy required to switch a logic gate, which is a characteristic of the fabrication process, and it depends only weakly on the CPU design, as long as the CPU design does not have obvious mistakes.
In multithreaded benchmarks, CPUs work at a fixed power consumption, determined by the maximum allowable temperature and the cooling system. A fixed power means a fixed number of gates that switch per second. The completion of a given benchmark requires a similar number of gate switchings in well designed CPUs, in which case the performance in such a benchmark is fully determined by the fabrication process. Deviations from proportionality appear when some CPUs need much less gate switchings than others to complete some work, which happens for example when a CPU has wider vector or matrix execution units, e.g. by supporting AVX-512 or SME or AMX.
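The fixed-power argument above can be put in numbers: at a capped package power, gate switches per second equal power divided by energy-per-switch, so a benchmark that needs a fixed number of switchings finishes in time proportional to the process's switching energy. A minimal sketch, with made-up energy and workload figures:

```python
# Sketch of the fixed-power argument: at a capped package power,
# gate switches per second = power / energy_per_switch, so a
# benchmark needing a fixed number of switchings finishes in time
# proportional to the process's energy per switch.
# All values below are invented for illustration only.

POWER_BUDGET_W = 28.0          # same cooling system for both chips
SWITCHES_NEEDED = 5e17         # work in the benchmark (assumed)

ENERGY_PER_SWITCH_J = {        # hypothetical process figures
    "process_A": 1.0e-16,
    "process_B": 0.8e-16,      # 20% less energy per switch
}

for name, e_sw in ENERGY_PER_SWITCH_J.items():
    switch_rate = POWER_BUDGET_W / e_sw        # switches per second
    runtime_s = SWITCHES_NEEDED / switch_rate
    print(f"{name}: {runtime_s:.2f} s to finish the benchmark")
# At the same watts, the lower-energy process finishes the same
# work ~20% sooner, i.e. MT performance tracks the process.
```

Under these assumptions the MT performance ratio is just the inverse ratio of the switching energies, which is the comparison being made between 18A and N3.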
ST is far better than MT for this node comparison. MT is heavily influenced by core count, clock speed, core configuration. Panther Lake also has 3 tiers of cores compared to Arrow Lake's 2. The architecture for MT is entirely different.
Meanwhile, for ST, a core is a core. It's much less affected by architectural changes to core configurations.
> With the new Panther Lake mobile processors, Intel has managed to successfully combine the two previous generations, Arrow Lake and Lunar Lake, as the performance is even better than with Arrow Lake, while efficiency has been improved at the same time. Even with low power limits, the performance is very competitive, and Intel (in conjunction with the new GPUs) is therefore the better choice for slim laptops.
Their benchmarks say LNL is more efficient.
LNL has a much lower power consumption in the memory interface, like the Apple CPUs, which has nothing to do with the fabrication process. Also LNL is a lower performance CPU, for which it is normal to have better energy efficiency.
Only the comparison between Panther Lake and Arrow Lake H, which have equivalent structures, can be used to compare the Intel 18A and the TSMC 3-nm fabrication processes.
This comparison shows that Intel 18A ensures a better performance per watt, i.e. energy efficiency, which leads to a better multithreaded performance, but the TSMC 3-nm process, at least for now, allows higher maximum clock frequencies, which make possible a higher single-thread performance.
> Only the comparison between Panther Lake and Arrow Lake H, which have equivalent structures, can be used to compare the Intel 18A and the TSMC 3-nm fabrication processes.
Panther Lake uses a new core design which likely contributed to better perf/watt regardless of which node was used. For example, Zen3 had a 19% increase in IPC despite being on the same N7 family node as Zen2. Panther Lake has 3 tiers of cores instead of 2 in Arrow Lake. The MT design is very different. New core and layout designs can make a huge difference in efficiency on the same node.
We should compare ST perf/watt instead of MT. MT has too many factors, including core count, die size, transistor count, and clock speed. Based on ST perf/watt, Intel 18A is likely a bit worse than N3B (a 2022 node) and a bit better than N4P (a 2021 node).
The Panther Lake cores, i.e. Darkmont and Cougar Cove are the Arrow Lake/Lunar Lake cores, i.e. Skymont and Lion Cove, ported from the TSMC 3 nm to the Intel 18A fabrication process.
The Panther Lake cores have only minor changes, i.e. bug fixes and the addition of a new mechanism for interrupts and exceptions, FRED. A preliminary version of FRED is likely to have already been implemented on Arrow Lake/Lunar Lake, but if so it was disabled there after production.
In any case FRED will not cause improvements in the present benchmarks, as it is used only inside the operating system and the current operating systems are unlikely to have been updated to use it anyway.
In contradiction with what you say, ST performance or performance per watt cannot be used to compare fabrication processes; only the multithreaded performance can be used for this purpose.
Single-thread performance is affected by a lot of factors that have nothing to do with the fabrication process, but all those have little or no influence on multithreaded performance.
The reason is that in any well optimized MT workload, the CPU runs at a constant power consumption. This eliminates the influence of all factors mentioned by you.
I have already explained in another comment that a constant power consumption means a constant number of gate switchings per second, which is determined by the energy required to switch a logical gate, which is a characteristic of a fabrication process.
When a given amount of work is done by a benchmark using the same algorithm, well-designed CPUs will need approximately the same number of gate switchings to complete the work, regardless of the number of cores included in a CPU.
Significant variations of the numbers of gate switchings can be caused only by architectural differences like the width of vector and matrix execution units. Smaller variations are caused by various quality characteristics of a CPU core design, like the frequencies of branch mispredictions and of cache misses, which should be similar for CPU design teams that do not differ much in competence.
When we compare equivalent cores in different fabrication processes, like Arrow Lake H vs. Panther Lake, the multithreaded benchmarks are almost unaffected by anything else except the fabrication process, assuming that the cooling systems are also equivalent.
This makes no sense, because in ST perf/watt we're already normalizing by watts. And Intel is both at the same time: the AMD and the TSMC.
Other commenters in the thread talk about how Intel's node is simply inferior to TSMC's and will bottleneck the performance of the same chip designs simply by being bad. I hope that is not the case and/or that I won't have to settle for an Intel node inside my Apple chips. (They better not try to pull an AMD where some chips simply have utterly kneecapped performance for no good reason.)
Apple aren’t going to be asking for Intel Inside.
It’ll be more like ‘Can you make this thing? How many, and for how much?’
Not to mention that Intel does not and will not any time in the next decade have the capacity for a product of that quantity.
There was a recent interview with Dylan Patel and he explained it pretty well.
Basically, there are tiers of risk, defined by how "AGI pilled" each tier is. The bottlenecks and supply constraints get worse and worse as you go down the tiers.
Tier 1: OpenAI/Anthropic - extremely AGI pilled and think it's a sure thing. They want all layers underneath to prepare to make as many chips as possible and go all in.
Tier 2: Nvidia/AMD/Broadcom - very bullish but doesn't think AGI is a sure thing
Tier 3: TSMC, Samsung, SK Hynix, Intel, Sandisk, Micron - bullish but if they're wrong and overbuild, they can actually go bankrupt. Each fab can cost tens of billions. An N2 fab is estimated to be $30b each.
Tier 4: Every supplier to T3 such as ASML, Applied Materials, other fab machines and suppliers - Less bullish, may even see this as just a super cycle rather than a permanent increase in demand so they're less inclined to take too many risks to scale up
Apple Inc. has held exploratory discussions about using Intel Corp. and Samsung Electronics Co. to produce the main processors for its devices in the US, a move that would offer a secondary option beyond longtime partner Taiwan Semiconductor Manufacturing Co. [0] (paywalled)
They wouldn’t need either Intel or Samsung if it wasn’t bleeding edge. I think it’s 14A for Intel. TSMC still has the edge overall, but they are neck and neck in terms of node.
TSMC will be more than fine. They can hardly meet demand as it is.
[0] https://www.bloomberg.com/news/articles/2026-05-05/apple-exp...
There’s a lot of “main” processors for Apple’s devices at this point.
I would be deeply skeptical of a brand new flagship iPhone <n> Pro having an Intel fab’d SoC until at least a few years into this arrangement.
Lip-Bu Tan is a year older than Tim Cook. Doubt he wants to run Intel for very long.
Would be hard for me in the Ternus role to not have that in mind if Intel gets it together.
This is about diversifying their supply chain as they have done all over the place for decades. Displays, for example.
The uncertainty of the political order in the near term could make having fabs on home turf worth it for the security alone.
Learn now, and collaborate as preparation in case certain political and financial criteria are met.
Intel would only need to be on par with TSMC's older 3nm node to fab Apple's entry-level SoCs.
Yes, Intel made the first purchase for High NA EUV machines. That's largely because they were so far behind TSMC, they took a big risk as the first adopter for High NA EUV with their upcoming 14A node to try to catch up.
TSMC thinks it can keep using low NA EUV machines for its N2 and A14 nodes even if it has to increase the number of patterning steps. This also means TSMC will likely keep all the AI chip design wins, since High NA has half the reticle size of low NA, halving the maximum chip size per exposure. This is a major deal for AI chips, because they tend to want to be as big as possible.
None of these things mean Intel bought more total EUV machines than TSMC. A quick internet search says TSMC has about 2x as many fabs in active construction as Intel.
Intel thought they could skip buying into EUV at all and just increase their patterning steps.
That didn't work out as well as they hoped.
And given the kind of performance and battery life we have seen from their latest chips they definitely seem to be back in the game
Panther Lake on 18A is less efficient than Lunar Lake on N3B, released in 2024. https://www.notebookcheck.net/Intel-Panther-Lake-Core-Ultra-...
Intel would need to have lots of (and / or very big) customers lined up or big plans to manufacture possibly more than CPUs of their own design to make use of that capacity.
> can print transistors 1.7 times smaller – and therefore achieve transistor densities 2.9 times higher – than they can with NXE systems.
https://www.asml.com/en/news/stories/2024/5-things-high-na-e...
https://www.datacenterdynamics.com/en/news/intel-acquires-as...
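The two ASML figures quoted above are consistent with each other: area density scales as the square of the linear shrink, and 1.7² ≈ 2.9. A quick check, also using the commonly cited 26 mm × 33 mm low NA exposure field (High NA halves the 33 mm scan direction):

```python
# Quick arithmetic check of the ASML claims quoted above.
linear_shrink = 1.7
density_gain = linear_shrink ** 2      # area density scales as shrink^2
print(f"density gain ~ {density_gain:.1f}x")   # ~2.9x, matching ASML

# High NA also halves the exposure field: the standard low NA
# field is 26 mm x 33 mm; High NA cuts the 33 mm scan to 16.5 mm.
low_na_field_mm2 = 26 * 33       # 858 mm^2 max single-exposure die
high_na_field_mm2 = 26 * 16.5    # 429 mm^2 max single-exposure die
print(f"max single-exposure die: {low_na_field_mm2} -> {high_na_field_mm2} mm^2")
```

That halved field is why reticle-limited AI chips are awkward on High NA without stitching two exposures together.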
Anyways, it doesn't matter if it's high NA or low NA when it comes to capacity. What matters is how many total EUV machines and fabs. As of right now, a Google search says TSMC has 2x the number of fabs in construction as Intel.
Apple hardware standards. Apple software could use some of these.
But it’s likely not a one-dimensional decision. Supply chain diversification, China / Taiwan, Intel having established US fabs, on and on. Seems like a wise decision in every way.
It's not clear if it's going to get better either. It could get worse in terms of supply.
Honestly, I found it hard to understand why they abandoned RAM and solid state memory fab sectors too. With all the national security spending by DoD, DoE, etc., I would have thought there is room for some US-based business to remain, even if some of the mass consumer stuff has been lost to low margin international competitors.
Intel was in RAM and solid state for a bit but didn't stick with it. Optane was a huge flop, and those products use different manufacturing techniques than CPUs. Plus, Micron is one of the biggest DRAM manufacturers and is US based.
Isn’t running a fab only while it makes top-of-the-line chips a bad idea, given that you can still make good money from it in later years?
If so, I think that _if_ they ever want to own a fab (unlikely, IMO), they’ll want to accept outside customers for it once it has stopped being best-in-the-world.
It is probably a second source deal for a popular chip or a support chip in an older process node like a power converter.
2020: Apple Silicon
2030: Intel Apple Silicon
Is this maybe a way to expand the affordable neo line?
The government (both current and previous administrations) is doing everything it can to make sure they do keep up, at the very least. And with enough money being thrown at it, they probably will.
Nobody benefits if just one company controls the state of the art in chip manufacturing, and Intel is one of maybe two other companies positioned to have a chance at competing effectively with TSMC.
What IMO is a bad strategy is the aversion to nationalization that exists in the USA. They buy billions worth of shares in key companies to inject capital during times of crisis, to later divest and refuse to be a player in industry.
China's model is much more complex. There's state-owned companies, companies where the state is a major stake-holder, and private companies too. It seems to afford them more tools to push and steer industries as they see important.
The USA is no stranger to this at smaller scales; airports are state run (at the municipal or state level). This rids them of the burden of profit and allows them to be strategically used for the broader benefit when it makes sense.
Some are profitable; state-run doesn't necessarily mean unprofitable. But some can be written off as infrastructure investments that don't make money but make other industries in the region competitive. At some point this makes sense if you want to keep pushing forward: stop worrying too much about making money on X, because if X is a widely available commodity, you can instead make money on Y and Z.
I see it in Mexico too. Mexico's private healthcare is affordable and good because it has huge state-run healthcare system to compete with. State-provided healthcare isn't the best or fastest healthcare you can get, but it is free. This certainly puts competitive pressure on private healthcare companies, and in a way gives the Mexican government the best regulatory tool: the market itself. The Mexican government isn't trying to destroy private health, but via the state health enterprise it gains tools to steer and push the health industry in ways it may deem important.
Looking at the state of EVs and the car industry, I think it's clear whatever the Chinese government did to incentivize EV innovation was more effective than the federal incentives the USA government provided. At one point the USA government had a 60% stake in General Motors [1]; meaning it was nationalized, before being privatized again by 2013.
I just wonder what the USA could've done with that machinery; could they have offered a cheap EV, even if it's low quality, to push adoption, competitive pressure and get supply chains going? Could they have further commoditized certain parts to lower costs? Could they have strategically opened factories in certain locations to lower the risk and investment cost of future companies, and this way get the ball rolling on creating new auto-industry regions? We will never know, but we do know the USA's auto industry is now on the defense playing catch-up to China, and there seems to be little the USA government can do except placing tariffs and offering subsidies.
[1]: https://www.cnbc.com/2013/12/09/government-sells-the-last-of...
Also, the Neo line uses cutting-edge technology that is necessary for the iPhone SoC, so this is probably for other chips.
But the big reason x64 couldn't keep up was that Intel's fab capabilities were horrible. Intel got stuck and couldn't get smaller nodes out and competing fabs caught up and left Intel in the dust.
Apple was able to ship 22nm Intel processors in Summer 2012 while their iPhone processors were 32nm that Fall and 28nm in Fall 2013. Spring 2015, Apple shipped 14nm Intel laptops and later that Fall 14/16nm iPhones. Competitors had caught up and soon TSMC started surpassing Intel.
Yes, Intel's fab capabilities have improved lately, but Intel's fab failures were causing x64 to fall behind. If Intel had retained fab supremacy, x64 wouldn't have fallen behind. I think Apple still likes the idea of being able to build exactly the parts they want (so they can optimize for power, thermals, etc), but Intel fell behind because their fabs stopped being competitive.
> But the big reason x64 couldn't keep up was that Intel's fab capabilities were horrible. Intel got stuck and couldn't get smaller nodes out, and competing fabs caught up and left Intel in the dust.
It also was that Intel couldn’t execute reliably on their own roadmap, forcing Apple at the time to do extra engineering to incorporate Intel's chips. Apple sells a lot of laptops; Intel never got their act together regarding mobile processors for MacBooks and MacBook Pros.
The 8-core Mac Pro used Intel Xeon 5500 series; at idle, it used 309 W; it used 9 fans for cooling [1]. It sounded like a jet engine when it was running. And while it was an elegant design for the time, they shouldn’t have needed to jump through these hoops.
Intel kept putting out delusional roadmaps that would assume their 10nm fab process was going to be ready for mass production in just another quarter or two. They spent years refusing to plan for 10nm to not be ready, so all their new architectures were unshippable and they had to resort to just using copy and paste on their 2015 CPU cores. Their fab fuck-up was hardly the only mistake they made in that era, but it was the biggest underlying cause of their problems.
Intel/AMD chips are designed with one thermal target for acceptable computing and a second, much higher target if you want to compute at the highest throughput continuously.
Apple did not provide the highest thermal capacity and suffered when similar CPUs were compared against another OEM's machines. With Apple Silicon, the CPU is designed around the thermal solution Apple is willing to provide. A lower power target leads to a lower clock-speed target, which leads to different design tradeoffs than at Intel/AMD, where flagship designs must clock to the moon. You can see similar benefits for the lower targets in AMD's ZenC cores.
But ZenC wasn't available, and Apple probably wouldn't want to be running laptops with only ZenC when you could get a regular Zen laptop from someone else. Apple benefits from avoiding apples to apples comparisons.
Likely Apple won't lean too heavily on Intel fab to start with. Let them do processors for value products and see where it goes, but always plan for fab agility. At least until Intel fab becomes a reliable partner.
It’s a good way to keep pumping the share price too.
> The Journal report said the U.S. government, which became Intel's largest shareholder last year under a deal with its CEO Lip-Bu Tan, played a major role in bringing Apple to the negotiating table.
... smells what it smells.