Inside Google’s Ironwood TPU v7 Supply Chain
Who really benefits when Google scales TPU v7?
Google’s Ironwood (TPU v7) is one of the least transparent but most important AI accelerators in production. The full bill of materials is not public, but by stitching together Google’s disclosures, OCP (Open Compute Project) standards work, and industry supply-chain research on CoWoS, HBM, optics, and OSATs, a reasonably clear map emerges of who actually gets paid when Google ships another Ironwood pod.
This article turns that long vendor list into a narrative: how the stack fits together, and where the highest operating leverage to Ironwood volume really sits.

What We Know (and What’s Inferred)
The picture is built from three pillars:
1. Direct public disclosures from Google, plus coverage by TrendForce, Tom’s Hardware, TechRadar, and others.
2. OCP ORv3 rack, power, and cooling work, where Google is a major contributor and Ironwood clearly aligns: ±400 VDC rack power, >200 kW/rack, blind-mate liquid cooling, and ORv3-HPR patterns.
3. Industry and sell-side analysis of TPU CoWoS, HBM allocations, optics, and advanced packaging (TSMC, ASE, Amkor, ABF vendors, etc.).
From that, you get two categories:
CONFIRMED/PUBLIC – explicitly disclosed or strongly documented.
LIKELY/ECOSYSTEM – strong inferences based on standards, capacity allocation, and “who else could realistically be doing this at scale.”
1. Silicon: TPU, CPU, and System Logic
At the heart of Ironwood are custom Google chips:
Alphabet / Google ($GOOGL) – Designs the TPU v7 (Ironwood) and Axion CPUs, owns the overall architecture, inter-chip interconnect (ICI) protocol and “AI Hypercomputer” system integration.
Broadcom ($AVGO) – Confirmed co-developer of all TPU generations.
Broadcom turns Google’s architecture into manufacturable silicon, contributes SerDes and ASIC implementation, and is widely reported as Google’s partner again on Ironwood.
MediaTek (2454 TT) – Publicly announced as a “next-gen TPU” design partner for Google, taking on part of the custom AI ASIC business from Broadcom. Whether that starts with v7 or v8 is debated, but it’s clearly in the TPU roadmap.
Arm ($ARM) – Google’s Axion CPU is built on Arm Neoverse V2, providing the CPU IP that sits in the same racks as Ironwood for control, host, and some inference workloads.
This is the highest value-add tier: architecture, IP and the TPU/CPU dies themselves.
2. Foundry, Advanced Packaging, and Substrates
Once the chips are designed, they need to be built and packaged into massive multi-chip modules:
TSMC ($TSM) – The de facto foundry for prior TPUs and Broadcom’s custom AI ASICs, and owner of the dominant CoWoS-L/S platform. Industry estimates point to tens of thousands of CoWoS wafers allocated to Google TPUs, making TSMC the most likely foundry and advanced-packaging provider for Ironwood.
ASE / SPIL ($ASX) – Major OSAT partner to TSMC and a key CoWoS-S player. ASE (and subsidiary SPIL) are cited as overflow packagers for Google TPU wafers.
Amkor ($AMKR) – One of the “big three” OSATs in advanced packaging (CoWoS-class, FOCOS, etc.) and a natural second-source if Google/Broadcom diversify packaging.
Underneath all of this is a constrained pool of ABF substrate vendors that effectively gate AI accelerator production across the industry:
Ibiden, Unimicron, Kinsus, Nan Ya PCB, Shinko – Core ABF and organic substrate suppliers into TSMC and the major OSATs. The same ABF stacks that feed H100/MI300-class GPUs also underpin Ironwood’s MCM packages.
For investors, this tier is where each incremental TPU wafer directly translates into utilization and pricing power for TSMC, ASE, and the ABF ecosystem.
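As a back-of-envelope illustration of why CoWoS wafer allocations gate output, here is a rough dies-per-wafer sketch. The ~700 mm² die size, the 10k-wafer allocation, and the 60% yield are hypothetical placeholders, not disclosed Ironwood figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic gross dies-per-wafer approximation (no yield applied)."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Hypothetical reticle-class compute die (~700 mm^2) -- illustrative only.
gross = dies_per_wafer(700.0)                      # -> 75 gross dies/wafer

# Hypothetical allocation: 10k CoWoS wafers at 60% packaged-part yield.
wafers = 10_000
parts = gross * wafers * 0.60                      # -> 450,000 packaged parts
print(gross, int(parts))
```

The point is not the exact numbers but the structure: every incremental wafer TSMC allocates converts almost linearly into shippable TPUs, which is why CoWoS capacity is the chokepoint.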
3. HBM and System Memory
Ironwood’s headline numbers – 192 GB HBM3E at ~7.4 TB/s per chip – tightly constrain who can realistically supply its HBM:
SK hynix (000660 KS) – The most probable HBM3E supplier for TPU v7: its 24 GB stacks and per-stack bandwidth line up with Ironwood’s 192 GB / ~7.4 TB/s spec.
Samsung (005930 KS) – Pushing hard into HBM3E. Reports indicate Google evaluated Samsung HBM3E and even considered supplier changes due to qualification issues. Whether or not Samsung is in the first production waves, it is clearly in the strategic HBM conversation.
Micron ($MU) – HBM3E supplier to other AI accelerators. Public specs don’t line up cleanly with Ironwood’s bandwidth/capacity profile, suggesting Micron is not the primary HBM source here, but remains important optionality and competitive pressure.
The same trio will also feed DDR5 DRAM and NAND into Axion hosts, storage, and controllers around Ironwood pods.
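The stack count behind those headline numbers falls out of simple arithmetic. Assuming 24 GB HBM3E stacks (the one assumption here; capacity and bandwidth are the disclosed per-chip figures):

```python
total_capacity_gb = 192     # disclosed per-chip HBM capacity
total_bw_tbps = 7.4         # disclosed per-chip bandwidth, TB/s
stack_capacity_gb = 24      # assumed 24 GB HBM3E stacks

stacks = total_capacity_gb // stack_capacity_gb       # -> 8 stacks per chip
bw_per_stack = total_bw_tbps / stacks                 # -> 0.925 TB/s per stack
pin_rate_gbps = bw_per_stack * 1e12 * 8 / 1024 / 1e9  # 1024-bit HBM interface
print(stacks, bw_per_stack, round(pin_rate_gbps, 1))  # ~7.2 Gb/s per pin
```

An eight-stack, ~7.2 Gb/s-per-pin profile sits comfortably inside HBM3E capability, which is why the spec sheet alone narrows the plausible supplier list so sharply.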
4. Boards, VRMs, and Board-Level Power
The TPUs and CPUs need to live on high-layer motherboards with brutal power delivery requirements:
Celestica ($CLS) – Confirmed Google partner for TPU server assembly at both board and rack level, with significant capacity in Mexico and the US. This covers PCB assembly, final system integration and likely some in-rack harnessing.
Infineon ($IFX) – A leading vendor of power MOSFETs and controllers for high-current VRMs; widely designed into GPU/CPU boards and very likely present under Ironwood’s beefy heat-sunk VRM modules.
Texas Instruments ($TXN), Monolithic Power Systems ($MPWR), Vicor ($VICR), and Delta (2308 TT) – The usual suspects for point-of-load conversion and multiphase VRMs, taking 48–55 V busbars down to sub-1 V core rails.
This is a classic content-per-board story: every additional TPU node means more Celestica assembly, more VRM modules, and more high-current silicon on the board.
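To see why board-level power delivery is “brutal,” consider the currents involved. The per-chip power, core voltage, and phase count below are hypothetical but representative numbers, not disclosed Ironwood specs:

```python
# Hypothetical per-accelerator numbers -- Google does not disclose Ironwood TDP.
chip_power_w = 1000.0   # assumed ~1 kW-class accelerator
core_voltage = 0.8      # typical sub-1 V core rail
bus_voltage = 48.0      # 48 V board input, per the busbar range above

core_current = chip_power_w / core_voltage   # -> 1250 A at the core rail
bus_current = chip_power_w / bus_voltage     # -> ~20.8 A at the 48 V input

phases = 20                                  # assumed multiphase VRM design
current_per_phase = core_current / phases    # -> 62.5 A per phase
print(core_current, round(bus_current, 1), current_per_phase)
```

Moving kiloamp-scale current a few millimeters is exactly the problem Infineon, MPS, Vicor, and friends are paid to solve, and it is why VRM content scales with every node added.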
5. Racks, Power Delivery, and ORv3
Ironwood doesn’t live in isolation; it’s built into a new generation of high-power racks aligned with OCP ORv3 and a Google-backed ±400 VDC architecture that can push up to ~1 MW per rack.
Rack enclosure & structure
Rittal – Likely rack OEM, offering “DLC Ready Rack – ORV3” platforms with integrated cooling, power and monitoring that line up closely with Google’s ML/AI rack design.
Vertiv ($VRT) and Schneider – Provide high-density racks, PDUs, busways and rear-door heat exchangers widely used by hyperscalers.
Power shelves and busbars
Google / OCP ORv3 consortium – Defines the ±400 VDC reference architecture for AI and ML racks that Ironwood appears to follow.
TE Connectivity ($TEL) – Leads ORv3 high-power vertical busbar work and supplies high-current busbars and connectors.
Delta (2308 TT) and Advanced Energy ($AEIS) – Ship ORv3 power shelves and PSUs explicitly targeted at AI racks in the 21” ORv3 form factor.
Eaton ($ETN) and ABB – Facility-level UPS, switchgear and distribution to feed those shelves from 400–480 VAC.
This tier has clear rack-level leverage: power shelves, busbars and racks scale roughly linearly with the number of Ironwood pods Google deploys.
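The motivation for ±400 VDC is straightforward Ohm’s-law arithmetic: raising the bus voltage keeps busbar currents manageable at ~1 MW per rack. A quick sketch (the 48 V comparison is illustrative):

```python
rack_power_w = 1_000_000   # ~1 MW/rack ceiling per the +/-400 VDC work
vdc_bus = 800.0            # +/-400 VDC gives 800 V across the pair
legacy_bus = 48.0          # ORv3-era 48 V busbar, for comparison

i_new = rack_power_w / vdc_bus     # -> 1250 A: heavy but feasible copper
i_old = rack_power_w / legacy_bus  # -> ~20,833 A: impractical at rack scale
print(i_new, round(i_old))
```

That ~17x current reduction is the entire case for the new architecture, and it is why busbar and connector vendors like TE Connectivity sit so close to the standard.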
6. Liquid Cooling
Ironwood uses Google’s third-generation liquid-cooling infrastructure: board-level cold plates, rack-level manifolds, and blind-mate quick connects that map directly to OCP’s ORv3 liquid-cooling standards.
Key players across couplings, cold plates and CDUs:
Safeway Custom Fluid Transfer, Parker ($PH), Danfoss, CEJN – The core Blind Mate Quick Connector (BMQC) vendors listed in OCP documentation, supplying blind-mate couplings and valves at the rear of the rack.
CoolIT, Motivair – Provide cold plates and coolant distribution networks for high-TDP servers, heavily involved in OCP cold-plate requirements and AI rack designs.
Vertiv, Rittal – Also show up again on the cooling side with CDUs and rear-door heat exchangers.
Nalco (Ecolab, $ECL), Veolia – Water treatment and corrosion-control partners for closed-loop data center liquid-cooling systems.
Every incremental rack of Ironwood represents more coolant flow, more connectors, and more CDUs, which feeds directly into these ecosystems.
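The coolant-flow claim follows from basic thermodynamics. Assuming water-based coolant and a hypothetical 10 °C loop temperature rise against the >200 kW/rack figure cited earlier:

```python
rack_heat_w = 200_000   # >200 kW/rack, per the ORv3-HPR numbers above
delta_t_c = 10.0        # assumed supply/return temperature rise
cp_water = 4186.0       # J/(kg*K), specific heat of water
rho_water = 997.0       # kg/m^3, density of water

mass_flow = rack_heat_w / (cp_water * delta_t_c)   # kg/s of coolant
vol_flow_lpm = mass_flow / rho_water * 1000 * 60   # litres per minute
print(round(mass_flow, 2), round(vol_flow_lpm, 1)) # ~4.78 kg/s, ~287 L/min
```

Nearly 300 litres per minute through a single rack is why quick-connect reliability, CDU capacity, and water chemistry are line items rather than afterthoughts.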
7. Interconnect: ICI, Optical Circuit Switching and Ethernet
At the system level, Ironwood relies on:
~9.6 Tb/s per-chip electrical ICI (likely Broadcom IP) inside the rack.
Optical Circuit Switching (OCS) between racks in Google’s Apollo/Jupiter fabric.
A surrounding 51.2T+ Ethernet fabric as part of the AI Hypercomputer.
Key vendors:
Broadcom ($AVGO) – Beyond the TPU die, Broadcom provides high-radix Ethernet switch ASICs (Tomahawk/Trident) and SerDes used heavily in Google’s Jupiter/Hypercomputer topologies.
Lumentum ($LITE) – Supplies lasers and optical transceiver engines and is developing the R300 OCS for AI data centers; strongly associated with Google-like OCS deployments.
iPronics and Polatis (a HUBER+SUHNER brand) – Co-leaders of OCP’s OCS sub-project; provide programmable photonic switches and optical circuit switches targeting AI fabrics.
Coherent ($COHR), Arista ($ANET), Cisco ($CSCO), Marvell ($MRVL), Alphawave, Montage, Kandou – The broader optical and Ethernet ecosystem: 400G/800G/1.6T optics, Ethernet switches, DSPs and retimers that make “fully optical” TPU clusters viable.
This is where the optical scale-out story lives: as Google grows Ironwood clusters horizontally, OCS and optics vendors monetize the links between pods and racks.
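The bandwidth hierarchy can be made concrete with a quick unit conversion. The per-chip figures come from the article; the 9,216-chip pod size matches Google’s stated maximum Ironwood configuration, though treat the aggregate number as illustrative:

```python
ici_tbps = 9.6     # per-chip ICI, terabits/s (from the article)
hbm_tBps = 7.4     # per-chip HBM bandwidth, terabytes/s

ici_tBps = ici_tbps / 8        # -> 1.2 TB/s of chip-to-chip bandwidth
ratio = hbm_tBps / ici_tBps    # local HBM is ~6x faster than the fabric

chips_per_pod = 9216           # Google's stated max Ironwood pod size
aggregate_pbps = ici_tbps * chips_per_pod / 1000   # petabits/s of ICI
print(ici_tBps, round(ratio, 1), round(aggregate_pbps, 1))
```

Tens of petabits per second of aggregate fabric per pod is the scale at which copper gives out and OCS plus 800G/1.6T optics become structural, not optional.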
8. Connectors, Cables, and Backplanes
Moving tens of kW and terabits per second per chassis requires specialized high-speed connectors and cabled backplanes:
TE Connectivity ($TEL), Amphenol ($APH), Molex, Samtec, HUBER+SUHNER ($HUBN), 3M ($MMM).
These players dominate ORv3 and 224G signal-integrity work and supply the blind-mate power contacts, twin-ax cables, high-speed connectors and cabled backplanes that tie Ironwood boards into the rack spine.
9. System Integration, BMC, and Firmware
Finally, the management and firmware layer:
Celestica ($CLS) – Beyond PCB assembly, acts as a rack-level integrator for TPU pods: cabling, manifolds, power shelves, and mechanical integration.
AMI (American Megatrends) – BMC and rack-management stack co-developed with hyperscalers.
Broadcom ($AVGO) and Intel ($INTC) – BMC SoCs, OpenBMC contributions, telemetry and security standards (e.g., SPDM) used across ORv3 racks and mixed CPU/TPU environments.
This layer doesn’t get the same headline multiples as HBM or CoWoS, but it is critical glue for operations at fleet scale.
How to Use This Map for Diligence
From an investor standpoint, the Ironwood map breaks into three leverage buckets:
Highest direct leverage to TPU v7 volume
Broadcom ($AVGO) – TPU die co-developer + switch ASICs + management silicon.
TSMC ($TSM) – Foundry + CoWoS bottleneck.
SK hynix (000660 KS) – Most likely HBM3E supplier.
ASE / SPIL ($ASX) and Amkor ($AMKR) – Advanced packaging OSATs.
Celestica ($CLS) – Board and rack assembly.
ABF substrate vendors – Ibiden, Unimicron, Kinsus, Nan Ya, Shinko.
Rack-level leverage (per-rack content)
TE Connectivity ($TEL) – Busbars, high-current connectors.
Delta (2308 TT), Advanced Energy ($AEIS) – ORv3 shelves and PSUs.
Rittal, Vertiv ($VRT), Schneider – Racks and integrated cooling.
Safeway, Parker ($PH), CEJN, Danfoss, CoolIT, Motivair – Liquid-cooling content per rack.
Optics and OCS leverage (scale-out content)
Lumentum ($LITE), Coherent ($COHR), HUBER+SUHNER ($HUBN), plus OCS ecosystem players such as iPronics and Polatis – all monetizing the shift toward optical fabrics inside and between Ironwood racks.
The headline story is simple: Ironwood isn’t just a Google and Broadcom narrative. It is a distributed profit pool spanning TSMC’s CoWoS capacity, SK hynix’s HBM ramps, ABF substrates, Celestica’s integration lines, and a long tail of power, cooling, optical and connector vendors – all riding the same TPU v7 deployment curve.
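One caveat on “operating leverage”: leverage to Ironwood volume depends as much on a supplier’s revenue base as on its content per pod. A toy model with entirely hypothetical numbers shows how the same incremental TPU revenue moves a small, underutilized supplier far more than a diversified giant:

```python
# Toy operating-leverage model -- all figures hypothetical, in $M.
def incremental_impact(base_rev_m, base_margin, incr_rev_m, incr_margin):
    """Return (revenue growth %, new blended operating margin %)."""
    new_rev = base_rev_m + incr_rev_m
    new_profit = base_rev_m * base_margin + incr_rev_m * incr_margin
    return incr_rev_m / base_rev_m * 100, new_profit / new_rev * 100

# Large diversified supplier: $50B base, healthy incremental margin.
big = incremental_impact(50_000, 0.30, 500, 0.50)     # ~1% growth, ~30.2% margin
# Small supplier with spare capacity: $2B base, 70% incremental margin.
small = incremental_impact(2_000, 0.10, 500, 0.70)    # 25% growth, 22% margin
print([round(x, 1) for x in big], [round(x, 1) for x in small])
```

The same $500M of TPU-driven revenue barely registers for the giant but more than doubles the small supplier’s operating margin, which is why the narrower names in the list above can carry the most torque per incremental pod.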


