AI training clusters are growing quickly, but the scaling problem is no longer just about processors. When large numbers of accelerators begin exchanging model weights and intermediate results, the links between systems start carrying enormous volumes of data. Electrical interconnects can only stretch so far before signal integrity, latency, and power consumption become difficult to manage. At that point the architecture begins shifting toward optics, and suddenly the interconnect becomes as important as the compute hardware itself.
AI Clusters Are Beginning To Depend On Optical Interconnects
STMicroelectronics has now entered high-volume production of its PIC100 silicon photonics platform, targeting optical interconnects used in hyperscale data centers and AI infrastructure. The platform supports optical transceivers operating at 800G and 1.6T speeds, bandwidth levels that are becoming increasingly common in large AI training clusters where large numbers of parallel processors exchange data continuously.
At those data rates, copper interconnects struggle to maintain signal integrity and energy efficiency across the distances typical inside modern racks and switching systems. Optical signaling offers a path around those limitations, but integrating optical components into conventional electronic hardware has historically been complex and expensive. Silicon photonics attempts to solve that by fabricating optical components directly onto silicon wafers using semiconductor manufacturing techniques.
Waveguide Loss Starts To Matter At AI Data Rates
When optical signaling moves onto silicon, the physical behavior of the photonic structures becomes central to system performance. Optical signals travel through microscopic waveguides etched into the silicon, and each section of routing introduces a small amount of loss. The PIC100 platform integrates both silicon and silicon nitride waveguides with propagation losses reported as low as around 0.4 dB/cm and 0.5 dB/cm. Those numbers may appear small, but optical circuits can include many centimeters of routing as signals move between modulators, detectors, and coupling structures. Even small reductions in waveguide loss translate into measurable energy savings across high-bandwidth links.
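The arithmetic behind that claim is straightforward: waveguide loss scales linearly (in dB) with routing length, so a 0.1 dB/cm difference compounds over every centimeter of on-chip routing. The sketch below illustrates the loss-budget calculation; the per-centimeter figures come from the article, while the routing length, coupling loss, and launch power are illustrative assumptions, not PIC100 specifications.

```python
# Illustrative optical loss-budget estimate for an on-chip waveguide run.
# Per-cm loss values are from the article; lengths and powers are assumed.

def link_loss_db(length_cm: float, loss_db_per_cm: float,
                 coupling_loss_db: float = 0.0) -> float:
    """Total link loss in dB: propagation loss plus fixed coupling losses."""
    return length_cm * loss_db_per_cm + coupling_loss_db

def received_power_dbm(launch_dbm: float, total_loss_db: float) -> float:
    """Optical power at the detector after subtracting the loss budget."""
    return launch_dbm - total_loss_db

# Assume 5 cm of routing between modulator, detector, and couplers.
loss_a = link_loss_db(5.0, 0.5)  # 2.5 dB at 0.5 dB/cm
loss_b = link_loss_db(5.0, 0.4)  # 2.0 dB at 0.4 dB/cm

# A 0.1 dB/cm improvement saves 0.5 dB over this path, power the
# laser or receiver no longer has to make up.
print(loss_a - loss_b)           # prints 0.5
print(received_power_dbm(0.0, loss_a))  # 0 dBm launch -> -2.5 dBm received
```

Because dB values add, the same helper also covers longer paths or extra coupling structures by summing their contributions.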
The platform brings the modulators, photodiodes, and the optical coupling structures onto the same photonic die. That makes it possible to connect fibers or external light sources directly into the circuit instead of relying on a collection of separate optical components.
Manufacturing Scale Changes The Economics Of Photonics
One of the most important aspects of the PIC100 platform is not just its optical performance but the scale at which it is being manufactured. STMicroelectronics is producing the platform on 300 mm wafer lines, using infrastructure similar to advanced CMOS manufacturing. Moving silicon photonics onto large wafer production lines improves yield learning and manufacturing efficiency, two areas that have historically limited the widespread adoption of photonic integrated circuits. The company has also indicated plans to expand capacity significantly, with production expected to increase several times over the coming years as demand for AI infrastructure continues to rise.
Hyperscale operators require extremely large volumes of optical interconnect hardware, which means manufacturing scale quickly becomes just as important as device performance.
Through-Silicon Vias Prepare The Platform For Denser Optical Systems
Alongside the current production platform, STMicroelectronics is developing a follow-on roadmap technology known as PIC100 TSV. In that version, through-silicon vias are introduced into the photonic stack. Routing signals vertically through the silicon makes denser optical layouts possible and gives heat a path to escape inside tightly packed modules.
The addition of TSV structures enables tighter integration between electronic and photonic circuits, which becomes particularly important as optical links move closer to processors and switching silicon. Many data center architectures are already exploring near-packaged optics and co-packaged optics approaches, where optical interconnects are positioned much closer to the compute devices.
As AI clusters continue to grow in scale, the boundary between compute hardware and optical networking is becoming increasingly narrow.
Learn more and read the original announcement at www.st.com