May 22, 2025 by Aharon Etengoff


Many data centers are packed with racks of high-performance graphics processing units (GPUs) and tensor processing units (TPUs). These accelerators process massive artificial intelligence (AI) and machine learning (ML) datasets, executing complex operations in parallel and exchanging data at high speed. This article explores the interconnects and connectors that link AI accelerator clusters together.

Scaling AI compute with accelerators and clustered architectures

AI accelerators such as GPUs, TPUs, and, in some cases, field-programmable gate arrays (FPGAs) run large language models (LLMs) using parallel processing to handle complex computations at scale. These devices divide complex workloads into smaller tasks and execute billions of operations simultaneously. Most AI models are built on neural networks, which benefit from this massively parallel architecture to accelerate both training and inference.

As shown in Figure 1, AI accelerators are typically deployed in tightly coupled clusters to efficiently share data, synchronize computations, and scale training across thousands of processing units.

Figure 1. A Google data center houses racks of tightly coupled AI accelerators used for large-scale machine learning workloads. Shown here is an illustration of the TPU v4 infrastructure. (Image: Google)

This configuration helps meet the low-latency, high-performance demands of AI workloads. It also improves throughput, minimizes bottlenecks, and enables real-time inference for complex, compute-intensive tasks.

High-level interconnect architectures and protocols

Data centers use specialized interconnect technologies to link AI accelerators into clusters that operate efficiently at scale, enabling high-speed communication within and across nodes. These interconnects support massive data exchange, synchronized processing, and the parallel execution of complex workloads. Common AI accelerator interconnects include:

NVLink — NVIDIA’s proprietary, high-bandwidth interconnect facilitates direct GPU-to-GPU communication with low latency and high energy efficiency. It supports rapid synchronization and data sharing across accelerators using dedicated connectors and NVSwitch technology. NVLink scales efficiently in multi-GPU environments by enabling memory pooling, allowing GPUs to share a unified address space and operate as a single, high-performance compute unit. As shown in Figure 2, NVLink 4.0 delivers up to 900 GB/s of bidirectional bandwidth on the H100 GPU.

Figure 2. Nvidia’s H100 GPU uses NVLink 4.0 to enable up to 900 GB/s of bidirectional bandwidth for high-speed GPU-to-GPU communication in multi-accelerator clusters. (Image: Nvidia)
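The 900 GB/s figure follows from simple link and lane arithmetic. The short Python sketch below is a minimal sanity check, assuming the commonly cited NVLink 4.0 layout of 18 links per H100 GPU with two 100 Gb/s (PAM4) lanes per direction per link; those per-lane details are assumptions drawn from public descriptions of NVLink 4.0, not from this article.

```python
# Back-of-envelope check of the 900 GB/s NVLink figure quoted above.
# Lane count and per-lane rate are commonly cited NVLink 4.0 values (assumed).

LINKS = 18                 # NVLink 4.0 links per H100 GPU
LANES_PER_DIRECTION = 2    # lanes per link, per direction
GBPS_PER_LANE = 100        # ~100 Gb/s signaling per lane (PAM4)

per_link_one_way = LANES_PER_DIRECTION * GBPS_PER_LANE / 8   # GB/s per direction
per_link_bidir = 2 * per_link_one_way                        # GB/s per link
total_bidir = LINKS * per_link_bidir                          # GB/s per GPU

print(f"{per_link_bidir:.0f} GB/s per link, {total_bidir:.0f} GB/s total")
# -> 50 GB/s per link, 900 GB/s total
```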

UALink — the Ultra Accelerator Link is an open interconnect standard designed to scale clusters of up to 1,024 AI accelerators within a single computing pod. The 1.0 specification supports 200G per lane and enables dense, memory-semantic connections with Ethernet-class bandwidth and PCIe-level latency. UALink supports read, write, and atomic transactions across nodes and defines a common protocol stack for scalable multi-node systems. It is positioned as an open, high-performance option for scale-up connectivity within accelerator pods, targeting lower latency than typical Ethernet.
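To get a rough sense of what 200G per lane means at the port level, the sketch below computes per-port bandwidth for a few lane widths. The x1/x2/x4 port widths are assumptions used only for illustration, and protocol overhead is ignored.

```python
# Rough per-port bandwidth at the 200G-per-lane rate cited above.
# Port widths (x1/x2/x4) are illustrative assumptions, not spec quotes.

GBPS_PER_LANE = 200  # signaling rate per lane, per direction

for lanes in (1, 2, 4):
    one_way_gbs = lanes * GBPS_PER_LANE / 8      # GB/s per direction
    print(f"x{lanes}: {one_way_gbs:.0f} GB/s per direction, "
          f"{2 * one_way_gbs:.0f} GB/s bidirectional")
# x4 -> 100 GB/s per direction, 200 GB/s bidirectional (before overhead)
```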

Compute Express Link (CXL) enables coherent, low-latency communication between CPUs, GPUs, and other accelerators. It improves resource utilization across heterogeneous systems by supporting cache coherency, memory pooling, resource sharing, and memory disaggregation. CXL 1.1 and 2.0 operate over PCI Express (PCIe) 5.0, while CXL 3.0 and later leverage PCIe 6.0 or beyond, enabling per-lane transfer rates of up to 64 GT/s and roughly 128 GB/s of bandwidth in each direction on a x16 link.
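The x16 figure follows directly from the PCIe 6.0 signaling rate, as the sketch below shows. It treats FLIT-mode encoding as essentially lossless and ignores FEC and protocol overhead, so the result is an upper bound rather than delivered throughput.

```python
# Raw x16 link bandwidth at the PCIe 6.0 rate that CXL 3.0 runs over.
# FLIT-mode encoding treated as ~lossless; FEC/CRC overhead ignored.

GT_PER_S_PER_LANE = 64   # PCIe 6.0 transfer rate per lane
LANES = 16

one_way_gbs = GT_PER_S_PER_LANE * LANES / 8   # GB/s per direction
print(f"x16 @ 64 GT/s: ~{one_way_gbs:.0f} GB/s per direction, "
      f"~{2 * one_way_gbs:.0f} GB/s aggregate")
# -> ~128 GB/s per direction, ~256 GB/s aggregate
```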

High-speed Ethernet facilitates data movement between accelerator clusters distributed across servers and nodes. Technologies such as 400 GbE and 800 GbE enable high-throughput communication using network interface cards (NICs) and optical or copper cabling. While Ethernet introduces higher latency than NVLink or UALink, it offers broad interoperability and flexible deployment at the rack and data center levels.

Optical interconnects and form factors — optical links transmit data at high speeds over extended distances, linking accelerator clusters across racks and nodes. Compared to copper-based connections, they consume less power and overcome signal integrity challenges such as attenuation and electromagnetic interference (EMI). These interconnects often rely on standardized form factors, such as Quad Small Form-factor Pluggable (QSFP), Quad Small Form-factor Pluggable Double Density (QSFP-DD), and Octal Small Form-factor Pluggable (OSFP), which serve as the physical interface for both electrical and optical Ethernet connections. The same form factors are also widely used for other high-speed optical interconnects in data centers, such as InfiniBand and proprietary optical links, further extending their role in scalable compute infrastructure.
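For a sense of how the 400 GbE and 800 GbE rates mentioned above map onto these pluggable form factors, the sketch below lists a few typical electrical lane configurations. The lane counts and per-lane rates shown are common options, not the only ones the standards define.

```python
# Typical electrical lane layouts behind high-speed Ethernet pluggables.
# These are common configurations (assumed for illustration), not an
# exhaustive list of what the standards allow.

configs = {
    "400GbE, 8 x 50G PAM4 lanes":  (8, 50),
    "400GbE, 4 x 100G PAM4 lanes": (4, 100),
    "800GbE, 8 x 100G PAM4 lanes": (8, 100),
}

for name, (lanes, gbps_per_lane) in configs.items():
    print(f"{name}: {lanes * gbps_per_lane} Gb/s aggregate")
```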

Physical connectors and interfaces for AI accelerators

High-performance interconnects rely on various physical-layer components, including connectors, slots, and cabling interfaces. These components help maintain signal integrity, mechanical compatibility, and scalable system design. They transmit electrical and optical signals across boards, devices, and systems, facilitating the reliable operation of clustered AI infrastructure.

Although interconnects define the communication protocols and signaling standards, they rely on these physical interfaces to function effectively at scale. Common connector and interface technologies are described below.

The PCIe interface connects accelerator cards to host systems and other components. Although newer generations, such as PCIe 5.0 and 6.0, offer scalable bandwidth, the host interface can still become a bottleneck in tightly coupled multi-accelerator environments. Retimers are often used to maintain signal integrity over longer board traces.
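A quick comparison of x16 throughput by PCIe generation against the NVLink figure cited earlier illustrates why the host interface can become the bottleneck. Encoding and protocol overhead are ignored, so these are rough upper bounds.

```python
# Why a host PCIe slot can bottleneck tightly coupled accelerators:
# approximate x16 per-direction throughput by generation vs. the 900 GB/s
# NVLink figure cited earlier. Encoding/protocol overhead is ignored.

PCIE_GT_PER_LANE = {"PCIe 4.0": 16, "PCIe 5.0": 32, "PCIe 6.0": 64}
NVLINK_TOTAL_GBS = 900  # bidirectional, H100

for gen, gt_per_lane in PCIE_GT_PER_LANE.items():
    x16_one_way = gt_per_lane * 16 / 8   # GB/s per direction
    print(f"{gen} x16: ~{x16_one_way:.0f} GB/s per direction")

print(f"NVLink 4.0: {NVLINK_TOTAL_GBS} GB/s bidirectional "
      f"(~{NVLINK_TOTAL_GBS // 2} GB/s per direction)")
```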

Mezzanine connectors are used in the Open Compute Project’s Open Accelerator Infrastructure (OAI). They support high-density module-to-module connections, reduce signal loss, manage impedance, and simplify mechanical integration in modular accelerator designs.

Active electrical cables (AECs) integrate digital signal processors (DSPs) within copper cabling to re-time and equalize signals over longer distances. This enables electrical links to maintain data integrity beyond the reach of passive cables.

High-speed board-to-board connectors enable direct module communication at data rates up to 224 Gbps using PAM4 modulation. They support dense, low-latency communication within AI platforms and tightly integrated accelerator clusters.
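The 224 Gb/s per-lane rate is reached by signaling with four amplitude levels (PAM4), which carry two bits per symbol. The short sketch below shows the resulting symbol rate, ignoring line coding and FEC overhead.

```python
# How PAM4 reaches the 224 Gb/s per-lane rate mentioned above:
# 4 amplitude levels carry 2 bits per symbol, so the symbol (baud) rate
# is half the bit rate. Nominal numbers; line coding/FEC are ignored.

import math

bit_rate_gbps = 224
bits_per_symbol = math.log2(4)            # PAM4: 4 levels -> 2 bits/symbol
baud_rate_gbd = bit_rate_gbps / bits_per_symbol

print(f"{bit_rate_gbps} Gb/s PAM4 -> {baud_rate_gbd:.0f} GBd "
      f"(vs. {bit_rate_gbps} GBd if NRZ were used)")
# -> 112 GBd
```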

Optical connectors — QSFP, QSFP-DD, and OSFP form factors serve as the physical interface for both optical and short-range electrical Ethernet connections. These transceiver formats are widely deployed across NICs, switch ports, and optical modules, and they support PAM4 modulation to maintain signal performance across various deployment scenarios.

Liquid-cooled connectors

As shown in Figure 3, an increasing number of high-performance AI accelerator racks rely on liquid cooling. Many of the connectors used in these systems must meet stringent mechanical and thermal requirements to ensure safe, reliable operation.

Figure 3. A liquid-cooled GPU server with integrated quick-disconnect fittings and manifold connections for high-density AI training workloads. These connectors are engineered to support safe, high-throughput cooling in systems such as the NVIDIA HGX H100 platform. (Image: Supermicro)

These connectors typically withstand temperatures up to 50°C (122°F), support coolant flow rates up to 13 liters per minute (LPM), and maintain low pressure drops of around 0.25 pounds per square inch (psi). They provide leak-free operation with water-based and dielectric fluids, resist corrosion, and integrate easily with in-rack manifolds.

Most liquid-cooled connectors incorporate quick-disconnect functionality for dripless maintenance access. Large internal diameters — often around 5/8 inch — support high flow rates across AI racks. Some offer hybrid designs that combine high-speed data transmission with liquid cooling channels. Others support compatibility with three-inch square stainless-steel tubing or feature ruggedized construction to withstand temperature fluctuations, pressure changes, and vibration.
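For a rough sense of how much heat a 13 LPM coolant loop can carry away, the sketch below applies the basic heat-transport relation Q = m_dot * c_p * delta_T. The water properties and the 10°C coolant temperature rise are illustrative assumptions, not figures from the article.

```python
# Rough heat-transport estimate for the coolant flow rate cited above.
# Assumes plain water and a 10 degC temperature rise across the loop;
# both are illustrative assumptions.

FLOW_LPM = 13           # coolant flow rate, liters per minute
DELTA_T_C = 10          # assumed coolant temperature rise, degC
RHO_KG_PER_L = 1.0      # density of water
CP_J_PER_KG_K = 4186    # specific heat of water

mass_flow_kg_s = FLOW_LPM * RHO_KG_PER_L / 60
heat_watts = mass_flow_kg_s * CP_J_PER_KG_K * DELTA_T_C

print(f"~{heat_watts / 1000:.1f} kW removed at {FLOW_LPM} LPM "
      f"with a {DELTA_T_C} degC rise")
# -> ~9.1 kW
```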

Summary

AI data centers depend on various interconnects and physical connectors to link accelerator cards, enable high-speed data exchange, and facilitate large-scale parallel processing. These components are critical in maintaining performance, signal integrity, and mechanical reliability across tightly coupled clusters.
