Apple has joined the board of directors for the Ultra Accelerator Link Consortium, giving it more of a say in how the architecture for AI server infrastructure will evolve.
The Ultra Accelerator Link Consortium (UALink) is an open industry standards group that develops the UALink specifications. Since the interconnect could become a key element in building artificial intelligence models and accelerators, the standards it produces could be massively beneficial to the future of AI itself.
On Tuesday, the consortium announced that three more members had been elected to its board. Apple was one of the trio, alongside Alibaba and Synopsys.
Since its incorporation in October 2024, the consortium has grown to more than 65 member companies.
"UALink shows great promise in addressing connectivity challenges and creating new opportunities for expanding AI capabilities and demands," said Becky Loop, Director of Platform Architecture at Apple. "Apple has a long history of pioneering and collaborating on innovations that drive our industry forward, and we're excited to join the UALink Board of Directors."
UALink Consortium Board Chair Kurtis Bowman welcomed the three companies to the board. "The continued support for the Consortium will help accelerate adoption of this key industry standard, defining the next-generation interconnect for AI workloads," he said.
Interconnect to the future
UALink is described as a "high-speed, scale-up accelerator interconnect technology that advances next-generation AI cluster performance." The consortium tasks itself with developing the technical specifications for the interconnects that sit between AI accelerators, such as GPUs.
In short, the interconnects provide high-bandwidth connectivity between processing components, minimizing bottlenecks and enabling fast communication. In this case, the aim is to let multiple GPUs or AI chips communicate with minimal lag, so they can work together as if they were one larger chip.
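To make the bandwidth point concrete, here is a minimal, purely illustrative Python sketch of splitting one workload across a group of accelerators. Every number and function name in it is hypothetical and not drawn from UALink; it only shows how a slow link turns communication into the bottleneck.

```python
# Toy model of splitting one large workload across several accelerators.
# All figures are hypothetical and exist only to illustrate why interconnect
# bandwidth matters; they are not UALink numbers.

def time_on_pod(total_flops, flops_per_chip, data_exchanged_bits,
                link_bits_per_sec, num_chips):
    """Rough estimate: compute is divided across chips, but the chips must
    also exchange intermediate results over the interconnect."""
    compute_time = total_flops / (flops_per_chip * num_chips)
    communication_time = data_exchanged_bits / link_bits_per_sec
    return compute_time + communication_time

# Hypothetical workload: 1e18 FLOPs, chips running at 1e15 FLOP/s,
# and 8 Tb of intermediate data to exchange between them.
for link_speed in (100e9, 800e9):  # link speed in bits per second
    t = time_on_pod(1e18, 1e15, 8e12, link_speed, num_chips=8)
    print(f"link {link_speed / 1e9:.0f} Gb/s -> ~{t:.0f} s")
```

With the slower link, the chips spend a large share of their time waiting on data rather than computing, which is exactly the kind of bottleneck a faster interconnect is meant to remove.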
This is similar in concept to the interconnect Apple uses in its Apple Silicon Ultra chips, to connect two Max chips together.
The UltraFusion interconnect in the M1 Ultra - Image Credit: Apple
The concept, when it comes to UALink and AI servers, is that the interconnect would link multiple chips together. As UALink describes it, "hundreds of accelerators in a pod," with the interconnect also enabling simple load and store semantics "with software coherency."
In simple terms, UALink envisions an interconnect that ties many AI chips and GPUs together with extremely fast communication between the components, all so they can work faster for AI development and processing.
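As a rough illustration of what "load and store semantics" across a pod could look like, here is a hypothetical Python sketch in which every accelerator's memory is mapped into one pod-wide address space. The class and method names are invented for this example and do not reflect the UALink programming model, and the "software coherency" aspect, where software decides when writes become visible to other accelerators, is omitted for brevity.

```python
# Hypothetical sketch of pod-wide load/store: each accelerator exposes a chunk
# of memory, and any accelerator can read or write another's memory through
# the interconnect. Structure is invented purely for illustration.

class Pod:
    def __init__(self, num_accelerators, words_per_accelerator):
        self.words_per_accelerator = words_per_accelerator
        # One flat list per accelerator stands in for its local memory.
        self.memories = [[0] * words_per_accelerator
                         for _ in range(num_accelerators)]

    def _locate(self, global_address):
        # Map a pod-wide address onto (accelerator index, local offset).
        return divmod(global_address, self.words_per_accelerator)

    def store(self, global_address, value):
        chip, offset = self._locate(global_address)
        self.memories[chip][offset] = value

    def load(self, global_address):
        chip, offset = self._locate(global_address)
        return self.memories[chip][offset]

pod = Pod(num_accelerators=4, words_per_accelerator=1024)
pod.store(3000, 42)      # lands in accelerator 2's local memory
print(pod.load(3000))    # any accelerator can read it back: 42
```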
Currently, the group intends to issue the UALink 1.0 specification in the first quarter of 2025. It is expected to allow for up to 200Gbps of bandwidth per lane, with the possibility of connecting up to 1,024 accelerators in an AI pod.
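Some quick back-of-the-envelope arithmetic shows what those figures could mean in aggregate. The per-lane rate and the pod size are the ones quoted above; the number of lanes per accelerator link is an assumption made purely for illustration.

```python
# Back-of-the-envelope math using the figures quoted above. The per-lane rate
# and maximum pod size come from the article; lanes per link is an assumption.

GBPS_PER_LANE = 200       # quoted UALink 1.0 per-lane bandwidth
LANES_PER_LINK = 4        # hypothetical lane count for one accelerator link
MAX_ACCELERATORS = 1024   # quoted maximum accelerators per pod

per_link_gbps = GBPS_PER_LANE * LANES_PER_LINK
print(f"One link: {per_link_gbps} Gb/s ({per_link_gbps / 8} GB/s)")
print(f"Pod-wide, one such link per accelerator: "
      f"{per_link_gbps * MAX_ACCELERATORS / 1000:.0f} Tb/s aggregate")
```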
A future Apple benefit
As a company forging ahead in the world of AI development, in part through its introduction of Apple Intelligence, Apple has a vested interest in guiding AI developments.
Indeed, there are multiple ways Apple can benefit from its seat on the UALink board.
The most obvious is the development of high-performance AI servers. Apple has already considered using various systems to develop the AI models used in its products, but better hardware can speed up training, or allow more work to take place simultaneously.
Ultimately, this can save money on resources, or deliver more benefit for the same spend.
This wouldn't just apply to model training, as it's also possible that improved servers using the interconnects could handle cloud-based queries.
Apple does try to perform its processing on-device, but it also employs servers for tougher off-device queries. With faster servers, these queries could be answered more quickly, or with more processing applied, than at present.
There may also be an element relating to on-device processing. While UALink is intended to connect large numbers of components, Apple could feasibly apply what it has learned to its own hardware.
Aside from the interconnect in its Ultra chips, Apple relies heavily on high-speed connectivity across its chips in general. Optimizing how its system-on-chip designs work will improve their performance, which benefits end users more directly.
This last goal could be extremely useful for future chips, but whether it will make its way into Apple Silicon isn't clear at this time. The more certain near-term use is in server hardware.
Even so, with the first-generation specification arriving within months, it may still be a long time before the interconnect becomes commonly used in the AI field.