Tech giants form an industry group to help develop next-gen AI chip components 

Image Credits: Kwarkot / Getty Images

Intel, Google, Microsoft, Meta and other tech heavyweights are establishing a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of the components that link together AI accelerator chips in data centers. 

Announced Thursday, the UALink Promoter Group — which also counts AMD (but not Arm), Hewlett Packard Enterprise, Broadcom and Cisco among its members — is proposing a new industry standard to connect the AI accelerator chips found within a growing number of servers. Broadly defined, AI accelerators are chips, ranging from GPUs to custom-designed silicon, that speed up the training, fine-tuning and running of AI models. 

“The industry needs an open standard that can be moved forward very quickly, in an open [format] that allows multiple companies to add value to the overall ecosystem,” Forrest Norrod, AMD’s GM of data center solutions, told reporters in a briefing Wednesday. “The industry needs a standard that allows innovation to proceed at a rapid clip unfettered by any single company.” 

Version one of the proposed standard, UALink 1.0, will connect up to 1,024 AI accelerators (GPUs only) across a single computing "pod." (The group defines a pod as one or more server racks.) UALink 1.0, based on "open standards" including AMD's Infinity Fabric, will allow direct loads and stores between the memory attached to AI accelerators, and will generally boost speed while lowering data transfer latency compared with existing interconnect specs, according to the UALink Promoter Group. 
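To give a sense of what those load/store semantics mean in software terms, the sketch below uses CUDA's existing peer-to-peer memory access within a single machine, where a kernel running on one GPU writes straight into another GPU's memory. It is only an analogy, not UALink itself: the standard defines a hardware interconnect rather than a software API, and the device IDs and buffer size here are illustrative assumptions.

```cuda
// Illustrative sketch only. UALink is a hardware interconnect spec, not a
// software API; this plain CUDA peer-to-peer example simply shows what
// "direct loads and stores between memory attached to AI accelerators"
// looks like to software today within one node. Device IDs and sizes are
// arbitrary assumptions.
#include <cuda_runtime.h>
#include <cstdio>

// Kernel launched on GPU 0 stores directly into a buffer that physically
// resides in GPU 1's memory: a plain store over the interconnect, with no
// intermediate staging copy through host memory.
__global__ void store_remote(float *remote_buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) remote_buf[i] = 2.0f * i;  // direct store to peer memory
}

int main() {
    const int n = 1 << 20;

    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);  // can GPU 0 reach GPU 1?
    if (!can_access) {
        printf("No peer access between GPU 0 and GPU 1 on this system\n");
        return 0;
    }

    // Allocate the buffer in GPU 1's memory.
    cudaSetDevice(1);
    float *buf_on_dev1 = nullptr;
    cudaMalloc(&buf_on_dev1, n * sizeof(float));

    // Switch to GPU 0, map GPU 1's memory, and store into it directly.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    store_remote<<<(n + 255) / 256, 256>>>(buf_on_dev1, n);
    cudaDeviceSynchronize();

    // Read one value back to the host to confirm the remote stores landed.
    float sample = 0.0f;
    cudaMemcpy(&sample, buf_on_dev1 + 42, sizeof(float), cudaMemcpyDeviceToHost);
    printf("buf_on_dev1[42] = %f\n", sample);  // expect 84.0

    cudaSetDevice(1);
    cudaFree(buf_on_dev1);
    return 0;
}
```

The point of the analogy is the access pattern, not the API: a UALink pod would extend this kind of direct load/store addressing of another accelerator's memory across up to 1,024 devices, rather than confining it to GPUs that share a host.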

Content courtesy of TechCrunch