Cisco Systems (CSCO.O) unveiled new networking chips designed specifically for AI supercomputers, posing a challenge to competitors Broadcom (AVGO.O) and Marvell Technology (MRVL.O). The Silicon One series chips are currently being tested by major cloud providers, though Cisco has not disclosed which companies. The cloud computing market is dominated by Amazon Web Services, Microsoft Azure, and Google Cloud, according to BofA Global Research.
The growing popularity of AI applications such as ChatGPT has underscored the need for efficient communication between the specialized chips, known as graphics processing units (GPUs), that power them. To address this need, Cisco's latest generation of Ethernet switching chips, the G200 and G202, offers double the performance of the previous generation and can connect up to 32,000 GPUs.
According to Rakesh Chopra, a Cisco fellow and former principal engineer, the G200 and G202 will be the most powerful networking chips available, powering AI/ML workloads while delivering a highly efficient network. Cisco says the chips can handle AI and machine learning tasks with 40% fewer switches, lower latency, and better power efficiency.
In April, Broadcom introduced its Jericho3-AI chip, which can also connect up to 32,000 GPUs. Competition in the market for AI supercomputing chips is intensifying as companies race to meet the demands of AI applications with better performance and efficiency.