“Through a diversity of chips, we will supply powerful computing power to everything from Microsoft’s Copilot to the most powerful generative AI models in the world.” – Satya Nadella, CEO of Microsoft
On the 15th (local time), at the Seattle Convention Center, about a 20-minute drive from Microsoft’s Redmond campus outside Seattle, participants who had lined up before 8 a.m. received white lanyards bearing the MS logo. They were among the roughly 4,500 developers and partner-company officials attending Ignite 2023, the annual developer event held by Microsoft, the company that has stood at the center of this year’s generative AI boom.
At 9 a.m., CEO Nadella, wearing his trademark muted knit sweater and black pants, appeared on stage, and a moment of silence fell as he prepared to speak. Then, as he raised his arm holding a palm-sized AI chip, the silence turned into cheers. It was the moment Microsoft unveiled Maia 100, the first AI accelerator it has designed and produced in-house. Next to him stood a giant data center rack about 2 meters tall. Microsoft designed the rack’s internals as well, explaining that everything from the cooling system to the AI chip had been optimized for Microsoft Azure services.
He said, “We provide comprehensive infrastructure through more than 60 data center regions around the world,” and “Being the world’s computing power means that Microsoft must become the best systems company, one that encompasses every kind of infrastructure.” It was also the moment when Microsoft, a software company for nearly 50 years since its founding in 1975, officially declared its transformation into a systems company.
Extensive cooperation with Nvidia even as it launches its own AI chip
Maia 100, a chip specialized for training and inference of large language models, is manufactured on TSMC’s 5-nanometer process and packs 105 billion transistors. Notably, Microsoft collaborated with its close partner OpenAI from the early stages of chip design. Sam Altman, CEO of OpenAI, said, “From the moment we partnered with Microsoft, we co-designed Azure’s AI infrastructure, reflecting the requirements of every layer of our AI models and training,” adding, “We worked together to draw up Azure’s end-to-end AI blueprint, from refining the AI chip to training the models.” He then emphasized, “Maia will pave the way to train models and deliver them to customers at a lower price.”
James Sanders, senior analyst at CCS Insight, whom we met at the venue that day, said, “We should pay attention to the fact that they mentioned the diversity of chips. In the infrastructure business, diversity is an important factor, and to that end they showed strong partnerships with Nvidia and AMD while at the same time presenting their own chip.” He added, “I think Microsoft was able to build a wide range of use cases because it has an internal customer in OpenAI.”
At the same time, Microsoft also unveiled its first self-made data center CPU, Cobalt 100. The 128-core chip is based on a design from Arm, the British chip design company. CEO Nadella said, “We have already installed it in Microsoft’s data centers and are using it to run the Microsoft Teams service on Azure, and the service performance has improved dramatically.”
Integrated into the Copilot ecosystem
On the same day, Microsoft rebranded its AI chatbot Bing Chat under the name Copilot and made it usable at no additional cost with a simple login. It also unveiled Copilot Studio, which lets each customer build their own chatbot. This means that users’ previously scattered data can be integrated, and each user can have their own AI assistant.
By Jeong Hye Jin