Baidu Inc. on Tuesday unveiled Kunlun, China’s first AI chip for edge and cloud computing, in Beijing during the company’s second annual Create AI developer conference. Developed to meet the high-performance requirements of varied AI scenarios, Kunlun places Baidu among the ranks of AI processor makers such as Google, Nvidia, and Intel. The Kunlun AI chip is built to handle a wide range of cloud and edge scenarios, including public clouds, data centers, and autonomous vehicles, and it comes in two variants: the 818-100 inference chip and the 818-300 training chip.
The Baidu Kunlun AI chip is positioned as an affordable, high-performance solution for demanding AI workloads. The chip draws on Baidu’s AI ecosystem, including the PaddlePaddle deep learning framework and AI scenarios such as search ranking. Baidu credits its years of experience optimizing AI services and frameworks for enabling it to build a world-class AI chip.
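For context on the kind of workload the chip targets, the sketch below shows a minimal PaddlePaddle model of the sort used for search-ranking-style scoring. It runs on stock PaddlePaddle 2.x on any hardware, uses no Kunlun-specific features, and its architecture, layer sizes, and names (such as TinyRanker) are illustrative assumptions rather than Baidu code.

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

# A tiny scoring network of the kind used in search ranking:
# it maps a query/document feature vector to a single relevance score.
# Architecture and dimensions are illustrative, not Baidu's.
class TinyRanker(nn.Layer):
    def __init__(self, feature_dim=128, hidden_dim=64):
        super().__init__()
        self.fc1 = nn.Linear(feature_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

model = TinyRanker()
model.eval()

# A batch of 8 hypothetical query/document feature vectors.
features = paddle.rand([8, 128])
with paddle.no_grad():
    scores = model(features)
print(scores.shape)  # [8, 1]: one relevance score per candidate
```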
Baidu began developing FPGA-based (field-programmable gate array) deep learning accelerators in 2011. Kunlun, which comprises thousands of small cores, delivers roughly thirty times the computational capability of that original FPGA-based accelerator. Other specifications include 260 tera-operations per second (TOPS) of compute, 512 GB/s of memory bandwidth, fabrication on Samsung’s 14 nm process, and a power rating of 100 watts.
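To put those numbers in perspective, here is a quick back-of-the-envelope calculation of compute efficiency from the published figures above; the derived ratios are illustrative only, not official Baidu metrics.

```python
# Back-of-the-envelope figures derived from the published Kunlun specs above.
peak_tops = 260      # peak throughput, tera-operations per second
power_watts = 100    # rated power consumption, watts

tops_per_watt = peak_tops / power_watts          # 2.6 TOPS per watt
ops_per_joule = peak_tops * 1e12 / power_watts   # 2.6e12 operations per joule

print(f"{tops_per_watt:.1f} TOPS/W")
print(f"{ops_per_joule:.2e} ops per joule")
```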
The Baidu Kunlun AI chip supports common open-source deep learning algorithms and can also be applied to a wide range of AI applications such as voice recognition, natural language processing, search ranking, and recommendation. The rapid growth of AI applications is driving steadily rising demand for computational power, and traditional chips, unable to supply the enormous computing power required, are holding back the pace of AI development. Baidu developed the Kunlun AI chip for large-scale artificial intelligence workloads to answer this high-performance demand, and the Chinese tech giant believes the contribution will open doors for significant innovation in the open AI ecosystem.
To contribute further to the expansion of the open AI ecosystem, Baidu plans to keep iterating on the Kunlun AI chip, evolving it to handle future data volumes and computational power requirements. The company says it will continue building “chip power” to meet the needs of fields including voice and image recognition, intelligent devices, and autonomous vehicles.
Also at the event, Baidu presented its Apollo program, which it is using together with Chinese bus maker King Long to build the Apolong autonomous driving buses. Around 100 Apolong buses have already been produced and are scheduled to hit the road in early 2019, operating commercial passenger services in major Chinese cities.
Baidu also released Brain 3.0, an upgraded version of its AI service suite, during the conference. The upgraded platform now offers about 110 AI services, including natural language processing, computer vision, and facial recognition features, as well as AI models that can be trained through a simple drag-and-drop interface.