Moffett AI, an AI chip company based in Silicon Valley and Shenzhen, has completed an angel round of financing of nearly US$10 million. The round was led by Chinese venture capital firm Keytone Ventures, with participation from Creation Venture Partners and Cloud Angel Fund.
With the latest proceeds, the firm will continue to develop high computing power cloud-based AI chips.
Established in Silicon Valley in 2018, Moffett AI's core business is developing next-generation AI chips with high computing power. It has achieved small-scale mass production of FPGA-based chips and is now developing a cloud SoC and a public-cloud SaaS platform. By optimizing the computing model to support fully sparse neural networks, it aims to provide a general-purpose AI computing platform with ultra-high computing power and ultra-low power consumption.
As deep learning and its algorithms continue to advance, AI chips, the platforms that supply computing power, are shifting toward more optimized dataflow architectures. The market is growing rapidly: the global AI chip market was valued at US$6 billion in 2018 and is expected to reach US$91 billion by 2025, a compound annual growth rate of roughly 45%, according to Allied Market Research.
The company's CEO Wang Wei believes that AI chip architecture should integrate hardware, software, and algorithms, and be designed around where algorithms are heading. This contrasts with today's TPUs, NPUs, and other xPUs, which are built around dense matrix computation. Current deep learning models can achieve high accuracy on a single task, but as networks grow into huge multi-task systems, Wang argues, sparse neural networks will become the dominant direction in deep learning. By building an architecture around dynamic sparse neural networks, the company says it can deliver a significant increase in equivalent computing power from the same resources, giving algorithm researchers better hardware support for studying deeper and more complex models. Benchmarked against Habana and Nvidia, the company claims the computing power and energy-efficiency metrics of its next-generation architecture will exceed both by 5-10 times.
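The claimed gain from sparsity can be illustrated with a toy calculation (this is a generic sketch of weight pruning, not Moffett AI's actual architecture or method): if hardware can skip zero-valued weights, pruning 90% of a layer's weights cuts the multiply-accumulate count roughly tenfold for the same nominal layer size.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune(weights, sparsity):
    """Zero out the smallest-magnitude weights so `sparsity` fraction become zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def matvec_macs(weights):
    """Multiply-accumulates per matrix-vector product, if zeros are skipped."""
    return int(np.count_nonzero(weights))

dense = rng.standard_normal((512, 512))
sparse = prune(dense, 0.9)            # keep only ~10% of the weights

dense_macs = matvec_macs(dense)       # 512 * 512 = 262144 MACs
sparse_macs = matvec_macs(sparse)     # ~26000 MACs
print(dense_macs / sparse_macs)       # roughly 10x fewer operations
```

Realizing this in silicon is the hard part: the nonzero weights land at irregular positions, so a dense systolic array gains nothing, which is why a sparsity-aware dataflow architecture is needed to turn the arithmetic savings into real throughput.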
With about 30 employees, Moffett AI has R&D centers in Shenzhen and Silicon Valley. The founding team combines young machine learning scientists from Carnegie Mellon University with senior Silicon Valley computer architects and chip design engineers, who average more than 15 years of chip development experience at Intel, Qualcomm, Marvell, and other leading companies. The technical team covers the full stack from architecture design to chip engineering, including upper-layer AI application algorithms, sparse optimization algorithms, software and hardware toolchains, architecture, and front-end and back-end implementation. Dr. Zhibin Xiao, a senior chip architect from Alibaba's DAMO Academy, has also joined Moffett AI as chief architect.
Zhu Xun, executive director of Keytone Ventures, said: "The development of artificial intelligence is constrained by insufficient computing power. This is a common industry problem, and we believe there is a large market opportunity." Zhu added that the Moffett AI team's core technology, including its sparse optimization algorithms and integrated hardware-software design, is original work rather than incremental imitation of industry giants, with products designed around the characteristics and needs of AI computing. Its technical solution also has a credible implementation path, and is very likely to deliver a large improvement in computing power over the existing baseline.