According to media reports citing insiders, ByteDance is developing its own cloud AI chips and ARM-based server chips. The company is also actively building out a chip team, and many chip-related positions at ByteDance are listed on major recruitment platforms.
ByteDance is entering the chip field through cloud AI chips and ARM server chips, which is also the mainstream route for Internet companies moving into the chip market.
From the perspective of the international market, ever since Google unveiled its TPU in 2016, it has become a trend for Internet giants to enter the chip field. Chinese Internet giants have likewise expanded into chips, typically starting with cloud AI chips or ARM server chips.
In terms of market size, the latest ABI Research report expects the global cloud AI chip market to reach 10 billion U.S. dollars by 2024.
Judging from the current market, however, Nvidia holds an absolute advantage in cloud AI chips. Yet because different Internet companies have built different ecosystems, their chip performance requirements also differ. In this situation, customized chips may allow them to leverage their ecosystems more effectively.
However, compared with cloud AI chips, which are harder to develop and highly specialized, GPUs used for AI acceleration remain the mainstream in cloud applications. ByteDance has therefore also been active in the GPU field: in February of this year, it invested in Moore Threads, a local GPU chip design startup.
On the other hand, ByteDance’s platforms – whether Toutiao or the Douyin app – must carry more content in the 5G era, which requires the support of server chips for data centers and edge computing.
At the same time, as market demand for server chips has grown, architectures such as ARM and the open-source RISC-V have begun to gain ground in the server market, and many manufacturers want to seize this opportunity to enter a market long dominated by x86.
The ARM server chip market in particular, as an emerging semiconductor segment, has attracted many manufacturers.
Marvell is among the leaders, but after several years of exploration it has begun to shift from the general-purpose market to customized chips. This suggests that specific, well-defined markets may be more conducive to the commercial adoption of ARM server chips – and Internet companies, as users of server chips, best understand the chip performance their own ecosystems require.
The rise of ARM server chips puts semiconductor manufacturers on a more level starting line; combined with their own understanding of demand, Internet companies may well make greater inroads into the server chip market.
For Internet companies, entering the server chip market means reducing dependence on third-party suppliers on the one hand, and cutting costs through self-developed chips on the other.
Amazon, the international Internet giant, is a case in point. According to earlier foreign media reports, after introducing two generations of data center chips, Amazon began using its self-developed chips to handle part of the computation for the Alexa voice assistant.
In early tests, a cluster of Amazon’s self-developed Inferentia chips produced the same results as Nvidia T4 chips while reducing latency by 25% and cost by 30%.
Considering ByteDance’s situation, choosing the ARM architecture for its server chips is not only in line with the current market trend but also conducive to the company’s long-term development.
Beyond the ARM architecture, which many manufacturers are pursuing, the industry also regards RISC-V as an open-source architecture capable of supporting artificial intelligence applications.
Accordingly, in January this year, ByteDance also invested in Stream Computing, a company developing artificial intelligence processors based on the RISC-V instruction set architecture.
According to its official website, the NeuralScale NPC core architecture independently developed by Stream Computing is a dedicated computing core based on the RISC-V instruction set and aimed at neural network workloads. It offers a high power-efficiency ratio and strong programmability, meeting the needs of diverse cloud-based artificial intelligence algorithms and applications.