The Integration of AI and Hardware: Opportunities and Challenges
In the current AI era, AI can be deeply integrated with hardware in many scenarios, but the main significance of most of these products is to demonstrate how a product gets defined through wishful thinking when no real demand exists. After all, not every piece of hardware becomes AI smart hardware just because it is connected to an API;
When connecting a large model to hardware, you need to consider the architecture and how it will be used. Should the model run on-device or be deployed in the cloud? There is also content-quality review, prompt design for orchestrating the model, and so on; even if you only connect to an API, you still need to route requests sensibly to different models according to the user's scenario;
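The scenario-based routing just described can be sketched minimally as follows. This is an illustration only: the scenario labels and model names are hypothetical placeholders, not real endpoints.

```python
from typing import Dict

# Minimal sketch of scenario-based model routing, as described above.
# Scenario labels and model names are hypothetical placeholders.
SCENARIO_MODELS: Dict[str, str] = {
    "chitchat": "small-fast-model",       # latency-sensitive small talk
    "translation": "multilingual-model",  # quality matters more than speed
    "device_control": "on-device-model",  # privacy plus offline fallback
}

DEFAULT_MODEL = "small-fast-model"

def route_model(scenario: str) -> str:
    """Pick a model for the user's scenario, with a cheap default."""
    return SCENARIO_MODELS.get(scenario, DEFAULT_MODEL)

print(route_model("translation"))  # multilingual-model
print(route_model("unknown"))      # small-fast-model
```

In a real product the table would also encode cost, latency, and compliance constraints per scenario; the point is that "connected to an API" alone is not a routing strategy.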
The recently popular AI companion track is essentially hardware companies trying to expand their boundaries, a way of reaching for a second growth curve as their core business declines;
For example, Samsung's Ballie is positioned as an "AI companion robot" with a built-in projector: it can play video clips for users at any time and can also link up and control smart devices around the home. LG's companion "AI housekeeper" is designed to patrol the home and check on pets. Then there are smart vegetable growers, smart canes, AI glasses, smart coffee machines that share brewing parameters with users around the world, smart spice dispensers, smart bathtubs for bathing birds, the Freewrite Wordrunner keyboard with built-in timers and counters, Aptera's dolphin-shaped solar three-wheeled electric car, the temple-mounted AI wearable Omi, the cat-bed-style air purifier AeroCatTower, Lenovo's rollable-screen ThinkBook Plus Gen 6 Rollable, TCL's detachable companion robot Ai Me, and so on.
01
As a result, the market is full of AI hardware under many different names. In terms of product form, the mainstream falls into three directions: "dolls/plush toys", "smart hardware", and "robots". Some are genuine and some are not, and everyone is chasing the concept;
Two routes are visible here:
1. Use the large model to push further toward smart-home companion robots. The underlying logic is that hardware robots and the operation of a complete smart-home ecosystem are what big companies excel at; Samsung, LG, and, in China, TCL are all on this track. These robots have built-in cameras and sensors, can move autonomously, talk to users, and help control indoor smart devices, making them a good offline carrier for AI. How far they go depends on actual deployment, the pricing system, and subsequent retention; whether they can be commercialized is hard to say, and this too is an experiment;
2. The companion track where emotional-value large models land: build a cute, pleasing shell, connect it to a cloud model, and let it talk to people. These products are easier to ship, but they must be segmented by user group, and tuning the large model and wiring it into the hardware functions are the core work; otherwise the price war starts again, or a speaker in a shell simply gets labeled "AI companion";
Setting aside the high-end packaging, the smart-device control in Samsung's Ballie is really just a gateway. The question is: who needs a gateway and projector that follows them around? Meanwhile, companion robots do have technical barriers; they depend on large language models thoroughly trained and tuned for segmented scenarios. But the current shells do not seem capable of letting a robot understand a user's intent instantly and accurately. What actually let AI catch the first wave of attention in hardware was precisely those unpretentious "pleasing shells". Still, the focus remains the accuracy of human-computer interaction, improvements in AI chip computing power, and the empowerment of neural network models; in particular, the latter two, chip compute and neural-network models, are even more core than the interaction itself.
Therefore, compared with companies and large manufacturers focused on embodied intelligence, this type of hardware company finds its direction more easily. The catch is that start-ups find it just as easy to invent ideas and manufacture concepts on their own, and these entrepreneurs are fully aware of how nonsensical their products are.
Take the AI oven, which claims to use AI to precisely control heat radiation. It is hard to imagine what exactly the "AI control" is controlling: food contains protein, and I struggle to see how AI-managed temperature keeps protein and the rest from charring any differently than ordinary controls do. The laws of physics apply to everything.
The handheld AI device Rabbit R1 was very popular last year. It offers navigation, ride-hailing, and other services through voice input; the idea is to issue commands without opening an app, and the device works standalone over a SIM card or Wi-Fi. Yet it failed to occupy any niche in the terminal market: it cannot replace the phone, and it depends heavily on voice interaction, while consumers' real reliance on voice interaction is low to begin with. It can neither replace touch interaction nor meet users' privacy needs in public. Once phones integrated large language models, with models deployed locally, this type of product was destined to die in the open. To be fair, hardware with AI as a selling point is not all gimmickry; the AI PCs shipped by the OEMs, for instance, are reasonable attempts.
02
The real value of AI lies in finding practical scenarios and providing segmented, scarce functions or services, thereby generating both segmented emotional value and practical value;
At the same time, AI should not only provide product value on the C-end but also match resources on the B-end to exert leverage. The essential reason is that AI lands as software-driven hardware. The moat of the big players lies in the new "software tax": the maturity of the smartphone rests on Apple's innovation in intelligent interaction, which in turn laid the foundation for the traffic revenue of companies like ByteDance. Apple sits at the top of the ecosystem chain with both a software-charging model and hardware devices on the ground, which ultimately forms the core barrier of the user ecosystem and creates the core assets of traffic and user habits. What AI hardware makers actually want to build are products with an ecological niche. If this pattern is not broken, AI large-model technology will remain positioned as something that improves efficiency and enriches the product supply and user experience of the big players. The good ending is doing contract work for the giants; the bad ending is vanishing in a gust of wind;
03
The core direction of AI hardware is redesigned hardware and a redesigned interactive experience;
The interaction pipeline of AI hardware is:
Multimodal signal input → sensor computing → AI model processing → interaction-mode definition → UI/UX implementation;
In this architecture, the AI model computes in the cloud while the hardware extends capabilities on top of its original functions; in the end, the hardware and sensors support the model's operation so that the model's output can be delivered well and a genuinely interactive form realized.
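The pipeline above can be sketched as a chain of tiny stages. This is a toy illustration only; every function body here is a hypothetical stand-in for the real sensor firmware, cloud model call, and UI layer.

```python
# Toy sketch of the pipeline above: multimodal input → sensor compute →
# AI model → interaction mode → UI/UX. All names are illustrative.
def sensor_stage(raw_audio: bytes) -> dict:
    # On-device preprocessing: wake-word gating, feature extraction.
    return {"has_speech": len(raw_audio) > 0}

def model_stage(features: dict) -> str:
    # Stand-in for the cloud model call that real products make here.
    return "turn_on_light" if features["has_speech"] else "noop"

def interaction_stage(intent: str) -> str:
    # Map the model's intent onto an interaction mode for the UI/UX layer.
    return "voice_confirmation" if intent != "noop" else "idle"

event = interaction_stage(model_stage(sensor_stage(b"hello")))
print(event)  # voice_confirmation
```

The point of the layering is that each stage can be costed and swapped independently, which is exactly what the three layers below decompose.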
- In other words, the first layer of AI hardware rests on sensor cost accounting, which determines whether the commercial ROI adds up. It also quietly forces out laggard manufacturers and raises the hardware barrier; otherwise prices spiral into involution and nobody survives.
- The second layer is access to large AI models and the operation of their computing power;
- The third layer is the definition of the interaction mode.
Currently there is a debate among LUI (Language User Interface), voice-first interaction (VUI), and GUI (Graphical User Interface). Although GPT-4o's multimodal model now provides the technical support and delivers a better experience in certain specific scenarios, voice alone cannot become the mainstream interaction method on its own.
The three are explained as follows:
-
LUI is a natural-language dialogue interface: the application is presented as a conversation. It struggles with multi-threaded, multi-task operations and with locating information precisely, so it only suits single-point tasks with clear goals, and the information density of its output should not be too high. The most common Tmall Genie scenarios, for example, are asking about the weather and setting alarms.
-
VUI (voice-first interaction) fits natural communication habits and retains user context, but it needs specific one-on-one scenarios: voice assistants help when the user's hands are busy, such as while driving or cooking. Meta Ray-Ban glasses and AI headphones also fall here.
-
GUI is a graphical interface that lets users select or enter the information needed to complete a task; intelligent-assistant devices such as the Echo Show and smart-home control panels combine it with voice.
My understanding is that when a user gives input, the system must recognize and summarize it with a large model so that the user's intent can actually be executed. In principle:

1. At present, most real-time voice dialogue is implemented in three stages: STT → LLM → TTS. To feel like genuine human conversation, end-to-end latency needs to be around 500 ms;
2. End-to-end S2S model development: GPT-4o's voice model uses an end-to-end speech-in → speech-out architecture to cut latency;
3. LLMs already handle multilingual translation well, so cross-language real-time conversation is no longer a problem.
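The ~500 ms target from point 1 can be made concrete with a back-of-envelope budget. The per-stage numbers below are assumptions for illustration, not measurements; only the 500 ms figure comes from the text.

```python
# Back-of-envelope latency budget for the STT → LLM → TTS chain.
# The ~500 ms target is from the text; per-stage numbers are assumptions.
BUDGET_MS = 500.0

def naive_latency(stt_ms: float, llm_first_token_ms: float, tts_ms: float) -> float:
    # Worst case: the three stages run strictly one after another.
    return stt_ms + llm_first_token_ms + tts_ms

worst = naive_latency(stt_ms=180.0, llm_first_token_ms=250.0, tts_ms=120.0)
print(worst, worst <= BUDGET_MS)  # 550.0 False
```

With these plausible stage times the naive serial sum already misses the budget, which is why streamed, overlapping pipelines and end-to-end S2S architectures (point 2) matter.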
04
Hardware design should follow one of these directions:
1) Define a completely new category;
2) Innovate on and improve existing categories.
Big Internet companies tend to err here: they carry a software-development mindset and are largely unaware that the trial-and-error cost of hardware is far higher. If one small component fails verification, every link has to be rechecked, and tooling and mold costs are enormous. Hardware teams genuinely need a sense of awe.
In fact, the most prudent hardware development adds AI functions on top of existing, mature hardware categories: integration across categories and functions, such as adding video and graphics features to AI toys. Special attention must go to the trade-offs in product definition within the boundaries of current battery density and chip power consumption;
3) Combining with an ecosystem is the ultimate baseline direction, for example the "phone ecosystem" vs. "embodied AI companionship";
Mobile phones have always been the core of the consumer-electronics ecosystem. The computing and interaction patterns of PCs, phones, and tablets have largely fixed the physical form of hardware, and large-model algorithms are unlikely to change that fundamentally; this shift will not be completed within five years. Phones can cover most of the production and entertainment needs of most users. Today phone competition still revolves around weight, volume, battery life, and interaction mode; for now, phones will keep occupying the two-handed use scenario and remain the core ecosystem.
In terms of ecological position, phones cover all three layers of the AI-hardware design above. Wearables derived from them struggle to deliver a good experience without the phone's interaction; they must be used together with a phone. They basically have to cut into the phone's blank scenarios and must be lightweight. This is the current strategy of AI headphones and AI glasses: patch the one missing link in the phone ecosystem and complement its shortcomings. The moment a product depends on the phone's system yet cannot free your hands, it is doomed.
Hardware such as digital cameras and action cameras may be partially displaced by AI glasses. In the end everything returns to the phone ecosystem, where phone makers occupy the core ecological niche;
What entrants need to do is:
1) Pick a track large enough and integrate into the phone ecosystem;
2) Or take the marginal tracks the big players ignore, and either get acquired or build a unique ecological barrier;
- Consider your own position here: for example, integrate into the phone ecosystem and lean on Huawei, Xiaomi, or Apple, using their ecosystems to route traffic back to you for mutual benefit.
- Or rely on the supply chain plus independent distribution channels, hold on to your own cash flow, and wait for prices to rise; or go the vertical-segmentation route like Rokid.
- Or, in the end, be folded into the ecosystems of ByteDance, Alibaba, and the like.
The core logic behind Internet giants acquiring or injecting capital rather than building hardware themselves is this: although AI computing power determines whether hardware lands, hardware still carries a composite requirement, the ability to combine software and hardware. Internet companies run on data-driven, software-first principles, and their talent model and organizational management are detached from the data-only view of basic hardware demand.
Therefore, the best play for big companies is to invest in the AI-hardware track while maintaining deep insight into consumer needs. Turning software functions into hardware should be embedded in the real scenarios of segmented demand, for example collapsing multiple steps into one to form a consumption habit;
05
AI companion embodied hardware serves as the physical carrier of AI companionship.
AI companionship needs specific hardware as a carrier in order to locate user demand. We can take the AI voice secretary, AI smart toys, and companion robots, redefine their scenarios and functions, and use them to verify the rapid deployment of large models;
For example:
① The AI voice secretary relies on large models' strengths in long-text understanding and information extraction. It folds voice capture and conversion into daily life to cut out unnecessary next steps: it collects audio from the user's surroundings anytime, anywhere, triggers adaptation across multiple scenarios, and reuses supply-chain resources.
② AI smart toys are LUI made physical. Children's segmented scenarios have relatively low precision requirements, and a visible, tangible device delivers higher practical and emotional value; the threshold is low, and the products are basically usable. The adult track must find the right emotion plus a collector-community atmosphere. Although the hardware form is simple and easy to mass-produce quickly, in the long run IP licensing and binding will be this category's core competitiveness and the key factor behind a blind-box-style premium. At present, many companies simply load software onto hardware, call large models through APIs, and interact with users that way.
The hardware of an AI toy usually includes the chip, sensors, speaker, microphone, and battery; the software includes speech-recognition technology and the AI large model. The core question is whether movement and dialogue achieve a high degree of anthropomorphism, which comes down to the large model; on that basis, the comparatively complex emotional support that AI companion toys provide still deserves recognition.
Emotional capability relies on training small models in vertical domains to support more complex multimodal perception. The mainstream approach uses two layers: an underlying general-purpose large model plus a vertical small model. Major partners include MiniMax, Doubao, and Zhipu. In general, the core of an AI companion toy is always helping users solve problems, which clearly distinguishes it from traditional toys;
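The two-layer structure just described can be sketched as a fallback chain. This is an assumption-laden illustration: the function bodies are hypothetical stand-ins, not real model calls to MiniMax, Doubao, or Zhipu.

```python
from typing import Optional

# Sketch of the two-layer structure described above: a vertical small
# model tried first, with a general-purpose large model as fallback.
# Both function bodies are hypothetical stand-ins, not real model calls.
def vertical_small_model(text: str) -> Optional[str]:
    # Domain-tuned model: answers only inside its niche (here, bedtime).
    if "bedtime" in text:
        return "Time for a story!"
    return None  # outside the vertical domain

def general_large_model(text: str) -> str:
    # Stand-in for a cloud LLM API call.
    return f"[general reply to: {text}]"

def companion_reply(text: str) -> str:
    # Cheap, fast vertical model first; general model as the fallback.
    return vertical_small_model(text) or general_large_model(text)

print(companion_reply("bedtime please"))  # Time for a story!
```

The design choice is the usual one: the vertical model is cheaper, faster, and better tuned for its niche, while the general model guarantees coverage.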
③ The companion robot can be called the "embodiment" of the AI voice secretary, further extending and solidifying the value of emotion and companionship. These robots basically all use an AI large model's interactive capabilities and can build emotional connections through physical expressions. This is an integration and upgrade of the desktop smart speaker; offloading the camera sensor to the phone sidesteps compliance issues to a degree. It is also a direction derived from the robot vacuum, and a better one than robotic arms 👌🏻: robot vacuum + smart speaker = pet + cleaning;
Meanwhile, the core technical progress of embodied intelligence lies in generality across scenarios and tasks. Generality has two aspects:
- General form: adaptable to different embodiments
- General scenario: performing diverse tasks across different scenarios
Current embodied AI-companionship solutions have made some breakthroughs in physical interaction with people and objects in service and companionship scenarios, but the most critical piece still requires cooperation with the big players: the generality of synthetic data for embodied intelligence has yet to be proven, and once the sensor layout or hardware configuration changes, the training data and related components must be re-verified.
Although the three product types take different final forms, their core functions are basically the same: speech recognition, natural language processing, and machine learning. Simply put, AI companion toys aim to interact with users across sight, hearing, and touch through humanlike, animal-like, and IP-based anthropomorphism.
In terms of price, the gap between products is huge, ranging from a few hundred to tens of thousands. The commercial case behind this involves many factors: beyond differences in technology cost, the industry's own premium attributes, the gap between function and price, and inventory all need careful scrutiny;
Finally, AI companion toys can be divided into three main routes.
The first route focuses on entertainment, with products delivering emotional value through visual and sound sensors plus AI technology, such as Moflin and Ropet, the AI robot pet released by Mengyou Intelligent.
The second is education, which adds language, math, programming, and other knowledge on top of interaction and basically improves efficiency by combining voice with images, such as the AI early-education robot from Huohuotu and Shifeng Culture's AI Magic Star.
The third is elderly health care: monitoring health data and accompanying and caring for special groups. It is a more niche track; SoftBank's humanoid robot NAO, for example, can recognize emotions such as fear, sadness, or happiness.
06 How about the AR track?
That depends on the hardware industry chain, optical solutions, and the software ecosystem. The core technologies of the optical chain have not yet broken through, and the chain itself is still early; constrained by the boundaries of hardware technology, the category still cannot take off and will struggle to become a mass consumer product.
Overall, AR cannot deliver disruptive growth in the user base, which basically centers on game-related categories. If it can be applied on the content-production side, multimodal content-generation tools will aid interaction and reshape interactive scenarios: content creation and production will spawn more gameplay, genuinely realizing the "augmented reality" (AR) experience.
There are two landing points:
- the cost of AI content generation;
- lighter hardware and better battery life.
07 AI glasses?
Take Ray-Ban Meta as an example. The eyes are the highest-density information channel humans have, and glasses can likewise capture visual and audio information with ease. Judging from current user behavior, usage basically comes down to seeing + photography + staying lightweight; Ray-Ban Meta keeps its weight around 50 g. And TikTok and Instagram have taught everyone to shoot with cameras in public, which is now commonplace.
The product was defined around photography and video as the core functions; the Qualcomm AR1 chip significantly improved video and audio quality, and distribution covered all offline channels. The key was making a genuinely good pair of sunglasses, which is what made Ray-Ban Meta so popular among content creators and influencers. In many parts of the United States sunglasses are a hard need with a solid mass base, so the product scaled;
Moreover, Ray-Ban Meta's main selling point is decent video quality, not AI; Ray-Ban is already a good enough brand. These are essentially AI glasses riding on a brand halo: the brand effect plus Meta's ecosystem integration. It is not popular because of AI. It is a good brand that adds a sense of technology and some fun features; people see the price difference is small, realize an ordinary pair of Ray-Bans costs about the same, and buy it. For most AI glasses, the core selling point still comes from the glasses themselves, with AI functions layered on top.
AI glasses do not resemble a traditional manufacturing chain. Different optical and display solutions produce different glasses terminals, and those differences in turn shape the industry chain and its costs.
The upstream of the AI glasses industry chain is mainly composed of three categories:
1) Hardware
Optical modules, sensors, audio modules, batteries
2) Software (large language models): each company develops its own large model and AI interaction system
3) Other key components: CPU, memory chip, Bluetooth, WiFi
Midstream
1) ODM/OEM manufacturers : responsible for product design, manufacturing, quality control, etc.
2) OEM factories: responsible for contract manufacturing of AI-glasses-related business
3) Brands : AR/VR manufacturers, AI ecosystem manufacturers, traditional eyewear brands
The most basic downstream channels are:
1. Traditional optometry centers: offline stores such as hospitals and optical stores;
2. Internet sales platforms: Taobao, JD.com and other online platforms;
Current smart-glasses categories, with their advantages and disadvantages
Smart glasses can be roughly divided into the following categories according to their functional combinations and field of view:
1. Smart glasses without a display (weight can be kept under 50 g, meeting the lightweight requirement)
Audio glasses: very limited functionality for users
Camera + audio glasses: Ray-Ban Meta has achieved phased success, priced at $300
2. Smart glasses with a display (weight can be kept under 100 g, but not truly lightweight)
40-50° FOV (Thunderbird X2): lightweight display, price range $500-1,000
50-70° FOV (Orion): augmented reality; a prototype exists but is not mass-produced
100° FOV: close to a VR visual experience, but with an OST (optical see-through) solution this is beyond current technical boundaries
The essence here is that the limited interaction LUI provides overlaps heavily with TWS earbuds, so the follow-on pitch is not very convincing; the glasses also need to work with a phone to unlock more extended scenarios and deliver a better baseline experience.
Among display-equipped glasses, only the 40-50° FOV positioning can currently be delivered, and its usefulness is limited. On one side, weight and cost must be weighed; on the other, given the functions, it is hard to get much lighter. Landing scenarios currently concentrate on real-time translation, navigation, and teleprompting. To some extent, everything smart glasses do besides photos, video, and audio can be covered by a smartwatch. Going forward, lightweight chips plus long battery life will be the core of whether people pay;
AI glasses are basically a take-on, take-off product: you wear them when needed and remove them when not. How far the category goes depends on whether users use them enough.
08 AI headphones?
At present, phone makers, audio brands, Internet giants, and technology companies are all developing AI headphones. The evolution of language models to the GPT-4o stage has propelled the category, and headphones have added AI voice interaction.
The first group is traditional phone makers such as Huawei, Xiaomi, and Samsung, whose AI headsets are usually bundled with phone sales;
The second group is branded audio makers that already build headphones, speakers, and other hardware. They connect external companies' large models, or self-developed models, through a companion app to enable translation, recording-plus-transcription, and similar features. These headphones are tightly bound to sports or meeting scenarios and are strongly functional; some build in ChatGPT and become standalone hardware.
The third group is Internet giants and technology companies, such as the Ola Friend headphones ByteDance released after acquiring the headphone brand Oladance. They connect to ByteDance's Doubao assistant, extending Doubao's functions into daily companionship scenarios. The standard AI-headphone configuration is a productivity tool that extends translation and voice transcription in specific scenarios such as meetings and business;
However, AI headphones are not smart enough and depend on the phone; without it, the AI functions do not work. Users have also not formed the habit of chatting with their headphones, and ear-canal discomfort makes the experience poor. The possibility, and the necessity, of headphones becoming standalone hardware are both low; it is not as if phone makers cannot integrate the same capabilities into their ecosystems. The business model is basically subscription features. This track still has to fight to find the real demand;
09
The final test for AI hardware products is the supply chain
With software, code runs as soon as it is written and ships once tested. Hardware involves structural design, manufacturability, process design, software and compute architecture, electrical design, and much more, all of which determine the efficiency, cost, and quality of replication. Add three parts to a device and the entire chain changes, and so does the cost; hasty delivery simply produces large-scale return costs;
Hardware is also the carrier that directly determines a product's manufacturability, cost, gross margin, and reliability. Rabbit R1, for example, has a 2.88-inch touchscreen and a MediaTek processor; the remaining components, a camera, scroll wheel, microphone, and button, are all mature parts Huaqiangbei could assemble at will. But endless details hide behind tooling, product structure, and function definition, and Rabbit ignored the hardware moat. Its headline LAM concept is essentially just an extension of LLM innovations such as GPT-4, and whether the LAM is usable depends largely on LLM quality. In the end, it died in the open;
10 The core of AI industry chain
The upstream of the AI server industry chain is composed of component manufacturers, including chips, PCBs, power supplies, cooling modules, etc.; specifically, they are divided into components (integrated circuits, chips, optical devices, RF devices), ICT infrastructure (services, switches, routers, base stations), and other hardware equipment (power supply equipment, air conditioning systems, cameras, sensors).
The midstream is AI server manufacturers, who integrate and assemble chips into server hardware and add necessary network and storage devices to form a complete AI server solution; specifically divided into data centers, edge computing, computing power networks, IDC services, cloud computing, computing power security, etc.
The downstream is various application markets, including Internet companies, cloud computing companies, data center service providers, government departments, financial institutions, medical fields, telecom operators, etc. Specifically, it includes public users and government and enterprise users, covering the Internet, finance, public utilities, telecommunications and other application fields.
Related Companies
Cloud vendors/big models (Amazon, Microsoft, Google, Facebook; Alibaba, Tencent, Baidu, SenseTime)
Chips (Nvidia, Intel, Qualcomm; Haiguang Information, Cambrian)
Chip + Network Equipment (Broadcom, Marvell Technology)
Network equipment + server (Supermicro, Dell, Lenovo; Foxconn, ZTE, Inspur, Gongjin, Sugon, Unisplendour, High-tech Development, Sichuan Changhong, Digital China, Tuowei, China Great Wall, FiberHome)
Optical modules (including chips, device components and structural parts: Zhongji Xuchuan, Xinyisheng, Tianfu Communication, Yuanjie Technology, Taichen Optoelectronics, Accelink Technology, Broadcom Technology, Cambridge Technology, Liante Technology, Mentech Optoelectronics)
Platform Layer
Cloud computing (Sugon, Inspur, Sangfor, SingNet, Wangsu Technology, AtHub, Aofei Data, UCloud, Capital Online, Tongniu Information)
Network security (Sangfor, Venustech, Qi'anxin, Dianke Network Security, SDIC Intelligent, Deepin Technology, Digital Certification, Topsec, Beixinyuan, NSFOCUS, AsiaInfo, Jida Zhengyuan, Ahnheng Information, Sanwei Security, Geer Software, Yongxin Zhicheng, Anbotong, Xinan Century, Shengbang Security, Shanshi Network Technology)
Data elements (Yinzhijie, Shengyibao, Yihualu, Tongxingbao, Shanghai Ganglian, COSCO Shipping Technology, Yunsai Zhilian, Cape Cloud, Dongfang Guoxin, Guoxin Health, Jiuyuan Yinhai, Shenzhen Sanda A, Borui Data)
Solution providers (Weimo Group, China Software International)
AI Application Layer
AI glasses (Goertek, Longqi Technology, Jiahe Intelligent, Yidao Information, Tianjian, Hengxuan Technology, Actions Technology, Zhongke Bluexun, Fuliwang, Bos Eyewear, Rockchip, BIWIN Storage, Dongshan Precision, Changying Precision, Desay Battery, AAC Technologies)
AI applications (Mobvista, EasyPoint, BlueFocus, Tomcat, Kunlun, Inspur, Meitu)
AI Education (iFlytek, Tianyu Digital Technology, Jicheng Electronics, Zhizhen Technology, Yaowang Technology, Aoto Electronics)
AI Media (Visual China, Caesar Media, Perfect World, Aofei Entertainment, Gravity Media, Mango Media, Huace Film & TV, Chinese Online, Enlight Media, iReader Technology, Worth Buying)
AI games (Giant Interactive, Kaiying Network, Gigabit, Shengtian Network, Baotong Technology, 37 Interactive Entertainment, Century Huatong, Giant Interactive, Kunlun Wanwei, Youzu Interactive, Palm Technology, Perfect World)
AI Office (Kingsoft Office, Foxit Software)
AI mobile phones (Dao Ming Optics, ZTE, Victory Giant Technology, Nanxin Technology, Transsion Holdings, Wingtech Technology, Unigroup Guoxin)
In summary, AI + hardware is not a new story. AI large model + hardware is a definite industry upgrade and refresh of existing categories and existing hardware. Overall it is hardware-led: AI is just a foundational technical capability that comprehensively enhances current hardware categories. The core competitive dimension of AI hardware is the competition between hardware industries and categories; for any specific product form, it comes down to the incremental value AI adds to the product.
AI large models are not yet mature, edge AI chips are still early, and the industry ecosystem is embryonic. At this stage, AI + hardware is an opportunity to upgrade mature categories; real innovation is still far off, and the odds of a genuinely new hardware category emerging are small.