A new chapter for humanoid robots! Texas Instruments (TXN.US) teams up with NVIDIA (NVDA.US) to combine AI and sensing, igniting the “Physical AI” revolution.

Texas Instruments (TXN.US), the chip giant specializing in analog chips and embedded processing solutions and long known as the “global chip demand barometer,” has fully combined its real-time control, sensing, and power product portfolio with NVIDIA (NVDA.US)’s advanced robotics computing components, Ethernet-based sensor technologies, and proprietary simulation tools, providing major technical support for developers to build, deploy, and mass-produce humanoid robots and other endpoint devices collectively known as “physical AI.”

Based on current media coverage, the partnership between analog-chip powerhouse Texas Instruments and NVIDIA is expected to push humanoid-robot intelligent systems to a higher stage; it goes beyond surface-level “working together to build robots.” Their latest collaboration is better understood as building a more complete, secure, and more easily scaled robotics-intelligence foundation at the level of the underlying technology stack, providing tangible support as the industry advances the commercialization of humanoid robots.

As market expectations for combining massive AI inference workloads with real-world execution continue to heat up, the partnership between NVIDIA and Texas Instruments is not merely a layering of chips and sensing layers—it is a coordinated build from AI inference and real-time perception down to the underlying control systems, serving as an important foundation for humanoid robots to achieve real-world applications.

Texas Instruments’ General Manager of Industrial Automation and Robotics, Giovanni Campanella, said: “Texas Instruments’ comprehensive product portfolio bridges the gap between NVIDIA’s powerful AI computing capabilities and real-world applications, enabling developers to validate a complete humanoid-class operating system earlier.” He added: “This integrated approach will accelerate the evolution from product prototypes to commercially deployed humanoid machines, ensuring these robots can work safely alongside humans.”

NVIDIA has recently been working to bring cutting-edge AI technologies to broader domains, such as “physical AI” endpoint devices including robots and autonomous vehicles, so as to continue expanding demand and find new growth beyond its data-center business. In NVIDIA CEO Jensen Huang’s view, “physical AI” means enabling robots and autonomous systems to perceive, reason, and act in the real world, and an era in which physical AI helps drive the evolution of human civilization is about to arrive. Those three capabilities, perception, reasoning, and action, are the key toolchain for moving models from “just having a conversation” to “doing work in the physical world.”

Texas Instruments teams up with NVIDIA to coordinate the three hardest layers of robotic intelligent systems: low-level sensing, control, and AI inference

As part of this collaboration, Texas Instruments designed a sensor-fusion solution by combining its millimeter-wave radar technology with NVIDIA’s Jetson Thor robotics computing platform, using NVIDIA’s Holoscan Sensor Bridge to deliver low-latency 3D perception and safety awareness in support of humanoid-robotics development. The latest results from both sides will be showcased at NVIDIA’s highly anticipated GTC event taking place in San Jose, California, from March 16 to March 19.

Deepu Talla, Vice President of Robotics and Edge AI business at NVIDIA, said: “Safe operation of humanoid robots in unpredictable environments requires extremely powerful computing and processing capabilities, to synchronize ultra-complex AI models, real-time sensor data, and motor control systems.”

By fusing high-definition camera and radar data, the joint Texas Instruments-NVIDIA solution improves object detection, localization, and tracking while reducing false positives and strengthening humanoid robots’ real-time decision-making capabilities.
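To make the false-positive point concrete, here is a minimal sketch of one common camera-plus-radar fusion idea: a camera detection is confirmed only when a radar return falls within a gating distance of it, so camera-only artifacts (glare, reflections) are suppressed. This is an illustrative example, not the TI/NVIDIA implementation; all names and thresholds are assumptions.

```python
# Illustrative camera+radar fusion by distance gating (hypothetical values).
from dataclasses import dataclass
from math import hypot

@dataclass
class Detection:
    x: float  # lateral offset in metres
    y: float  # forward distance in metres

def fuse(camera: list, radar: list, gate_m: float = 1.0) -> list:
    """Keep only camera detections corroborated by a radar return within gate_m."""
    confirmed = []
    for cam in camera:
        if any(hypot(cam.x - r.x, cam.y - r.y) <= gate_m for r in radar):
            confirmed.append(cam)
    return confirmed

camera_hits = [Detection(0.2, 4.0), Detection(-3.0, 10.0)]  # second is a glare artifact
radar_hits = [Detection(0.1, 4.1)]                           # radar sees only the real object
print(fuse(camera_hits, radar_hits))  # only the corroborated detection survives
```

Production systems use far richer association (velocity gating, track-level fusion, probabilistic filters), but the gating step above is the basic mechanism by which a second modality cuts false alarms.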

Robotics experts widely believe that truly autonomous, general-purpose humanoid robots are still several years away, but systematic progress in perception, inference, and motion coordination is a necessary prerequisite for commercial deployment. The collaboration between Texas Instruments and NVIDIA is a key step in pushing the industry from the “algorithm and simulation verification” stage to the “safe real-world operation” stage. This will greatly help the industry improve overall development efficiency, strengthen system robustness, and ultimately shorten the path to mass production.

In robotics R&D, the sim-to-real gap has long been one of the biggest challenges: even if AI algorithms perform well in simulation, they may still fail in complex real-world environments. As a high-performance inference platform, NVIDIA Jetson Thor has already been used by multiple companies for robotics applications, while Texas Instruments’ control and sensing modules add direct capability for interacting with the physical world to this platform. Together, this will allow developers to validate system perception, motion, and safety earlier and more accurately, effectively shortening prototype-validation cycles and lowering iteration costs.
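One standard technique for narrowing the sim-to-real gap (a well-known method in the field, not something the article attributes to TI or NVIDIA) is domain randomization: each training episode samples physical parameters from wide ranges so a policy cannot overfit to one idealized simulator configuration. A minimal sketch, with illustrative parameter names and ranges:

```python
# Domain randomization sketch: per-episode sampling of simulator physics.
# All parameter names and ranges here are hypothetical examples.
import random

def randomized_sim_params(rng: random.Random) -> dict:
    """Sample physics parameters from broad, hand-chosen ranges for one episode."""
    return {
        "floor_friction":  rng.uniform(0.4, 1.2),   # slippery through grippy surfaces
        "payload_kg":      rng.uniform(0.0, 5.0),   # unknown carried mass
        "sensor_noise":    rng.uniform(0.0, 0.05),  # added perception noise
        "motor_latency_s": rng.uniform(0.0, 0.03),  # actuation delay
    }

rng = random.Random(0)
for _ in range(3):
    print(randomized_sim_params(rng))  # a different physics draw per episode
```

A policy trained across many such draws tends to treat real-world physics as just one more sample from the distribution, which is why the technique pairs naturally with the earlier hardware-in-the-loop validation the article describes.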

Texas Instruments will integrate its real-time controllers, sensors (such as millimeter-wave (mmWave) radar), and power-management technologies with NVIDIA’s high-performance robotics computing platform (Jetson Thor) and Holoscan Sensor Bridge, forming a complete end-to-end chain from sensing and control through inference computation. Compared with a traditional architecture that relies only on vision cameras plus GPU inference, this sensor-fusion solution achieves low-latency 3D perception and safety awareness, improving the robot’s ability to understand its environment in real time, an essential step toward practical, deployable systems.

When humanoid robots execute tasks, they need not only complex AI inference but also real-time sensor fusion, multi-joint motion control, and edge safety decisions, all of which must complete within extremely short time frames. Texas Instruments’ millimeter-wave radar and Ethernet bridging technology can help robots detect and track objects in difficult conditions (such as glass doors, strong or weak light, smoke, and dust) more reliably than traditional camera-based solutions. This improvement in the hardware sensing layer lays a solid foundation for real-world operation.

The next super wave of humanoid robots

Multiple technology companies headquartered in the United States are racing to develop humanoid robots. For example, Tesla (TSLA.US), led by Elon Musk, is developing a humanoid robot called Optimus, planned for both industrial and consumer uses.

Backed by Microsoft (MSFT.US) and OpenAI, Figure AI is trying to build a general-purpose humanoid robot capable of handling a wide variety of tasks. Figure AI says: “These robots can eliminate unsafe and unpleasant work, ultimately allowing human society to live happier and more meaningful lives.” Boston Dynamics, for its part, hopes its Atlas robot will “completely transform industrial work environments.”

Globally, from Tesla’s Optimus to Figure AI’s Helix system to other technology firms’ R&D efforts, all reflect the dense capital and industrial investment in this niche field. Current industry data shows that humanoid-robot prototypes of various types have made notable progress in functionality, perception, and motion control. Features such as bipedal balancing, environmental perception, and multimodal decision-making are gradually maturing. At the same time, ongoing improvements in supply-chain costs and the performance of key components have produced a competitive landscape in which multiple technical routes coexist. All of this is driving the transition from conceptual research to real-scenario pilots. This momentum indicates the industry is moving from a “hot hype period” to a stage of real technology accumulation and large-scale deployment, even though there is still a time window before widespread adoption. Market research institutions expect the market in this field to grow significantly over the next decade, with representative projects such as Tesla’s Optimus aiming to meet high-reliability and safety targets and to advance a mass-production plan in the coming years.

At present, the core driver of humanoid-robot R&D is the deep integration of AI perception, decision-making, and motion control. This includes using large models to understand language and visual information, optimizing decision-making with reinforcement learning, and fusing sensors (such as vision, radar, and force sensing). Such systems can not only walk in controlled environments but also perform higher-level tasks, such as logistics load handling, maintenance inspection, or service work in collaboration with humans. Institutions such as Morgan Stanley believe this kind of integrated technological breakthrough is key to making commercial deployment feasible. Morgan Stanley analysts expect the humanoid-robot market to ultimately exceed the traditional auto industry: they forecast that by 2050, the global humanoid-robot market’s annual revenue will exceed $5 trillion, and the number of humanoid robots could surpass 1 billion units.

However, Ken Goldberg, a professor and robotics expert at the University of California, Berkeley, said in a recent journal article that engineers still have a long way to go before they can manufacture humanoid robots with real-world skills.

Goldberg said: “We’re all very familiar with ChatGPT, and with the astonishing work it has done in vision and language, but most professional researchers are very uneasy about such analogies: namely, now that we’ve solved all these problems, we’re ready to solve the major problems related to humanoid robots, and it will happen next year. I’m not saying it won’t happen, but I’m saying it won’t happen in two years, five years, or even ten years. We just want to reset expectations to avoid creating a bubble that ultimately leads to a huge backlash.”
