CES 2025: Jensen Huang's Keynote Address! Nvidia's Market Value Reaches All-Time High

Technology · Author: Danyun XIAO (Contributor) · Editor: Danyun XIAO · Jan 08, 2025 02:16 PM (GMT+8)

At the 2025 Consumer Electronics Show (CES), Jensen Huang, founder and CEO of Nvidia, delivered the opening keynote address, showcasing several of Nvidia's latest advances in AI and computing: the next-generation RTX 50 series GPUs based on the Blackwell architecture, the Grace Blackwell NVLink72 superchip system, the Cosmos world foundation model platform, the first personal AI supercomputer, and Nvidia's strategic moves in agentic AI, physical AI, and cross-industry applications.

Below is a summary of the keynote highlights.

Three Stages of AI Development: Perception AI, Generative AI, and Agentic AI

Nvidia's work has been closely linked with graphics computing since the company's founding in 1993. Its first product, the NV1 multimedia accelerator, marked the beginning of its GPU research, and in 1999 Nvidia invented the programmable GPU, laying the foundation for modern computer graphics and artificial intelligence. AI truly took off in 2012, when researchers Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton used Nvidia's CUDA platform (introduced in 2006) to train the groundbreaking convolutional neural network AlexNet, sparking a revolution in deep learning. This marked the beginning of the “Perception AI” era, in which artificial intelligence focused primarily on image, text, and speech recognition.

In 2018, Google released BERT, a language model built on the Transformer architecture introduced the year before. It completely changed the landscape of machine learning and triggered a leap in computational demand, allowing AI technologies to handle increasingly complex and multimodal data, from images and sound to amino acids and physics data. This innovation ushered in the “Generative AI” era, opening new possibilities for generating and creating images, text, and audio.

Currently, AI is progressing into a new stage, “Agentic AI,” in which AI systems gradually develop capabilities for reasoning, planning, and task execution, and can even make decisions and predictions through physical simulation.

Highlight 1: GeForce RTX 50 Series Blackwell Architecture GPUs

As AI technology continues to evolve, demand for computational power has grown exponentially. To meet the high-performance needs of AI computation, Jensen Huang officially launched Nvidia's latest-generation GPUs, the RTX 50 series. The core innovation of the RTX Blackwell GPUs lies in their immense computing power and their optimization for AI workloads. The flagship features 92 billion transistors and delivers 4,000 AI TOPS (trillion operations per second), three times the performance of the previous Ada architecture. To handle this tremendous computational demand, the Blackwell series also provides 380 teraflops of ray-tracing compute, enabling the generation of incredibly detailed and realistic images, particularly in real-time ray tracing, where it efficiently processes extremely complex scenes.

Additionally, the new GPU architecture incorporates several innovations, including Micron's GDDR7 memory, which provides 1.8 TB per second of bandwidth, nearly twice the memory bandwidth of the previous generation. The Blackwell series' programmable shaders not only excel at traditional graphics tasks but also efficiently process complex neural networks, accelerating the inference and training of AI models. These breakthroughs significantly enhance computational efficiency while drastically reducing energy consumption for AI applications.

Huang emphasized that the RTX Blackwell series delivers exceptional performance not only in desktop GPUs; Nvidia has also compressed this computing capability into laptops. For instance, laptops equipped with RTX 50 series GPUs, weighing just 1.5 kg and measuring only 14.9 mm thick, can offer performance comparable to desktop RTX 40 series GPUs. This is made possible by AI-driven innovations built on Tensor Core technology, which generate pixels in real time with AI, greatly improving rendering efficiency and extending battery life through energy-efficiency breakthroughs.

According to Huang's keynote, Nvidia has already begun mass production of the new GeForce RTX 50 series GPUs, including the GeForce RTX 5090, RTX 5080, RTX 5070 Ti, and RTX 5070, priced at $1,999, $999, $749, and $549 respectively, with availability beginning on January 30, 2025. These GPUs will let users experience more powerful AI acceleration and graphics computing at more competitive prices.


Huang also pointed out that, within the same data-center power envelope, the Blackwell architecture achieves a fourfold increase in performance per watt, meaning it completes more computational work for the same energy consumption, greatly enhancing the efficiency and profitability of data centers. Blackwell's performance per dollar has tripled, leading to a significant reduction in AI training costs: for the same budget, an AI research company can scale its models threefold, speeding up development cycles and bringing products to market faster, thereby improving customer experiences.
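The efficiency arithmetic above is simple enough to sketch directly. The snippet below merely restates the keynote's fourfold performance-per-watt and threefold performance-per-dollar claims with normalized, hypothetical numbers (the budget figure is invented for illustration):

```python
# Illustrative arithmetic for the performance-per-watt and per-dollar claims
# above; figures are normalized placeholders, not measured benchmarks.

def tasks_per_joule(tasks_per_second: float, watts: float) -> float:
    """Computational work completed per joule of energy."""
    return tasks_per_second / watts

# Same power budget, 4x performance per watt:
ada_tps, blackwell_tps = 1.0, 4.0      # normalized throughput
power = 1.0                            # identical wattage
assert tasks_per_joule(blackwell_tps, power) == 4 * tasks_per_joule(ada_tps, power)

# 3x performance per dollar: a fixed training budget buys 3x the compute,
# so a model can be scaled roughly 3x without raising cost.
budget = 300_000.0                     # hypothetical training budget, USD
ada_compute_per_dollar = 1.0
blackwell_compute_per_dollar = 3.0
scale_factor = (budget * blackwell_compute_per_dollar) / (budget * ada_compute_per_dollar)
print(scale_factor)  # 3.0
```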

Highlight 2: NIM and NeMo Accelerate the Development of Agentic AI Solutions

In addition to the revolutionary breakthroughs in hardware, Nvidia has also accelerated the construction and deployment of AI systems for enterprises, industries, and everyday life around the concept of “Agentic AI.” Agentic AI refers to systems in which multiple AI models work collaboratively, leveraging deep learning and inference to provide personalized services. To support the widespread application of agentic AI, Nvidia introduced NIM (Nvidia Inference Microservices), which packages complex AI software (such as cuDNN, CUTLASS, and TensorRT) into portable microservices. Developers can integrate these microservices into their applications to quickly build AI agents. NIM offers a broad library of APIs and pretrained models across various domains, including natural language processing, image recognition, and speech analysis. With these modular tools, businesses can flexibly adjust an AI agent's functionality and deploy it across different business scenarios, such as customer support, data analysis, and decision optimization.
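As a rough illustration of the microservice model, NIM endpoints expose an HTTP chat API in the OpenAI-compatible style. The sketch below only assembles and prints a request payload; the endpoint URL and model name are placeholder assumptions, not a real deployment:

```python
import json

# Hypothetical sketch of calling a NIM-style microservice. The URL and model
# identifier below are assumed placeholders for a local deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-compatible chat payload for a NIM-style endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta/llama-3.1-8b-instruct", "Summarize this ticket.")
print(json.dumps(payload, indent=2))

# Sending it requires a running endpoint, e.g.:
#   import urllib.request
#   req = urllib.request.Request(NIM_URL, data=json.dumps(payload).encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```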

On top of NIM, Nvidia launched the NeMo framework, a comprehensive system for training and evaluating digital employees, aimed at helping enterprises manage the onboarding, training, and assessment of AI agents. The NeMo framework supports training AI agents within specific business domains and customizing them to enterprise needs, ensuring better integration into company workflows.


Currently, Nvidia has introduced a series of open AI models tailored for enterprises, based on Meta's Llama models and refined with Nvidia's own NeMo framework. Since its release by Meta, Llama 3.1 has become one of the most downloaded AI models globally and is already used by thousands of companies to train and deploy various AI applications. Nvidia's AI team has deeply optimized these models to better meet the specific needs and application scenarios of businesses. Huang emphasized, “The Llama model itself is already very powerful, but we've further optimized it through the NeMo framework to better cater to enterprise requirements. This optimization allows businesses to perform more accurate semantic understanding, text generation, and data analysis.” After fine-tuning, these optimized models will provide enterprises with more efficient and intelligent AI solutions across visual recognition, language understanding, speech generation, and other fields.

In addition to breakthroughs in data centers and enterprise applications, Nvidia is bringing AI technology to personal computing through collaboration with Microsoft to launch an AI development environment based on Windows Subsystem for Linux 2 (WSL 2). WSL 2 supports CUDA technology, enabling developers to seamlessly run Nvidia’s AI models and computing tools in a Windows environment. As this innovative technology spreads, millions of PCs worldwide will become AI computing terminals, further promoting the adoption of AI technology and broadening its application, bringing AI’s powerful capabilities into the daily lives of individual and home users.

Highlight 3: Cosmos, the First World Foundation Model Platform

Currently, Nvidia’s continued innovations in hardware, software, simulation, and data generation are accelerating the maturation of digital twins, simulation technology, and AI, thereby fundamentally reshaping the operational models of multiple industries. Huang predicted that the robotics and autonomous driving industries will experience large-scale commercialization in the coming years, becoming a trillion-dollar global industry.

In the final part of the keynote, Huang showcased Nvidia's latest breakthroughs in industrial automation and robotics, including a partnership with KION to use the Mega platform and Omniverse to build industrial digital twins, providing revolutionary solutions for optimizing and testing robot fleets. Through Omniverse, Nvidia digitalizes physical warehouse environments, using OpenUSD connectors to transform camera images, LiDAR data, and AI-generated data into 3D models in which robots can operate autonomously. By perceiving and reasoning within this digital twin environment, robots can plan their next actions and adjust their behavior through sensor feedback, creating a continuous loop of learning and execution. The combination of Omniverse and Cosmos enables physically accurate digital twins and simulations, providing robot systems with three core computers: one for training AI (DGX), one for deploying AI (AGX), and one for simulating and testing AI within the digital twin system.
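The perceive-reason-act loop described above can be sketched in miniature. Everything here, the one-dimensional "warehouse," the sensor, and the planner, is invented for illustration; a real Omniverse digital twin feeds far richer sensor data into learned policies:

```python
# Toy closed loop: sense the environment, plan an action, act, then re-sense.
# The 1-D position/goal world is a deliberately simplified stand-in.

def sense(position: int, goal: int) -> int:
    """Simulated sensor: signed distance from the robot to its goal."""
    return goal - position

def plan(distance: int) -> int:
    """Choose the next action (step direction) from the sensed distance."""
    return 0 if distance == 0 else (1 if distance > 0 else -1)

def run_episode(start: int, goal: int, max_steps: int = 100) -> list[int]:
    """Execute the sense-plan-act loop until the goal is reached."""
    position, path = start, [start]
    for _ in range(max_steps):
        action = plan(sense(position, goal))
        if action == 0:
            break
        position += action          # act, then re-sense on the next iteration
        path.append(position)
    return path

print(run_episode(2, 6))  # [2, 3, 4, 5, 6]
```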


Based on this, Nvidia introduced the Cosmos world foundation model platform, designed to understand the physical world. It is trained on large-scale physical-dynamics data (some 20 million hours of video) and, combining autoregressive and diffusion models, generates high-quality, highly realistic synthetic data. The Cosmos platform can handle complex real-world environments and physical interactions, providing developers with the core tools needed to build intelligent, precise models in fields like autonomous driving, robotics, and industrial AI. Deep integration with the Omniverse platform allows Cosmos not only to generate virtual environments of the physical world in real time but also to accelerate reinforcement learning and multimodal model training, advancing robotics technology, smart manufacturing, and the future industrial revolution.
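To make the autoregressive half of that recipe concrete, here is a toy generator in which each new sample is conditioned on the sequence so far. The 1-D "trajectories" and uniform-noise "model" are invented stand-ins; Cosmos operates on video with learned autoregressive and diffusion models at vastly larger scale:

```python
import random

# Minimal sketch of autoregressive synthetic-data generation: each new value
# is produced conditioned on the running history. Purely illustrative.

def generate_trajectory(length: int, seed: int = 0) -> list[float]:
    """Autoregressively extend a sequence: next = previous + bounded noise."""
    rng = random.Random(seed)
    traj = [0.0]
    for _ in range(length - 1):
        step = rng.uniform(-1.0, 1.0)   # stand-in for a learned model's sample
        traj.append(traj[-1] + step)    # condition on the last state
    return traj

# A small synthetic dataset: 100 trajectories of 50 steps each.
synthetic_dataset = [generate_trajectory(50, seed=i) for i in range(100)]
print(len(synthetic_dataset), len(synthetic_dataset[0]))  # 100 50
```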

Autonomous Driving

In the autonomous driving field, Nvidia introduced its next-generation in-vehicle computing platform, Thor. Thor can process massive streams of data from various sensors and accurately predict the driving path of autonomous vehicles. Its computing power is 20 times that of the previous generation, and it has entered full-scale production, making it one of the core technologies for global autonomous vehicle development. Additionally, Nvidia announced that its DriveOS has been assessed to ASIL D, the highest functional-safety level defined by ISO 26262, which Nvidia says makes it the world's first programmable AI computer to reach that level, underpinning the safety of autonomous driving technology. Nvidia has also formed strategic partnerships with global automakers including Toyota and BYD, and autonomous driving technology is expected to develop rapidly in the coming years. Through the Nvidia Isaac platform's simulation and synthetic data generation technologies, Nvidia is also providing robot developers with the tools and frameworks to train smarter, more efficient robotic systems.
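As a heavily simplified stand-in for the kind of path prediction described above, the sketch below extrapolates future positions from two timestamped fixes under an assumed constant-velocity model; a real stack fuses many sensors with learned predictors:

```python
# Hedged illustration of path prediction from timestamped (t, x, y) samples.
# A constant-velocity model is assumed purely for illustration.

def predict_path(samples: list[tuple[float, float, float]],
                 horizon: int, dt: float) -> list[tuple[float, float]]:
    """Extrapolate future (x, y) points from the two most recent fixes."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    vx = (x1 - x0) / (t1 - t0)          # estimated velocity components
    vy = (y1 - y0) / (t1 - t0)
    return [(x1 + vx * k * dt, y1 + vy * k * dt) for k in range(1, horizon + 1)]

fixes = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0)]     # moving +2 m/s along x
print(predict_path(fixes, horizon=3, dt=1.0))  # [(4.0, 0.0), (6.0, 0.0), (8.0, 0.0)]
```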

“AI technology not only has a profound impact on enterprise applications but is also fundamentally changing the way each of us lives,” Huang stated during the keynote. “As GPUs and AI computing power continue to evolve, we are entering a new era where artificial intelligence will become the core driver of social progress and economic growth.”

Furthermore, Huang announced the personal AI supercomputer Project Digits, priced at $3,000 and expected to ship around May. It can serve as a small workstation or work alongside an existing computer. Huang noted that AI will become mainstream across industries and applications; with Project Digits and the Grace Blackwell superchip inside it, millions of developers will be able to develop, test, and deploy AI models locally, further advancing the AI era.


While Huang was delivering his keynote, Nvidia's market value reached an all-time high.


Source: stockanalysis