🚨 BREAKING: NVIDIA just revealed its roadmap for physical AI, robotics, and national-scale AI factories. Most people don't know it: in 2016, they built their first AI supercomputer with zero customers. OpenAI was the first to say yes. Here's a breakdown of the most important announcements from #GTCParis: 🧵👇
1/ 🚨 130 TB/s of bandwidth

The Grace Blackwell NVL72 (one giant GPU) connects 72 Blackwell GPUs into a single system, moving more data per second than the entire global internet.
2/ Digital twins

"Everything physical will be built digitally first." That's what Jensen said on stage at GTC Paris. From cars to factories... tested, optimized, and perfected in simulation. AI + digital twins are becoming the new foundation of how we build.
3/ AI factories are coming.

Stargate is just one example, set to hold over 5,000 GPU dies. Jensen called them "factories for intelligence," and that's exactly what they are: massive physical racks that generate, reason, and run nonstop, powered by Blackwell, NVLink, and liquid-cooled systems.
4/ Agentic AI will be the next big shift.

We're moving from static models to systems that can observe, reason, act, and improve. These agents don't just respond; they reflect, retry, and make real-world decisions. AI is no longer a tool. It's becoming a teammate.
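To make "observe, reason, act, improve" concrete, here's a minimal toy loop in Python. It's a sketch only: the reason/act/reflect functions are hypothetical stand-ins for a real model call and real tools, not any specific NVIDIA or OpenAI framework.

```python
# Toy sketch of an agentic loop: observe -> reason -> act -> reflect, then retry.
# reason(), act(), and reflect() are hypothetical stand-ins, not a real API.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # which tool the agent chose
    result: str   # what came back from the tool

def reason(goal: str, history: list[Step]) -> str:
    # Stand-in for an LLM call: pick the next action from the goal + history.
    return "search" if not history else "summarize"

def act(action: str, goal: str) -> str:
    # Stand-in for real tool execution (web search, code run, API call, ...).
    return f"ran {action} for '{goal}'"

def reflect(history: list[Step]) -> bool:
    # Stand-in for self-critique: here, stop once we've searched and summarized.
    return len(history) >= 2

def agent_loop(goal: str, max_steps: int = 5) -> list[Step]:
    history: list[Step] = []              # observe: the context carried between steps
    for _ in range(max_steps):
        action = reason(goal, history)    # reason: decide the next move
        result = act(action, goal)        # act: execute it
        history.append(Step(action, result))
        if reflect(history):              # reflect: done, or retry with more context?
            break
    return history

if __name__ == "__main__":
    print(agent_loop("summarize the GTC Paris keynote"))
```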
5/ CUDA-Q is now available for Grace Blackwell.

It brings quantum and classical computing into a single workflow. You can now write hybrid code that runs across CPUs, GPUs, and QPUs seamlessly.
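For a sense of what that hybrid workflow looks like, here's a minimal sketch using the cudaq Python package: a Bell-state kernel sampled on the GPU-accelerated "nvidia" simulator target. Pointing set_target at a QPU backend instead is what would route the same kernel to real quantum hardware; this is an illustrative snippet, not NVIDIA's official sample.

```python
# Minimal hybrid CPU/GPU/QPU-style workflow with CUDA-Q's Python API.
import cudaq

@cudaq.kernel
def bell():
    q = cudaq.qvector(2)     # allocate two qubits
    h(q[0])                  # put the first qubit in superposition
    x.ctrl(q[0], q[1])       # entangle with a CNOT
    mz(q)                    # measure both qubits

# Classical host code picks the backend; here, NVIDIA's GPU-accelerated simulator.
cudaq.set_target("nvidia")

counts = cudaq.sample(bell, shots_count=1000)
print(counts)                # expect roughly 50/50 between '00' and '11'
```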
6/ Sovereign AI in progress @NVIDIAAI

France, Italy, Spain, the UK, Finland, and Germany are all building local AI infrastructure. NVIDIA-backed partnerships with Mistral, NAISS, the Barcelona Supercomputing Center, and others will build region-specific LLMs using Nemotron + DGX Cloud.

7/ @NVIDIA and Hugging Face have teamed up.

With DGX Cloud Lepton, developers get instant access to a global GPU network, no complex setup required. You can train, fine-tune, and deploy models at scale, all with minimal overhead. Built for speed, flexibility, and open collaboration.
Beyond the announcements, the highlight for me was meeting Jensen Huang, CEO of NVIDIA, an absolute honor. This is the man whose vision and perseverance helped birth an entire era of AI, starting with GPUs for graphics, then CUDA for compute, and now systems that power everything from ChatGPT to self-driving factories.

In 2012, it was NVIDIA's chips that helped launch AlexNet and the deep learning revolution. Back then, the idea of a GPU powering intelligence felt wild... Today, it's the foundation.

Another powerful moment for me was a story he shared: when NVIDIA launched its first AI supercomputer (DGX-1), no one wanted it. No customers. No fanfare... Just one company in San Francisco asked for it. That company? OpenAI.

That level of belief, patience, and long-term thinking is rare. It reminded me why I got into this space: to build things that might not make sense yet, but will move the world forward.

Still feeling inspired. Still thinking about what's possible. We're stepping into an era where compute, creativity, and intelligence blend together, and this moment reminded me that the best work often starts quietly, with someone who sees it early and keeps building. It's not just tech. It's infrastructure for the future.

And what stood out most? Jensen's humility and clarity. No fluff or overhype. Just deep conviction and an obsession with building what the world doesn't yet know it needs. We're not just scaling AI. We're reshaping how nations operate, how companies create, and how people solve problems. Truly grateful for the chance to witness this moment.



