AI and the Future of Work: Artificial Intelligence in the Workplace, Business, Ethics, HR, and IT for AI Enthusiasts, Leaders and Academics

364: Inside the AI Infrastructure Race: TensorWave CEO Darrick Horton on Power, GPUs and AMD vs NVIDIA.

Dan Turchin Episode 364

Darrick Horton is the CEO and co-founder of TensorWave, a company making waves in AI infrastructure by building high-performance compute on AMD chips. In 2023, he and his team took the unconventional path of bypassing Nvidia, a bold bet that has since paid off with nearly $150 million raised from Magnetar, AMD Ventures, Prosperity7, and others. TensorWave now operates a dedicated training cluster of around 8,000 AMD Instinct MI325X GPUs and has already hit a $100 million revenue run rate.

Darrick is a serial entrepreneur with a track record of building infrastructure companies. Before TensorWave, he co-founded VMAccel, sold Lets Rolo to LifeKey, and co-founded the crypto mining company VaultMiner. 

He began his career as a mechanical engineer and plasma physicist at Lockheed Martin’s Skunk Works, where he worked on nuclear fusion energy. He studied physics and mechanical engineering at Andrews University but left early to pursue entrepreneurship, and he hasn’t looked back since.

In this conversation, we discussed:

  • Why Darrick chose AMD over Nvidia to build TensorWave’s AI infrastructure, and how that decision created a competitive advantage in a GPU-constrained market
  • What makes training clusters more versatile than inference clusters, and why TensorWave focused on the former to meet broader customer needs
  • How Neocloud providers like TensorWave can move faster and innovate more effectively than legacy hyperscalers in deploying next-generation AI infrastructure
  • Why power, not GPUs, is becoming the biggest constraint in scaling AI workloads, and how data center architecture must evolve to address it
  • Why Darrick predicts AI architectures will continue to evolve beyond transformers, creating constant shifts in compute demand
  • How massive increases in model complexity are accelerating the need for green energy, tighter feedback loops, and seamless integration of compute into AI workflows

Resources: