Nvidia Debuts Tesla P100 Accelerator With 15B Transistors for AI, Deep Learning

By Gadgets 360 Staff | Updated: 6 April 2016 13:14 IST

Nvidia CEO Jen-Hsun Huang used the opening keynote of the company's annual GPU Technology Conference to announce a massive new processor designed specifically for deep learning. The Tesla P100 is the first shipping product to use Nvidia's new Pascal architecture, and is made up of 15.3 billion transistors, which the company says makes it the largest microchip ever fabricated.

The Tesla P100 is built on a new 16nm FinFET manufacturing process and pairs the GPU with 16GB of HBM2 memory integrated onto the same chip package, delivering memory bandwidth of up to 720GBps. Peak performance is rated at 21.2 Teraflops for half-precision workloads, 10.6 Teraflops for single-precision, and 5.3 Teraflops for double-precision. Up to eight Tesla P100 chips can be interconnected using Nvidia's NVLink bus.
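Those three rated figures are consistent with each other. A minimal sketch of how they follow from the P100's publicly reported shader configuration (3584 FP32 CUDA cores at a 1480MHz boost clock; these numbers are not stated in the article and are assumed here):

```python
# Deriving the Tesla P100's rated peak throughput.
# Core count and clock are assumed public specs, not from the article.

CORES_FP32 = 3584          # FP32 CUDA cores (assumed)
BOOST_CLOCK_HZ = 1.480e9   # boost clock in Hz (assumed)
FLOPS_PER_CORE = 2         # one fused multiply-add counts as 2 floating-point ops

fp32_tflops = CORES_FP32 * FLOPS_PER_CORE * BOOST_CLOCK_HZ / 1e12
fp64_tflops = fp32_tflops / 2   # Pascal GP100 runs FP64 at half the FP32 rate
fp16_tflops = fp32_tflops * 2   # packed FP16 runs at twice the FP32 rate

print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~21.2
print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~10.6
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ~5.3
```

The 2:1 and 1:2 throughput ratios between precisions are a design choice of the GP100 chip; earlier consumer Pascal parts use different ratios.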


The Tesla P100 is claimed to deliver over 12x the performance of Nvidia's previous-generation Maxwell architecture in neural network training scenarios. Nvidia says specific applications, such as the AMBER molecular dynamics code, run faster on a single Tesla P100 server node than on 48 dual-socket CPU server nodes.

Huang also said that the company has decided to "go all-in on AI", and that deep learning and artificial intelligence are the company's fastest-growing business area. He named several areas of research, including finding a cure for cancer and understanding climate change, which require computing resources that can scale infinitely.


Massachusetts General Hospital has set up a clinical datacentre which will use Nvidia's AI processing technology to help diagnose diseases starting with the fields of radiology and pathology, and will use its archive of 10 billion medical images to create a deep learning neural network.

The Tesla P100 will initially be available in Nvidia's new DGX-1 "deep learning supercomputer" in June, and in servers from a number of manufacturers beginning in early 2017. The DGX-1 will have eight Tesla P100 chips for a combined 170 Teraflops of half-precision performance, and is claimed to be able to deliver the deep learning throughput of 250 traditional x86 servers in a single 3U server enclosure.
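The 170-Teraflops figure quoted for the DGX-1 lines up with the per-chip rating above. A quick sanity check:

```python
# Sanity-checking the DGX-1 aggregate half-precision figure:
# eight P100s at the rated 21.2 TFLOPS each.
P100_FP16_TFLOPS = 21.2
GPUS_PER_DGX1 = 8

aggregate = P100_FP16_TFLOPS * GPUS_PER_DGX1
print(f"{aggregate:.1f} TFLOPS")  # 169.6, rounded up to 170 in Nvidia's figure
```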

