Nvidia Launches Chip Aimed at Data Centre Economics

Semiconductor firm Nvidia on Thursday announced a new chip that can be digitally split up to run several different programs on one physical chip, a first for the company that matches a key capability on many of Intel’s chips.

The notion behind what the Santa Clara, California-based company calls its A100 chip is simple: Help the owners of data centres get every bit of computing power possible out of the physical chips they purchase by ensuring the chip never sits idle. The same principle helped power the rise of cloud computing over the past two decades and helped Intel build a massive data centre business.

When software developers turn to a cloud computing provider such as Amazon or Microsoft for computing power, they do not rent a full physical server inside a data centre. Instead, they rent a software-based slice of a physical server called a “virtual machine.”

Such virtualisation technology came about because software developers realised that powerful and pricey servers often ran far below full computing capacity. By slicing physical machines into smaller virtual ones, developers could cram more software on to them, similar to the puzzle game Tetris. Amazon, Microsoft and others built profitable cloud businesses out of wringing every bit of computing power from their hardware and selling that power to millions of customers.

But the technology has been mostly limited to processor chips from Intel and similar chips such as those from Advanced Micro Devices (AMD). Nvidia said Thursday that its new A100 chip can be split into seven “instances.”

For Nvidia, that solves a practical problem. Nvidia sells chips for artificial intelligence (AI) tasks. The market for those chips breaks into two parts. “Training” requires a powerful chip to, for example, analyse millions of images to train an algorithm to recognise faces. But once the algorithm is trained, “inference” tasks need only a fraction of that computing power to scan a single image and spot a face.

Nvidia is hoping the A100 can replace both: used whole as a single big chip for training, or split into smaller chips for inference.
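The article does not spell out how software addresses one of those slices. As a minimal illustrative sketch (assuming an A100 that has already been partitioned into instances, a machine with PyTorch installed, and a placeholder device identifier standing in for a real one reported by `nvidia-smi -L`), each slice appears to a program as its own GPU, so an inference job can be pinned to a single slice while training jobs use the full chip elsewhere:

```python
# Sketch only: pinning a process to one slice of a partitioned A100.
# Assumptions (not from the article): the GPU is already split into instances,
# PyTorch is installed, and the placeholder UUID below would be replaced by a
# real identifier listed by `nvidia-smi -L`.
import os

# Must be set before any CUDA library is loaded by the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

import torch

device = torch.device("cuda:0")               # this "GPU" is just one slice of the A100
model = torch.nn.Linear(512, 10).to(device)   # a small, inference-sized model
x = torch.randn(1, 512, device=device)
with torch.no_grad():
    print(model(x).shape)                     # the slice behaves like a normal, smaller GPU
```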

Customers who want to test the theory will pay a steep price of $200,000 (roughly Rs. 1.5 crores) for Nvidia’s DGX server built around the A100 chips. In a call with reporters, Chief Executive Jensen Huang argued the math will work in Nvidia’s favour, saying the computing power in the DGX A100 was equal to that of 75 traditional servers that would cost $5,000 (roughly Rs. 3.77 lakh) each.

“Because it’s fungible, you don’t have to buy all these different types of servers. Utilization will be higher,” he said. “You’ve got 75 times the performance of a $5,000 (roughly Rs. 3.77 lakh) server, and you don’t have to buy all the cables.”
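Huang's comparison can be sanity-checked with back-of-the-envelope arithmetic using only the figures he quoted; the per-slice number at the end is a derived illustration, not a price Nvidia has published:

```python
# Back-of-the-envelope check of Huang's comparison, using the figures quoted above.
dgx_price = 200_000      # DGX A100 price in dollars
server_price = 5_000     # price of one traditional server, per Huang
servers_replaced = 75    # Huang's claimed equivalent number of servers

traditional_cost = servers_replaced * server_price
print(traditional_cost)      # 375,000 dollars -- nearly double the DGX price
print(dgx_price / 7)         # roughly 28,571 dollars per slice if split into seven instances (illustrative)
```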

© Thomson Reuters 2020

