Nvidia Faces Much Tougher Competition in Artificial Intelligence, but Will Still Remain a Major Player
Source: Eric Jhonsa
Nvidia Corp. (NVDA) is set to face a much tougher competitive environment in the white-hot market for server co-processors used to power artificial intelligence projects, as the likes of Intel Corp. (INTC), Advanced Micro Devices (AMD), Fujitsu and Alphabet Inc./Google (GOOGL) join the fray. But the ecosystem that the GPU giant has built in recent years, together with its big ongoing R&D investments, should allow it to remain a major player in this space.
This column originally appeared on Real Money, our premium site for active traders.
It's a basic rule of economics that when a market sees a surge in demand that lets a small number of suppliers amass huge profits, more suppliers will enter in hopes of getting a chunk of those profits. That's increasingly the case for the server accelerator cards used for AI projects, as a surge in AI-related investments by enterprises and cloud giants contributes to soaring sales of Nvidia's Tesla server GPUs.
Thanks partly to soaring AI-related demand, Nvidia's Datacenter product segment saw revenue rise 186% annually in the company's April quarter to $409 million, after rising 205% in the January quarter. Growth like that doesn't go unnoticed. Over the last 12 months, several other chipmakers and one cloud giant have either launched competing chips or announced plans to do so.
To understand why some of these rival products could be competitive with Tesla GPUs on a raw price/performance basis, it's important to understand what made Nvidia's chips so popular for AI workloads in the first place. Whereas server CPUs, like their PC and mobile counterparts, feature a small number of relatively powerful CPU cores -- the most powerful chip in Intel's new Xeon Scalable server CPU line has 28 cores -- GPUs can feature thousands of smaller cores that work in parallel, and which have access to blazing-fast memory.
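To make that parallelism concrete, here's a minimal sketch using PyTorch (a framework the article doesn't mention; an assumption for illustration). It runs the same large matrix multiplication -- the core operation in deep learning -- on a CPU and, if one is present, on a GPU, where thousands of cores split the work:

```python
# Minimal sketch: the same matrix multiplication on CPU vs. GPU.
# Assumes PyTorch is installed; the GPU branch runs only if CUDA is available.
import time
import torch

x = torch.randn(4096, 4096)
y = torch.randn(4096, 4096)

# CPU: a handful of powerful cores work through the multiply.
start = time.time()
z_cpu = x @ y
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    # GPU: thousands of smaller cores attack the same multiply in parallel,
    # fed by high-bandwidth memory.
    x_gpu, y_gpu = x.cuda(), y.cuda()
    torch.cuda.synchronize()      # wait for the copies to finish
    start = time.time()
    z_gpu = x_gpu @ y_gpu
    torch.cuda.synchronize()      # GPU work is asynchronous; wait for it
    print(f"GPU: {time.time() - start:.3f}s")
```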
That gives GPUs a big edge for projects involving a subset of AI known as deep learning. Deep learning involves training models that attempt to function much as neurons in the human brain do, detecting patterns in content such as voice, text and images. Like the human brain, the models' algorithms get better both at understanding these patterns as they take in more content and at applying what they've learned to future tasks. Once an algorithm has gotten good enough, it can be put to work on real-world content in an activity known as inference.
Inference algorithms don't always require a ton of processing power. GPUs can do a good job of handling them, and Nvidia has certainly been trying to grow its exposure to this field, but a lot of server-side inference work is still done using Intel's Xeon CPUs. And Apple Inc. (AAPL), citing privacy concerns, prefers to run AI algorithms against user data directly on iOS devices.
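A rough sketch of why inference is comparatively cheap: it's a single forward pass through an already-trained model, with none of training's bookkeeping, so it runs comfortably on a CPU. (The model and sizes below are hypothetical, not from the article.)

```python
# Sketch of inference: one forward pass through a trained model, no gradients.
import torch
import torch.nn as nn

# Hypothetical small classifier standing in for a trained model.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()                      # inference mode: disables dropout, etc.

batch = torch.randn(32, 256)      # 32 incoming real-world samples
with torch.no_grad():             # no backpropagation bookkeeping needed
    scores = model(batch)
    predictions = scores.argmax(dim=1)
print(predictions)
```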
However, training a deep learning model to create an algorithm that's good at making sense of the data it's shown -- for example, translating text or detecting stop signs and traffic lights for an autonomous driving system -- can be very computationally demanding. In training, thousands or even millions of artificial neurons, split into "layers" responsible for different tasks, communicate with neurons in other layers to gauge the likelihood that a particular judgment made about the data being analyzed (e.g., whether an image shows a stop sign) is accurate.
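Here's a hedged sketch of that training loop, again in PyTorch with hypothetical shapes and a made-up "is this a stop sign?" task: layers of artificial neurons produce a judgment, the loss measures how wrong it was, and the corrections flow back through the layers, over and over.

```python
# Sketch of training: layers of artificial neurons refine a judgment
# (here, a hypothetical "is this a stop sign?" likelihood) as errors
# are pushed back through the network. Data and shapes are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(              # each Linear layer is one "layer" of neurons
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),               # final neuron: stop-sign likelihood (logit)
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(64, 1024)                  # fake image features
labels = torch.randint(0, 2, (64, 1)).float()   # 1 = stop sign, 0 = not

for step in range(100):             # repeated passes make the model better
    logits = model(images)
    loss = loss_fn(logits, labels)  # how wrong were the judgments?
    optimizer.zero_grad()
    loss.backward()                 # send corrections back through the layers
    optimizer.step()
```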
By using clusters of Tesla GPUs that each have thousands of cores to split up the work of all these artificial neurons operating in parallel, AI researchers can train a deep learning model much faster than they could using server CPUs that have fewer than 30 cores. It also helps that Nvidia's high-end Tesla GPUs are good at the kind of complex math deep learning algorithms perform, and can provide a model with tons of memory bandwidth and a high-speed chip-to-chip interconnect (known as NVLink) for communication.
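One common way to fan the work out across several GPUs is data parallelism, sketched below with PyTorch's nn.DataParallel (an assumed tool, not named in the article): each GPU gets a copy of the model and a slice of every batch, with NVLink or PCIe carrying the chip-to-chip traffic underneath.

```python
# Sketch of splitting training work across multiple GPUs. nn.DataParallel
# copies the model to each device and divides every batch among them.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # fan the work out across all GPUs
if torch.cuda.is_available():
    model = model.cuda()

batch = torch.randn(512, 1024)
if torch.cuda.is_available():
    batch = batch.cuda()
output = model(batch)                # each GPU processes a slice of the batch
```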