
NVIDIA Corp (NVDA) Q3 2020 Earnings Call Transcript


NVIDIA Corp (NASDAQ: NVDA)
Q3 2020 Earnings Call
Nov 14, 2019, 5:30 p.m. ET

Contents:

  • Prepared Remarks
  • Questions and Answers
  • Call Participants

Prepared Remarks:

Operator

Good afternoon, my name is Christina and I am your conference operator for today. Welcome to NVIDIA's Financial Results Conference Call. [Operator Instructions] Thank you.

I will now turn the call over to Simona Jankowski, Vice President of Investor Relations to begin your conference.

Simona Jankowski -- Vice President-Investor Relations

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2020. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I would like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the fourth quarter of fiscal 2020. The content of today's call is NVIDIA's property; it can't be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, November 14, 2019, based on information currently available to us.

Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

With that let me turn the call over to Colette.

Colette Kress -- Executive Vice President and Chief Financial Officer

Thanks, Simona. Q3 revenue was $3.01 billion, down 5% year-on-year and up 17% sequentially. Starting with our gaming business revenue of $1.66 billion was down 6% year-on-year and up 26% sequentially. Results exceeded our expectation, driven by strength in both desktop and notebook gaming.

Our GeForce RTX lineup features the most advanced GPU at every price point and uniquely offers hardware-based ray tracing for cinematic graphics. While ray tracing launched a little more than a year ago, two dozen top titles have shipped with it or are on the way. Ray tracing is supported by all the major publishers, including all-star titles and franchises such as Minecraft, Call of Duty, Battlefield, Watch Dogs, Tomb Raider, Doom, Wolfenstein and Cyberpunk. Of note, Call of Duty: Modern Warfare had a record-breaking launch in late October that came on the heels of Control, an action-adventure game with multiple ray-traced features. Reviews have praised both for their ray-tracing implementation and gameplay performance.

With last week's PC release of Red Dead Redemption II, we have a strong gaming lineup for the holiday season. Our business reflects this growing excitement: RTX GPUs now drive more than two-thirds of our desktop gaming GPU revenue.

Gaming laptops were a standout, driving strong sequential and year-on-year growth. This holiday season, our partners are addressing the growing demand for high-performance laptops for gamers, students and prosumers by bringing more than 130 NVIDIA-powered gaming and studio laptop models to market. This includes many thin-and-light form factors enabled by our Max-Q technology, tripling the number of Max-Q laptops from last year.

In late October, we announced the GeForce GTX 1660 SUPER and the 1650 SUPER, which refresh our mainstream desktop GPUs with more performance, faster memory and new features. The 1660 SUPER delivers 50% more performance than our prior-generation Pascal-based 1060, the best-selling gaming GPU of all time. It began shipping on October 29, priced at just $229. PCWorld called it the best GPU you can buy for 1080p gaming.

We also announced the next generation of our streaming media player with two new models, SHIELD TV and SHIELD TV Pro, which launched on October 28. These bring AI to the streaming market for the first time with the ability to upscale video in real time from high definition to 4K using NVIDIA-trained deep neural networks. SHIELD TV has been widely recognized as the best streamer on the market.

Finally, we made progress in building out our cloud gaming business. Two global service providers, Taiwan Mobile and Russia's Rostelecom with GFN.RU, joined SoftBank and Korea's LG as partners for our GeForce NOW game streaming service. Additionally, Telefonica will kick off a cloud gaming proof of concept in Spain.

Moving to data center, revenue was $726 million, down 8% year-on-year and up 11% sequentially. Our hyperscale revenue grew both sequentially and year-on-year and we believe our visibility is improving. Hyperscale activity is being driven by conversational AI, the ability for computers to engage in human-like dialog capturing context and providing intelligent responses.

Google's breakthrough introduction of the BERT model, with its superhuman levels of natural language understanding, is driving a wave of neural networks for language understanding. That in turn is driving demand for our GPUs on two fronts. First, these models are massive and highly complex. They have 10x to 20x, in some cases 100x, more parameters than image-based models. As a result, training these models requires V100-based compute infrastructure that is orders of magnitude beyond what was needed in the past. Model complexity is expected to grow significantly from here.
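As a back-of-the-envelope check on the scale described above: only the 10x/20x/100x multipliers come from the remarks; the ~60M-parameter image-model baseline and FP32 storage are illustrative assumptions.

```python
# Rough sketch of why language models strain training infrastructure.
# Baseline of ~60M parameters (roughly image-model scale) is an assumption;
# the 10x/20x/100x multipliers come from the remarks above.
image_model_params = 60_000_000
bytes_per_param = 4  # assume FP32 weights

for multiplier in (10, 20, 100):
    params = image_model_params * multiplier
    weight_gb = params * bytes_per_param / 1e9
    print(f"{multiplier:>3}x model: {params / 1e9:.1f}B params, "
          f"~{weight_gb:.1f} GB of FP32 weights alone")
```

Even before activations, optimizer state and training data are counted, the weights alone grow into tens of gigabytes at the 100x end, which is why multi-GPU infrastructure becomes mandatory.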

Second, real-time conversational AI requires very low latency and multiple networks running in quick succession, from de-noising to speech recognition, language understanding, text-to-speech and voice encoding. While conventional approaches fail at these tasks, NVIDIA's GPUs can handle the entire inference chain in less than 30 milliseconds. This is the first AI application where inference requires acceleration. Conversational AI is a major driver for GPU-accelerated inference.
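The sub-30-millisecond figure can be read as a budget split across the pipeline stages listed above. A minimal sketch; the per-stage millisecond numbers are illustrative assumptions, not NVIDIA's figures:

```python
# Hypothetical per-stage latency budget for the conversational AI
# inference chain described above (all stage timings are assumptions).
stage_ms = {
    "de-noising": 2.0,
    "speech recognition": 8.0,
    "language understanding": 7.0,
    "text-to-speech": 6.0,
    "voice encoding": 4.0,
}

total = sum(stage_ms.values())
print(f"end-to-end: {total:.1f} ms")  # prints: end-to-end: 27.0 ms
assert total < 30.0, "pipeline exceeds the real-time budget"
```

The point of the sketch is that the stages run in sequence, so every model must individually be fast; a single unaccelerated stage taking seconds blows the whole budget.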

In addition to this type of internal hyperscale activity, our T4 GPU continued to gain adoption in public clouds. In September, Amazon AWS announced general availability of the T4 globally, following the T4 rollout on Google Cloud Platform earlier in the year. We shipped a higher volume of T4 inference GPUs this quarter than [Phonetic] V100 GPUs, and both were records. Inference revenue more than doubled from last year and continued at a solid double-digit percentage of total data center revenue.

Last week, the results of the first industry benchmark for AI inference, MLPerf Inference, were announced, and we won. In addition to demonstrating the best performance among commercially available solutions for both data center and edge applications, NVIDIA accelerators were the only ones that completed all five MLPerf benchmarks. This demonstrates the programmability and performance of our computing platform across diverse AI workloads, which is critical for wide-scale data center deployments and is a key differentiator for us.

Several product announcements this quarter helped extend our AI computing platform into a new market, the enterprise edge. At Mobile World Congress Los Angeles, we announced a software-defined 5G wireless RAN solution accelerated by GPUs, in collaboration with Ericsson. While this opens up the wireless RAN market to NVIDIA GPUs, it also enables new AI applications as well as AR, VR and gaming to be more accessible at the telco edge.

We announced the NVIDIA EGX Intelligent Edge Computing Platform, with an ecosystem of more than 100 technology companies worldwide. Early adopters include Walmart, BMW, Procter & Gamble, Samsung Electronics and TTEs [Phonetic], and the cities of San Francisco and Las Vegas.

Additionally, we announced a collaboration with Microsoft on Intelligent Edge Computing. This will help industries better manage and gain insights from the growing flood of data created by retail stores, warehouses, manufacturing facilities and urban infrastructure.

Finally, last week we held our GPU Technology Conference in Washington DC, which was sold out with more than 3,500 registered developers, CIOs and federal employees. At the event, we announced that the US Postal Service, the world's largest delivery service with almost 150 billion pieces of mail delivered annually is adopting AI technology from NVIDIA, enabling 10x faster processing of package data and with higher accuracy.

Moving to ProVis, revenue reached a record $324 million, up 6% from the prior year and up 11% sequentially, driven primarily by mobile workstations. NVIDIA RTX graphics and Max-Q technology have enabled a new wave of mobile workstations that are powerful enough for design applications, yet thin and light enough to carry. We expect this to become a major new category with exciting growth opportunities. Over 40 top creative and design applications are being accelerated with RTX GPUs. Just last week at the Adobe MAX conference, RTX-accelerated capabilities were added to three Adobe Creative apps. RTX-accelerated apps are now available to tens of millions of artists and designers, driving demand for our RTX GPUs.

We also continue to see growing customer deployment of data science, AI and VR applications. Strong demand this quarter came from manufacturing, public sector, higher education and healthcare customers.

Finally, turning to automotive, revenue was $162 million, down 6% from a year ago and down 22% sequentially. The sequential decline was driven by a one-time, non-recurring development services contract recognized in Q2. Additionally, we saw roll-off of legacy infotainment revenue and general industry weakness. Our AI cockpit business grew, driven by the continued ramp of Daimler as they deploy their AI-based infotainment systems across their fleet of Mercedes-Benz vehicles.

In August, Optimus Ride launched New York City's first autonomous driving pilot program powered by NVIDIA DRIVE. Urban settings pose unique challenges for autonomous vehicles, given the number and density of objects that need to be perceived and comprehended in real time. Our DRIVE computer and software stack allow the shuttles to safely and effectively provide first- and last-mile transit services. We remain excited about the long-term opportunity in auto; our offering consists of in-car AV computing platforms as well as GPU servers for AI development and simulation.

We believe we are well positioned with the industry's leading end-to-end platform that enables customers to develop, test and safely operate autonomous vehicles ranging from cars and trucks to shuttles and robotaxis.

Moving to the rest of the P&L, Q3 GAAP gross margin was 63.6% and non-GAAP gross margin was 64.1%, both up sequentially, reflecting a benefit from sales of previously written-off inventory, higher GeForce GPU average selling prices and lower component costs.

GAAP operating expenses were $989 million and non-GAAP operating expenses were $774 million, up 15% and 6% year-on-year respectively. GAAP EPS was $1.45, down 26% from a year earlier, non-GAAP EPS was $1.78, down 3% from a year ago. Cash flow from operations was a record $1.6 billion.

With that, let me turn to the outlook for the fourth quarter of fiscal 2020, which does not include any contribution from the pending acquisition of Mellanox. We expect revenue to be $2.5 billion, plus or minus 2%. This reflects expectations for strong sequential growth in data center, offset by a seasonal decline in notebook GPUs for gaming and in Switch-related revenue. GAAP and non-GAAP gross margins are expected to be 64.1% and 64.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.02 billion and $805 million, respectively. GAAP and non-GAAP OI&E are both expected to be income of approximately $25 million.
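The guidance ranges above translate into the following bounds; this quick sketch uses only the figures quoted in the outlook:

```python
# Q4 FY2020 guidance bounds, computed from the figures quoted above.
revenue_mid, revenue_tol = 2.50e9, 0.02  # $2.5 billion, plus or minus 2%
rev_low = revenue_mid * (1 - revenue_tol)
rev_high = revenue_mid * (1 + revenue_tol)
print(f"revenue: ${rev_low / 1e9:.2f}B to ${rev_high / 1e9:.2f}B")
# prints: revenue: $2.45B to $2.55B

# Gross margin guidance, plus or minus 50 basis points (0.5 pt).
for name, mid in (("GAAP gross margin", 64.1), ("non-GAAP gross margin", 64.5)):
    print(f"{name}: {mid - 0.5:.1f}% to {mid + 0.5:.1f}%")
```

So the guided revenue band is roughly $100 million wide, and each margin band spans one percentage point.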

GAAP and non-GAAP tax rates are both expected to be 9% plus or minus 1% excluding discrete items. Capital expenditures are expected to be approximately $130 million to $150 million. Further financial details are included in the CFO commentary and other information available on our IR website.

In closing, let me highlight the upcoming events for the financial community. We will be at the Credit Suisse Annual Technology Conference on December 3, Deutsche Bank's Auto Tech Conference on December 10, and Barclays Global Technology, Media and Telecommunications Conference on December 11.

We will now open the call for questions. Operator, would you please poll for questions?

Questions and Answers:

Operator

[Operator Instructions] And your first question comes from the line of Vivek Arya with Bank of America Merrill Lynch.

Vivek Arya -- Bank of America Merrill Lynch -- Analyst

Thank you for taking my question. For my first one, you mentioned that you were seeing strong sequential growth in the data center going into Q4. Jensen, I was wondering if you could give us some color on what's driving that? And just how you think about the sustainability of data center growth going into next year and what markets do you think will drive that, is it more enterprise, more hyperscalers, more HPC, just some color on near and longer-term on data center? And then I had a follow-up for Colette.

Jensen Huang -- Founder, President and Chief Executive Officer

Yeah. Thanks a lot, Vivek. We had a strong Q3 in hyperscale data centers. As Colette mentioned earlier, we shipped a record number of V100s and T4s, and for the very first time we shipped more T4s than V100s. And most of the T4s are driven by inference. In fact, our inference business is now a solid double-digit percentage of data center revenue and doubled year-over-year. And that is really driven by several factors. As you know, we have been working on deep learning for some time, and people have been developing deep learning models that started with computer vision. But image recognition doesn't really take that much data center capacity. Over the last couple of years, a couple of very important developments have happened. One development is a breakthrough in using deep learning for recommendation systems. As you know, recommendation systems are the backbone of the Internet. Whenever you do shopping, whenever you're watching movies, looking at news, doing search, all of the personalized web pages, just about your entire experience on the Internet is made possible by recommendation systems, because there's just so much data out there.

Putting the right data in front of you based on your social profile, your personal use patterns, your interests or your connections, all of that is vitally important. For the very first time, we're seeing recommendation systems based on deep learning throughout the world.

And so increasingly, you're going to see people roll this out, and the backbone of the Internet is now going to be based on deep learning. The second part is conversational AI. Conversational AI has been coming together in pieces: first, noise processing or beamforming; then speech recognition; then natural language understanding, which then gets connected to a recommendation system, which then gets connected to text-to-speech and the speech encoder. And that has to be done very, very quickly.

Whereas images can be done offline, conversation has to be done in real time. And without acceleration, without NVIDIA's accelerators, it's really not possible to do it in real time; it takes seconds to process the handful of deep learning models involved. Now we're able to do that on an accelerator and do it in real time.

And so there's the combination of these various breakthroughs: the deep learning-based recommender, the speech stack, as well as the natural language understanding breakthrough in what is called the bidirectional encoder transformer. That breakthrough is really quite significant, and since then derivative works have come from that approach, and natural language understanding is really working incredibly well.

And so what we're seeing is the hyperscalers across the world rolling this out; we work with just about everybody. This area of work is really complicated; the models are very, very large. There's a whole bunch of models that have to work together, and they are getting larger. And so that's one large category, which is the hyperscalers.

The second, which we introduced this quarter, is really about taking AI out to the edge. And the reason for that is because there are many applications, whether based on video or other types of sensors of all kinds, where there are vibration sensors, temperature sensors, biometric sensors, all kinds of sensors used in industries to monitor the health of equipment and the conditions of various situations, and you want to do the processing at the point of action. This way you don't stream [Phonetic] the data, which is continuous, back into the cloud, which costs a lot of money. You want to take the action at the point of action because latency matters; maybe you're controlling gates or vehicles or robots or drones or whatnot. And then lastly, one major issue is data sovereignty.

Maybe your company doesn't own all of the data that you're processing, and therefore you have to do that processing at the edge; you can't afford to put that in the cloud. And so across these various industries, retail, warehouse, logistics, smart cities, we're just seeing so much enthusiasm around that. And so we built the platform called EGX, which basically is cloud native, completely secure, takes advantage of NVIDIA's full stack of every single model and is managed with Kubernetes remotely. And you can deploy these services at the edge in faraway places, because IT departments can't afford to go out there to manage them. And we've seen some really great adoption. We announced this last quarter: Walmart is using our platform, BMW is using it for logistics, Procter & Gamble for manufacturing, Samsung Electronics for manufacturing visual inspection. And then, last week we announced probably the largest logistics operation in the world, the United States Postal Service.

And so those are -- I would say that the intelligent edge will likely be the largest AI industry in the world, for rather clear reasons. If you just kind of estimate the size of retail, it's nearly $30 trillion, and if retail stores could be made a little bit more convenient, it could save the industry a lot of money. Then there are warehouses, logistics, transportation, farming; I think there are something like 500 million farms in the world, covering a third of the world's land mass. And so there are a lot of places where AI could be put at the edge and could make a big difference. And I think this is going to be the grand adventure that we started this last quarter with the announcement of NVIDIA EGX.

Vivek Arya -- Bank of America Merrill Lynch -- Analyst

And just another quick follow-up on PC gaming: how are you looking at growth going forward? I know you had a very good quarter in October, and I think in January you are probably guiding to some seasonal declines, but I imagine a lot of that is due to console declines. Just how are you looking at PC gaming growth going from October into January and then next year, as you get competition from two new consoles that are also supposed to come out? Thank you.

Jensen Huang -- Founder, President and Chief Executive Officer

Yeah, during Q4 and Q1, we see normal seasonal declines of console builds, and we also see normal seasonal declines of notebook builds. And the reason for that is because the notebook vendors have to line up all their manufacturing in Q3, so that they can meet the hot selling season in Q4. And so what we see in the Q4 and Q1 time frame are just normal seasonal declines of these systems.

Overall, PC gaming and RTX are doing fantastic. Let me tell you why it's so important. I would say that at this point, I think it's fairly clear that ray tracing is the future and our RTX is a home run. Just about every major game developer has signed on to ray tracing; even the next-generation consoles have had to stutter-step and include ray tracing.

The effects, the photorealistic look, are just so compelling. It's not possible to really go back anymore. And so I think it's fairly clear now that RTX ray tracing is the future. And there are several hundred million PC gamers in the world that don't have the benefit of it yet, and I'm looking forward to upgrading them.

Second, and this is a combination of RTX and Max-Q, we really created a brand new game platform: notebook PC gaming. Notebook PC gaming really didn't exist until Max-Q came along, and our second-generation Max-Q this last season really turbocharged this segment. Over 100 laptops are now available for PC gaming. And my sense is that this is likely going to be the largest new gaming platform that emerges. And we're just in the beginning innings of that.

And so the combination of upgrading the entire installed base of PC gamers to RTX and ray tracing and this new gaming segment called notebook PC gaming is really quite exciting and it's going to drive our continued growth for some time and so I'm excited about that.

Operator

Your next question comes from the line of Aaron Rakers with Wells Fargo.

Aaron Rakers -- Wells Fargo -- Analyst

Yeah, thanks for taking the question, and I have a follow-up if I can as well. Just thinking about the trajectory of gross margin here: solid gross margin upside in the quarter, and you also noted that you had the benefit of selling through some written-off components. So, I guess, the first question is, what was that impact in this most recent reported quarter, and how do we think about the trajectory of gross margin even beyond the January quarter? What should we be thinking about in terms of that gross margin trend? And again, I have a quick follow-up.

Colette Kress -- Executive Vice President and Chief Financial Officer

Sure. Thanks for the question. In the current quarter, the net benefit, which we refer to as the net release of our inventory provisions, primarily associated with our components, was about 1 percentage point to our overall gross margin. As you know, going forward, mix [Phonetic] is still the largest driver of our gross margin over time. Over the long term, we do expect gross margins to improve, and outside of the benefit that we received, we continue to see gross margin improvement for the long term.
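Backing that roughly one-point benefit out of the reported Q3 margins gives a sense of the underlying level; the subtraction below uses only the figures quoted in the prepared remarks, and the "about 1 percentage point" is treated as exactly 1.0 for illustration:

```python
# Strip the ~1 pt inventory-release benefit out of reported Q3 gross margins.
reported = {"GAAP": 63.6, "non-GAAP": 64.1}  # from the prepared remarks
benefit_pts = 1.0  # "about 1 percentage point" net release of provisions

for basis, margin in reported.items():
    underlying = margin - benefit_pts
    print(f"underlying {basis} gross margin: ~{underlying:.1f}%")
```

On those assumptions, the underlying non-GAAP gross margin was roughly 63.1%, still below the 64.5% midpoint guided for Q4.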

Aaron Rakers -- Wells Fargo -- Analyst

Okay.

Jensen Huang -- Founder, President and Chief Executive Officer

Just to add to that, as you know, NVIDIA has really become a software company. If you take a look at almost all of our products, having the world's best GPU, of course, is just the starting point. But almost everything that we do, whether it's in artificial intelligence or data analytics or healthcare or robotics or self-driving cars, almost all of these platforms, gaming, rendering, cloud graphics, start from a really rich stack of software.

You just can't put a chip into these scenarios and have them work. And so most of our businesses are now highly software-rich, and they address the verticals that we focus on. And then secondarily, we're a platform company, so our platforms are available from all the OEMs and cloud providers. As a platform company with a great deal of software intensity, it's natural that the margins would be higher over time.

Aaron Rakers -- Wells Fargo -- Analyst

Yeah. Very helpful. And then you mentioned in your prepared remarks that you've seen hyperscale, your hyperscale business within data center grow both on a quarter-over-quarter as well as year-over-year basis in this last print. You also mentioned that your visibility is improving. Can you just help us understand what exactly you're seeing in the hyperscale guys because it feels like there is some mixed data points out there. What underpins your improved visibility or what are you seeing in that piece of your business?

Jensen Huang -- Founder, President and Chief Executive Officer

Yeah. We had a strong Q3, and we're going to see a much stronger Q4. And the foundation of that is AI; it's deep learning inference. Deep learning inference is understandably going to be one of the largest computer industry opportunities, and the reason for that is because the computation intensity is so high, and for the very first time aside from computer graphics, this mode of software is not really practical without accelerators. And so I mentioned earlier the large-scale movement to deep learning recommendation systems; those models are really, really hard to train.

I mentioned earlier the conversational AI, because conversation requires real-time processing; several seconds is really not practical. And so you have to do it in milliseconds, tens of milliseconds, and our accelerator makes that possible. It's really complicated, and that's the reason why, although so many people talk about it, only we demonstrated it: we submitted results for all five tests of the MLPerf Inference benchmark and we won them.

And the reason for that is because it's far more than just a chip. The software stack that sits on top of the chip and the compilers that sit on top of the chip are so complicated. And it's understandably complicated because a supercomputer wrote the software. This body of software is really, really large, and if you have to make it both accurate as well as performant, it's really quite a great challenge; it's one of the great computer science challenges. This is one of those problems that hasn't been solved, and we've been working hard at it for the last six, seven years now.

So, this is really the great opportunity. We've been talking about inference for some time, and now finally the workloads, a very large and diverse set of workloads, are moving into production. And so I'm enthusiastic about the progress, and seeing the trends and the visibility, inference should be a large market opportunity for us.

Operator

Your next question comes from the line of C.J. Muse with Evercore ISI.

C.J. Muse -- Evercore ISI -- Analyst

Yeah, good afternoon. Thank you for taking the question. I guess I'd love to follow on that last question. Clearly, your commentary here, Jensen, is much more bullish than I think I've heard you before on inference, particularly as it relates to this first benchmark. And so, I guess, can you talk a bit about how you see mix within data center looking out over the next 12 to 24 months, kind of training versus inference, as well as cloud versus enterprise, considering I would think inference over time could grow into a large opportunity there as well.

Jensen Huang -- Founder, President and Chief Executive Officer

Yeah, C.J., that's really good. Let me break it down. So, when we think about hyperscale, there are three parts: training, inference and public cloud. Training: you might have seen the work that was done by OpenAI recently, where they have been measuring and monitoring the amount of computation necessary to train these large models. These large models are not only getting larger; the amount of data necessary therefore has to scale as well. The computation is now growing, doubling every three months. And the reason for that is because of recent breakthroughs in natural language understanding, and all of a sudden a whole wave of problems is now able to be solved.
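"Doubling every three months" compounds remarkably fast. A quick sketch of what that rate implies over a few years; the three-year horizon is illustrative, the doubling period is the figure quoted above:

```python
# Compute demand doubling every 3 months means 4 doublings per year.
doublings_per_year = 12 / 3

for years in (1, 2, 3):
    growth = 2 ** (doublings_per_year * years)
    print(f"after {years} year(s): {growth:,.0f}x the training compute")
# prints 16x after 1 year, 256x after 2 years, 4,096x after 3 years
```

That is far steeper than a Moore's law cadence, which is the underlying argument for accelerator-based rather than CPU-based training infrastructure.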

And just as AlexNet seven years ago was kind of the watershed event for a lot of computer vision-oriented AI work, now the transformer-based natural language understanding model and the work that Google did with BERT is really a watershed event for natural language understanding. This is, of course, a much, much harder problem, and so the scale of the training has grown tremendously. I think what we're going to see this year is a fair number of very sizable installations of [Phonetic] GPU systems to do this very thing, training.

The second part is an untapped market for us, and this untapped market is really inference. The reason why I haven't really spoken about it until now is because we've never really been able to validate our intuition that inference is going to be a large market opportunity for us: that it's going to be very complicated, that the models are very large and very diverse, and that they require large amounts of computation, memory bandwidth and memory, and significant programmability.

And so, I've talked about this before, but I've never been able to validate it until now, of course, with MLPerf and sweeping the benchmarks. And frankly, of the many who attempted it, some submitted results and some of them rescinded them; this benchmark is just really, really hard. Inference is hard.

And then finally, our business results also validated our intuition. And so our engagements with CSPs are now global; we're working across natural language understanding, recommendation systems, conversational AI, just a whole bunch of really interesting problems.

Now, the cloud is the third piece. And the reason why the cloud is growing so well, and represents almost half of many of our CSPs' business, particularly the ones with a public cloud, is because the number of AI start-ups in the world is still growing so incredibly. I think we're tracking something close to 10,000 or more AI start-ups around the world, in healthcare, in transportation, in retail, in consumer Internet and FinTech. The number of AI companies out there is just extraordinary.

I think over the last three, four, five years, some $20 billion to $30 billion has been invested into start-ups. And these start-ups, of course, use cloud service providers so that they don't have to invest in their own infrastructure, because it's fairly complicated. And so we're seeing a lot of growth there. And so that's just the hyperscalers. The hyperscalers give us three areas of growth: training, inference and public cloud. And public cloud is primarily AI start-ups.

Then there's the intelligent edge, which we recently ventured into, and we've been building this platform called EGX for some time. It's cloud native, it's incredibly secure, you can manage it from afar; the stack is complicated and it performs. We've been working with some early adopters, and this last quarter we announced some of them: Walmart and BMW and Procter & Gamble and the largest logistics operation in the world, USPS. And so this new platform, I think, long term will likely be the largest opportunity, and the reason for that is because of the industries that it serves.

Operator

And your next question comes from the line of Harlan Sur with JPMorgan.

Harlan Sur -- JPMorgan -- Analyst

Good afternoon. Thanks for taking my question. There are a lot of concerns around China, trade tensions, economic slowdown, but history has shown that gamers tend to be less sensitive to these macro trends, and in fact, also somewhat insensitive to price changes, at least at the enthusiast level. So, given that China is such a big part of the gaming segment, can you just discuss the gaming demand trends out of this geography?

Jensen Huang -- Founder, President and Chief Executive Officer

Gaming is solid in China, which is also the fastest adopter of our gaming notebooks. These RTX gaming notebooks, or GeForce notebooks, are really a brand new category. This category never existed before, because we couldn't get the technology in there. A notebook like this is both delightful to own as well as powerful to enjoy.

And so we saw really great success with RTX notebooks and GeForce notebooks in China, and RTX adoption has been fast. Your comment makes sense, because most of the games are free to play these days. The primary games people play are eSports, where you want the best gear, and after you buy the gear you can pretty much enjoy it forever; and mobile, which is largely free to play -- you invest in some of your own personal outfits, and after that I think you can enjoy it for quite a long time. And so the gear is really important.

One of the areas where we've done really great work, particularly in China, has to do with social. We have this platform called GeForce Experience, and as an extension of that there is a new feature called RTX Broadcast Engine. It basically applies AI to broadcasting and sharing your content: you can make movies, you can capture your favorite scenes and turn them into art by applying AI, and one of the coolest features is that you can overlay yourself on top of the game and share it with all the social networks without a green screen behind you. We use AI to basically cut you out of the background, irrespective of what noisy background you've got. And as you know, China is a really hyper-social community, with all kinds of popular social platforms for sharing games, user-generated content, short videos and things like that. And so GeForce has that one additional feature that really makes it successful.

Harlan Sur -- JPMorgan -- Analyst

Great. Thank you.

Operator

And your next question comes from the line of Toshiya Hari with Goldman Sachs.

Toshiya Hari -- Goldman Sachs -- Analyst

Hi, guys. Thanks for taking the question. I wanted to ask on automotive. Colette, in your prepared remarks you talked about your legacy infotainment business being down in the quarter. Just curious what percentage of automotive revenue at this point is legacy infotainment versus the newer AI/ADAS solutions. And more importantly, Jensen, if you can speak to the growth trajectory in automotive over the next year and a half, maybe two, that would be appreciated. I ask the question because it feels like we've heard many, many customer announcements and collaborative work that you're doing with customers, yet we haven't quite seen the hockey-stick inflection that some of us were expecting a couple of years ago. So, just kind of curious how we should set our expectations going forward. Thank you.

Colette Kress -- Executive Vice President and Chief Financial Officer

Yeah, Toshiya, let me address the first question regarding our legacy infotainment systems in our auto business. It still represents maybe about half or more of our overall revenue in the automotive business. Our AI cockpit continues to grow, and grow quite well, both sequentially as well as year-over-year, as do our autonomous vehicle solutions, including development services.

Jensen Huang -- Founder, President and Chief Executive Officer

Probably the first AV car that's going to be passenger-owned on the road -- and I think we've talked about it before -- is Volvo. We were expecting them in the late 2020, early 2021 time frame, and I'm still expecting so. And then there are the 2022, 2023 generations; I would say most of the passenger-owned vehicle developments are going quite well. The industry, as you know, is under some amount of pressure, and so a lot of them have slipped out a couple of years or so. This is something that I think we've already spoken about in the past.

Our strategy consists of several areas. One area, of course, is passenger-owned vehicles. The second part is robotaxis; we have developments going with just about every major robotaxi company that we know of. They are here in the States, they are in Europe, they are in China, and when you hear news of them, we're delighted to see the progress. And then the third part has to do with trucks, shuttles and, increasingly, a large number of vehicles that don't carry people -- they carry goods.

And so we have a major development with Volvo -- that was Volvo Trucks. Volvo Cars and Volvo Trucks, as you know, are two different companies: Volvo Cars belongs to Geely, while Volvo Trucks is the heritage Volvo. And we have a major program going with them to automate the delivery of goods. You'll also see us at various GTCs mention companies we're working with on grocery delivery, goods delivery or within-warehouse product delivery. You're going to see a whole bunch of things like that, because the technology is very similar, and the technology we developed for passenger-owned vehicles is starting to propagate down into logistics vehicles.

I continue to believe that everything that moves will eventually have autonomous capability or be fully autonomous. That, I think, is at this point fairly certain. Our strategy covers both the in-car AV computing system -- which is software-defined and scalable -- as well as the AI development and simulation systems. And so when somebody is working on AV and using AI, as most of them are, there is a great opportunity for us. And when they start ramping up and collecting miles of data, it becomes a very large market opportunity for us.

And so I'm anxious to see every single car company be as progressive and aggressive as possible in developing AV, and they will be -- this is a foregone conclusion.

Toshiya Hari -- Goldman Sachs -- Analyst

Thank you.

Operator

Your next question comes from the line of Stacy Rasgon with Bernstein.

Stacy Rasgon -- Sanford C. Bernstein & Co. -- Analyst

Hi, guys. Thanks for taking my questions. I have two data center questions for Colette. First, I want to return to your outlook for strong sequential data center growth in Q4. This business grew 11% sequentially in Q3, and you didn't actually call out strong growth going into that quarter, but you are calling it out for Q4. Does that suggest you expect sequential growth in Q4 to be stronger than Q3? Or would you define what you saw in Q3 as already being strong sequential growth? How should we think about the wording of that in relation to what we saw in Q3 and what you expect for Q4?

Colette Kress -- Executive Vice President and Chief Financial Officer

Sure, Stacy. When we provided guidance for Q3, we had indicated that our growth would stem from both gaming and data center. We delivered that, and we also came in stronger than guidance from both gaming and data center in our Q3 results.

Moving to Q4, we see a sequential decrease in totality versus Q3. We have reminded folks about the seasonality we sometimes have in gaming, associated with our game consoles as well as our notebooks, for which Q2 and Q3 tend to be our strongest quarters -- and therefore a likely seasonal downtick as we move to Q4. What we wanted to do, given the overall decline in totality, was emphasize what we are expecting in terms of data center: strong sequential growth.

Stacy Rasgon -- Sanford C. Bernstein & Co. -- Analyst

So, I guess, to ask the question again, would you define what you saw in Q3 as being strong growth as well?

Jensen Huang -- Founder, President and Chief Executive Officer

I would say our growth of 17% was higher than we expected going into Q3. Again, when we get into Q4 we'll see how the quarter ends in terms of data center, but we are expecting strong growth. Thanks, Stacy.

Stacy Rasgon -- Sanford C. Bernstein & Co. -- Analyst

Okay. Thank you. For my second question: hyperscale you said was up year-over-year now, and that's off of last year when it was at the peak, and inference doubled year-over-year. I know you said enterprise was down year-over-year, but this suggests to me that it wasn't just down year-over-year, it was down a lot year-over-year. How do we think about that in the context of the very strong enterprise growth we've seen over the last few quarters? And going back to your commentary at the Analyst Day, which was almost entirely about the opportunity coming from enterprise growth, what's going on there? What drove that, and what should we expect going forward?

Colette Kress -- Executive Vice President and Chief Financial Officer

Sure. Our enterprise business began to ramp over a year ago from a very small base. We've continued to see great traction there with a lot of the things that we've announced throughout. But keep in mind, in our year-ago quarter we also had very strong systems sales and a very large deal associated with our DGX. So when we look at just one quarter's comparison, we can have a little bit of lumpiness. That year-over-year impact is really just due to an extremely large deal in the prior-year Q3.

Operator

Your next question comes from the line of Mitch Steves with RBC.

Mitch Steves -- RBC -- Analyst

Hey, guys. Thanks for taking the question. Apologies for any background noise. Just one question for Jensen: can you give us a rough update on what GPU utilization was for deep learning applications in 2018 versus where it is today? I'm just wondering how that's advanced over the last year or two.

Jensen Huang -- Founder, President and Chief Executive Officer

Let's see. I would say in 2018 it was nearly all related to training, and this year we started to see the growth of inference, to the point where this last quarter we sold more T4 GPUs for inference than we sold V100s, which are used for training -- and both of them were record highs.

And so the comment that Colette just made comparing year-over-year: we had a large DGX system sale a year ago that we didn't have this year. But if you excluded that, the V100 and the T4 are doing great. They're at record levels; the T4 hardly existed a year ago and is now selling more than the V100, and both are at record highs. So that gives you a feeling for it. That's really the major difference -- inference is really kicking into gear, and my sense is that it's going to continue to grow quite nicely.

Mitch Steves -- RBC -- Analyst

Got it. Thank you.

Operator

And your next question comes from the line of Joe Moore with Morgan Stanley.

Joe Moore -- Morgan Stanley -- Analyst

Great. Thank you. I wonder if you could talk a little bit more about the 5G opportunity that you announced at Mobile World Congress. You talked a lot about AI and IoT services in a CRAN environment, but how big is that opportunity, and can you address the core compute aspect of CRAN with the GPU?

Jensen Huang -- Founder, President and Chief Executive Officer

Yeah. If you look at the world of mobile today, there are players building distributed RANs, with radio heads and the BBU, basically the baseband unit. Moving the software for radio networks into the data center, where people would like it to be, is really an untapped market. The reason for that is that the CPU is just not able to support the level of performance that's necessary for 5G, and ASICs are too rigid to put into a data center. And so the data center needs a programmable solution that is data-center-ready and can support all of the software richness that goes along with the data center, whether it's a VM environment like VMware -- and during the quarter we announced another partnership with VMware. They recognize that increasingly our GPUs are becoming a core part of data centers and cloud.

We also announced a partnership with Red Hat. They recognize the momentum they're seeing from us in telcos, and they would like to adapt their entire stack, from OpenStack to OpenShift, on top of our GPUs. And so now, with VMware and with Red Hat, we're going to have a world-class telco enterprise stack that ranges all the way from hypervisors and virtual machines to Kubernetes.

And so our goal is to really create this new world of CRAN and VRAN: centralized data centers and software-defined networking. The software-defined networking will, of course, include things like in-data-center networking as well as firewalls, but the computationally intensive piece is really the 5G radio.

And so we're going to create a software stack for 5G in basically exactly the same way that we've done for deep learning. We call it Aerial. Aerial is to 5G essentially what cuDNN is to deep learning and what OptiX is to ray tracing. This software stack is going to allow us to run the whole 5G stack in software and deliver the highest performance, incredible flexibility, and scale to as many layers of MIMO as customers need -- and to put all of it in the data center. The power of putting it in the data center, as you know, is flexibility and fungibility.

With the low-latency capability of 5G, you could put a data center somewhere in a regional hub and, depending on where the traffic is going, shift computation from one data center to another -- something you can't do with baseband units in the cell towers, but you can do in the data center, and that helps them reduce cost.

The second benefit is that the telcos would love to be a service provider for data center computation at the edge. Edge applications are things like smart cities, warehouses, retail stores -- whatever it is -- because they're geographically located and distributed all over the world. And so to be able to use their data centers to apply AI in combination with IoT is really exciting to them. I think that's really the future: we're going to see a lot more service providers at the edge, and these edge data centers will have to run the data center and networking software, including the mobile network, as well as 5G, AI and IoT applications.

Joe Moore -- Morgan Stanley -- Analyst

Great. Thank you.

Operator

And your last question comes from the line of Harsh Kumar with Piper Jaffray.

Harsh Kumar -- Piper Jaffray -- Analyst

Yeah. Hey, guys. I apologize for the background noise. Colette, maybe you could give us an idea of gaming and the guidance -- it's down -- and I was wondering if you could give us the impact of the console business versus the laptop business, and an idea of which might be the bigger driver there?

Colette Kress -- Executive Vice President and Chief Financial Officer

I'd say in Q4 both of them are expected to be seasonally down. In the case of the consoles, we do wait on Nintendo to indicate what they need, so we will have to see how the quarter ends on that. But in both cases, these two businesses together have ranged at maybe about $500 million a quarter, and we'll see both of them decline sequentially. Thank you.

Harsh Kumar -- Piper Jaffray -- Analyst

Understood. Thank you.

Operator

I will now turn the call back over to Jensen for any closing remarks.

Jensen Huang -- Founder, President and Chief Executive Officer

Thanks, everyone. We had a good quarter, driven by strong gaming growth and hyperscale demand. We're making great strides in three big-impact initiatives. The world of computer graphics is moving to ray tracing, and our business reflects that. Some of the biggest blockbuster games this holiday season and beyond are RTX-enabled, including Call of Duty: Modern Warfare and the best-selling game of all time, Minecraft.

Design applications used by millions of artists and creators are rapidly adopting RTX ray tracing. We're reinventing computer graphics and look forward to upgrading the hundreds of millions of PC gamers to RTX. Hyperscale demand was strong this quarter and our visibility continues to improve. The race is on for conversational AI, which will be a powerful catalyst for us in both training and inference. And lastly, we have extended our computing platform beyond the cloud to the edge where GPU-accelerated 5G, AI and IoT will revolutionize the world's largest industries. We look forward to updating you on our progress in February.

Operator

[Operator Closing Remarks]

Duration: 61 minutes

Call participants:

Simona Jankowski -- Vice President-Investor Relations

Colette Kress -- Executive Vice President and Chief Financial Officer

Jensen Huang -- Founder, President and Chief Executive Officer

Vivek Arya -- Bank of America Merrill Lynch -- Analyst

Aaron Rakers -- Wells Fargo -- Analyst

C.J. Muse -- Evercore ISI -- Analyst

Harlan Sur -- JPMorgan -- Analyst

Toshiya Hari -- Goldman Sachs -- Analyst

Stacy Rasgon -- Sanford C. Bernstein & Co. -- Analyst

Mitch Steves -- RBC -- Analyst

Joe Moore -- Morgan Stanley -- Analyst

Harsh Kumar -- Piper Jaffray -- Analyst


This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.

Motley Fool Transcribers has no position in any of the stocks mentioned. The Motley Fool owns shares of and recommends NVIDIA. The Motley Fool has a disclosure policy.

