QLogic Corp. (QLGC)
2012 Analyst Day Conference Transcript
September 6, 2012 12:30 PM ET
Jean Hu - Chief Financial Officer
Simon Biddiscombe - President and CEO
Arun Taneja - Founder, Taneja Group
Roger Klein - SVP, General Manager, Host Solutions Group
Craig Alesso - Director, Product Marketing, Network Solutions
Shishir Shah - SVP and General Manager, Storage Solutions Group
Tom Joyce - Vice President, HP Storage Marketing, Strategy and Operations
Steve Russell - Managing Director, Enterprise Computing, Morgan Stanley
Keith Bachman - BMO Capital Markets
Amit Daryanani - RBC Capital Markets
Aaron Rakers - Stifel Nicolaus & Company, Inc.
Scott Craig - Bank of America Securities-Merrill Lynch
After the break, Shishir Shah will discuss in great detail the exciting technology announcement we had this morning. You will also hear from our customers. I’ll come back to review the financials and Simon will provide a closing summary. At the end, we’ll have a Q&A session to answer all your questions.
So, before we start, just a quick reminder: our comments and the presentation today will be subject to our Safe Harbor statement, which you can read on the screen.
And, with that, please join me in welcoming our CEO, Simon Biddiscombe.
Thank you. We’ll turn my microphone on. Can you hear me?
We can hear you clearly.
Put lines off.
You turned his microphone on, not mine.
You’ve got mine on.
Mine is on. Is this turned on?
That was working.
There’s got to be some joke about not being able to get Facebook stock trading and not being able to get my microphone working. But I don’t have time for that. So, Jean gave you the Safe Harbor.
Thank you for joining us this afternoon. It’s two years since QLogic last had an Analyst Day, and two years ago we gave you an overview of how we saw the evolution of the data center associated with the explosion of data that was ongoing at that point in time, and that we certainly believed was going to be a continuing challenge for those who manage and architect data centers moving forward.
The critical point we tried to make two years ago is, look, we don’t see the world differently than anybody else. We see the world in substantially the same way in terms of the technologies that are going to be relevant and the deployment models that are going to be relevant.
And I think a lot of that has played out over the course of the two years, albeit not at quite the pace that we had expected for certain technologies, and albeit in a different macroeconomic environment than we had expected.
With all that said, today the challenges associated with implementing solutions to process, move and store data, so servers, networks and storage, are incredibly different than they were two years ago. And the explosion of data itself continues to be a bigger challenge than any of us ever anticipated would be the case.
So those trends have continued to give rise to new technologies over the course of the last couple of years. We’ve seen incremental use cases associated with solid state technologies, be they PCIe-based or be they arrays. We’ve seen incremental technology development associated with storage and with networks, and flat networks, flat networks with much higher bandwidth and lower latency than we’d expected to see by this point in time.
QLogic stands to be a beneficiary of every one of the trends that we are going to talk about today. And QLogic stands to be a significant beneficiary of the explosion of data that continues to be the challenge our solutions enable customers to manage within the data center.
And today, with the announcement of our latest products that simplify the deployment of server-side cache and server-side SSD capabilities, we believe that we have once again demonstrated thought leadership in bringing highly innovative solutions to market.
And if you remember, I told you two years ago the vision was to be the market leader in high performance data center connectivity. That’s completely unchanged. I still believe we are the market leader in high performance data center connectivity, and I expect us to remain the market leader in high performance data center connectivity moving forward.
So let’s start with data, let’s talk about what’s going on at this point in time. More data than most data center architects can even come close to being able to manage, let alone start to interpret and apply intelligence to.
We live in the big data age. That’s changed dramatically over the course of the last couple of years, and we’re dealing with data sets that are so large and complex that it’s extraordinarily difficult to bring the capabilities of our traditional data management toolset to bear on big data.
So it’s difficult to capture the information, store the information, search, share and visualize the information, whether that information comes from web logs or sensor networks or RFID, the social networks and so on and so on. The challenges associated with data are just growing incrementally more complex by the day.
Just a couple of anecdotal points. Number one, Wal-Mart now handles more than a million customer transactions every hour, and Wal-Mart’s databases are 167 times the size of all the books in the Library of Congress today, and at some point tomorrow the data is going to be radically different.
Every day, humans create 2.5 quintillion bytes of data; a quintillion is a 1 followed by 18 zeros. And since our last Analyst Day, since we last stood here in New York, almost two years ago to the day, 90% of the world’s digital data has been created.
So we are living in a world with enormous opportunity for those who can bring differentiated solutions to it, and those differentiated solutions can be hardware-centric or software-centric.
So all the information that’s generated has to be processed, moved and stored. And it’s not just processed once, it’s processed on multiple occasions: it’s processed on our iPhones and our iPads, on PCs, on servers, on adapters, and it’s going to be processed on storage arrays.
So any individual byte today is processed on multiple occasions and moved across multiple networks: it’s moved across wireless networks, mobile wireless networks, cellular networks, across enterprise storage networks, across enterprise Ethernet networks, across your home networks.
And then ultimately it’s stored, and it’s not stored once, it’s stored on many, many occasions, whether it’s stored on personal devices and then stored and backed up in the cloud, and so on and so on and so on.
So more data is driving more infrastructure across a multitude of protocols, across servers, networks and storage. And everything we do in our business lives and in our personal lives continues to drive that explosion of data as we move forward. Structured data, and unstructured data as well.
This is my favorite example actually, so don’t pay too much attention to what it says on the screen. We got this from Teradata. What it says is that 1 terabyte of data is generated by commercial aviation, and it’s kind of interesting that that’s actually just the commercial data, the reservation systems and the processing of the FAA information and so on and so on.
Every one of those engines has the ability to generate 10 terabytes of data, through the sensors that exist in it, every 30 minutes of flight time. So imagine the challenges for a Boeing or an Airbus or an airline data center architect or manager as he tries to capture, process and store the amount of information that’s being generated in that kind of environment.
The other of my favorite recent examples is BMW. BMW’s data center managers went from 64,000 unique users to 10 million unique users as every vehicle gained the ability to send BMW its sensor information. Have you ever, as a data center manager or architect, started to think about the challenges associated with capturing, processing and storing that amount of data?
And the answer is that we have seen, from servers to networks to storage, a whole multitude of critical technology advancements over the course of recent years, and QLogic has been an important part of how many of those advances have occurred. I’ll show you how we think about that as I work through the remainder of my presentation, and each of the presenters you hear from later today will also give you a view on how we think we contribute to the evolution of technology in this regard and, importantly, how we think about that moving forward.
But let’s take a quick look at what we’re seeing across each of these technologies, starting with servers. Long gone are the days when single-socket, single-core CPUs were the servers of choice across much of the enterprise market and across environments where data management is critically more complex than it has been in the past.
Today’s physical servers have multiple sockets, six and eight processors, and each of those processors is capable of having 10 cores, or whatever the latest number is at this point in time based on Romley.
And that will continue to grow as we move forward. On top of those cores are running VMs, those VMs are running unique operating systems, those operating systems are all running unique applications, and every one of those applications is part of what’s driving the digital economy and the digital lives that we all live on a day-to-day basis.
But those servers are hungry, very hungry actually. Long gone are the days when you would assume you could feed that processing capability with a 1-gig Ethernet wire or a couple of 1-gig adapters, or with 2 or 4-gig Fibre Channel capabilities. The applications demand data, they demand it quickly; those applications demand data essentially in real time.
Within the context of the enterprise and the data center, time is money, and one of the critical value propositions associated with our technology announcement a little earlier today, our Mt. Rainier technology, is the ability to reduce time to data, bringing data closer to the processor and allowing applications to have access to data far faster than they have at any point in the past.
We think this is one of the key elements of what has made QLogic a successful company, and we think the continued innovation that we will put forth is a key element of what will continue to drive our success across the markets that we are serving.
By the way, behind every one of those tablets and iPhones and whatever other products we are holding today, there is an enormous demand for servers: every 600 smartphones or 122 tablets require a new server to be installed.
Servers that are behind the scenes processing emails, web searches, Facebook updates, Instagram. Instagram I still don’t understand; two years ago it didn’t even exist, and today my kids think Instagram is the coolest thing on earth. When I look at the actual amount of processing and storage associated with it, it’s extraordinarily high.
So high performance servers and lower-end servers alike are driving demand for incremental I/O, and hence more high performance connectivity, which is all about what QLogic brings to the market.
It’s a different story for network technologies. Networks clearly have to be scalable and they clearly have to continue to be agile. We’ve seen really three distinct technology trends within networks over the last couple of years.
First is clearly speed, speed being the combination of bandwidth and latency. On bandwidth, we are seeing networks on the Ethernet side go from 1 to 10 to 40 to 100-gig, and now for the first time we’re talking about terabit Ethernet.
And on the Fibre Channel side we’ve gone from 8 to 16 to 32; whether or not there will be 64, I think only time will tell. We didn’t think there would be 4-gig Fibre Channel, but it clearly has a longevity that’s way beyond the expectations that people had.
Fibre Channel continues to be rock solid, it continues to be the storage protocol of choice within the enterprise, and you’ll hear that as we move through the presentations this afternoon as well.
Both Ethernet and Fibre Channel networks are being optimized for latency, and we believe that continues to offer a significant opportunity for our company as we think about differentiated solutions that leverage capabilities that exist within QLogic.
That’s the first trend, speed. The second trend within networks is clearly convergence, and convergence clearly isn’t what people expected it to be, but we still believe in converged networks. We still believe in a ubiquitous Ethernet world, and the value of FCoE in particular, in our case, is something that will continue to gain traction as we move forward.
We’ve seen that anecdotally: in talking to our major OEMs, we know there has been something of an uptick in FCoE post the Romley launch. Part of how we always characterized our expectations for that market was that you needed 10-gig, and with Romley came 10-gig, and with 10-gig comes the ability to move FCoE more effectively than ever before. So we continue to believe in convergence in networks.
And then, finally, we continue to believe in the flattening of networks. When we talk about flattening, we’re talking about taking out hops, the removal of individual physical switches that add depth to the networks.
And we believe in that for two reasons. Number one, latency: the more hops there are, the more latency you introduce. And number two, management: the more data you’re trying to move from switch to switch to switch before you finally get to a storage device or a server, the more complex the management of that data becomes.
So you’ll hear from Tom Joyce, who is the VP of Marketing for HP’s storage business, about how we have enabled a much flatter view of storage networks, in this case in particular for HP in their Flat SAN implementation.
So we’ve seen three trends, and those three trends have continued to be prevalent within networking: number one, speed; number two, convergence; and number three, the flattening of networks. And we continue to believe that every one of those can be a positive driver for QLogic.
There is an explosion in the number of users and devices, and every one of those users and devices ultimately requires network connectivity, more ports than ever before. Part of the belief system around Fibre Channel that people still haven’t come to grips with is this: despite everything that has gone on in technology evolution over the 15 years that Fibre Channel has existed, it keeps growing.
As an industry, we shipped more Fibre Channel last year than ever before, and in 2010 we shipped more Fibre Channel than ever before. I’m not sure it will be the same this year; the macro impact is certainly dampening the demand environment this year. But Fibre Channel hasn’t gone away, and we shipped more last year than we’ve ever shipped before.
So you’ve got the explosion in the number of users, all requiring access to networks, and ultimately that access is about mobile networks, enterprise networks, carrier wireline networks and so on, and we continue to expect there will be an explosion in the number of ports.
And then finally, there is clearly going to be an explosion in the size of the pipes, fatter pipes. The only way you are going to be able to deal with the expectations associated with the amount of rich media that people expect to continue to drive across the networks is with fatter pipes.
So, 1 million minutes of video crossing the network every second in 2015; that’s what Cisco expects to see in 2015. The network requirements associated with that are absolutely enormous.
I’ll come back to what a zettabyte is. 2010 was the first year a zettabyte of data was actually generated, and I’ll come back in a couple of slides; we may have to look ahead to try to figure out what a zettabyte really is.
Suffice to say PowerPoint doesn’t actually know how to spell zettabyte, which caused us a tremendous amount of consternation. So, 4.8 zettabytes by the end of 2015. More data driving the need for more I/O, which is very clearly what QLogic is all about.
And then finally, within the context of what’s going on in the storage environment, everything is retained. Even if you think you deleted it, the chances are it’s being retained somewhere, maybe retained for compliance, maybe just retained for future access.
And so what we are seeing is significant technology advances on the storage side: things like PCIe-based SSDs, things like flash-based arrays, things like thin provisioning, and so on and so on and so on.
So in the storage market we continue to see technology evolution in ways that hadn’t previously been anticipated. QLogic is a critical part of storage I/O, regardless of which protocol it is: Fibre Channel, iSCSI, FCoE, Ethernet.
QLogic is a critical part of storage I/O, and you are going to hear more from Roger on the target market specifically, and on the opportunity that exists for us in the target market, as we work our way through the presentations.
This is a zettabyte. You can’t even see the terabyte on this scale, but that’s a zettabyte of data, all demanding high performance storage I/O, which is all about what QLogic does. So whether it’s servers and the evolution of servers, networks and the evolution of networks, or storage and the evolution of storage, there have clearly been critical technology advances over the course of the last couple of years since we last talked.
And QLogic clearly stands to be a beneficiary of each of those, based on the solutions it has in the market today and those it will continue to bring to market moving forward. But unfortunately, that doesn’t solve the problems associated with the explosion of data.
Network traffic is expected to grow by 32% and storage capacity is expected to grow by 50%, and that’s between now and 2015. But we are going to have to handle that against the backdrop of spending increasing by somewhere around 7%, and I will grant you that a little bit more or less of that may go to network and storage technologies, okay.
But there is a huge disconnect between the rate of growth of traffic and the rate of growth of spend, and that’s given rise to different deployment models. It’s given rise to virtualized data centers. It’s given rise to the cloud. It’s given rise to the converged enterprise.