How to Invest In Edge Computing: Why Exploding Data Demand And Creation Are Driving This Trend
So, what do you get when you cross Dell Technologies (DELL), FedEx (FDX), and Switch (SWCH)? Up until last week, you would have been hard-pressed to answer that question, but after last Thursday’s announcement, the answer is a proliferation of edge computing facilities: storage and processing hardware provided by Dell, connectivity provided by Switch, and real estate courtesy of FedEx. Talk about synergy!
But we have a feeling that more than a few readers are scratching their heads asking the obvious question, “Uh, what exactly is ‘Edge Computing’ and why is it expected to have explosive growth in the coming years?”
According to Google Trends data from Alphabet (GOOGL), the popularity of the term “Edge Computing” over the past five years peaked during August and September of this year, following an explosion in cloud storage over the last decade. As we continue to figure out what to do with the peta- and exabytes of data we collectively produce and consume, the next frontier is optimizing how we process all that data and turn it into something useful.
Enter “Edge Computing,” a segment valued at $3.5 billion at the end of 2019 and expected to grow at a compound annual growth rate (CAGR) of 37% over the next five years, according to Grand View Research. As for what it is, IBM (IBM) defines it as “a distributed computing framework that brings enterprise applications closer to data sources such as IoT devices or local edge servers.” At Tematica we tend to say that context and perspective help frame the data we are getting and where we are going. As such, if we want to better understand Edge Computing and its business benefits, it helps to understand the past.
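Those growth figures can be sanity-checked with simple compounding. A minimal sketch, using the $3.5 billion base and 37% CAGR cited above (the five-year horizon is the one stated; the resulting projection is our arithmetic, not Grand View Research's published forecast):

```python
# Project the edge computing market size by compounding the stated CAGR.
# Inputs from the article: $3.5B base (end of 2019), 37% CAGR, five years.

def project_market_size(base_billions: float, cagr: float, years: int) -> float:
    """Compound annual growth: size = base * (1 + cagr) ** years."""
    return base_billions * (1 + cagr) ** years

size_in_five_years = project_market_size(3.5, 0.37, 5)
print(f"Projected size after 5 years: ${size_in_five_years:.1f}B")  # roughly $16.9B
```

In other words, a 37% CAGR implies the segment would nearly quintuple over the period.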
Back when governments and corporations were the only entities that had to manage and process large amounts of data, there were several limitations they had to contend with. The first was cost. According to a page we found courtesy of the Internet Wayback Machine, a megabyte (MB) of storage in the early 1980s cost anywhere from $120 to $300 depending on the vendor. Those costs dropped significantly in the 1990s, and by the end of the decade, 1MB of storage could be had for around $0.10. Nowadays? Asking about data storage in MB will get you laughed out of the room. Western Digital (WDC) offers a 1 terabyte drive (1,000,000 MB) branded for use with Microsoft’s (MSFT) Xbox video game platform that goes for $69.99; the 5TB version goes for $139.99. To be clear, these storage products are targeted at teenagers, not corporations. Storage is so inexpensive that even cloud storage services like Dropbox (DBX) will give users 2GB of storage for free.
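The scale of that price decline is easier to appreciate when everything is expressed as cost per megabyte. A quick back-of-the-envelope comparison using the figures above (taking the low end of the 1980s range):

```python
# Compare storage cost per MB across eras, using prices cited above.
# 1 TB = 1,000,000 MB (the decimal convention drive makers use).

early_1980s = 120.0             # dollars per MB (low end of the $120-$300 range)
late_1990s = 0.10               # dollars per MB
today = 69.99 / 1_000_000       # $69.99 for a 1 TB drive

print(f"Today: ${today:.8f} per MB")
print(f"Decline vs. early 1980s: {early_1980s / today:,.0f}x cheaper")
```

Even against the conservative $120-per-MB figure, today's consumer drive works out to more than a million times cheaper per megabyte.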
The second limitation was physical space. While the roughly 4” x 6” x 1” form factor of hard drives hasn’t changed in decades, what used to hold 20GB in the 1990s can now hold upwards of 20TB, a ratio of 1:1,000. The third limitation was processing power. An Apple PowerBook 165c purchased in the early ’90s (that may or may not still be in one of our closets somewhere) had a 160MB hard drive, 4MB of RAM, and a 33MHz processor. The Lenovo device this is being written on sports a 512GB SSD, 16GB of RAM, and a 3.4GHz processor, all for about half the price of that ’90s-era Mac. Back then, only governments and corporations could afford the storage, processing power, and real estate required to support large-scale databases and to do anything useful with their data. The final limitation (more of a requirement, really) was that both governments and corporations needed to control access to proprietary data.
All these limitations led to the development of the client-server model, where workers sat at what used to be called “dumb terminals” and used a custom program (the front end) that allowed them to either input data or view output (essentially view or print pre-canned reports). Any change to the front end, or any new or modified report, had to be sent to company programmers who would COBOL or FORTRAN their way to a solution. It wasn’t until the mid-1990s that the much smarter and more versatile desktop computer started to become a regular sight in offices.
While programs like VisiCalc and Lotus 1-2-3 had been available for over a decade, companies were just starting to embrace local computing and provide both desktops and programs to their workforce. This shift, along with Microsoft’s Office platform, brought power to the (office) people. For the most part, there was no more waiting for programmers to make and test change after change just so your boss could see a report with the second column shifted over half an inch.
Since those days, computers have moved out of the office and into our homes, cars, pockets, and onto our wrists. Much has changed over the past 20 or so years, but some constants have remained: Moore’s Law (to the point where there is debate over whether it’s still a thing), the exponential growth of the data we and our devices generate and need to store, and the increasing complexity of how we turn all that data into information. It is this last part where “Edge Computing” comes into play.
Back in the client-server corporate model, when “data” referred to (ASCII) numbers and text, a database was shared with anywhere from a couple of thousand to a few tens of thousands of people. Today, data refers to anything that can be digitized, including text, pictures, audio, and video, and the audience for these various data sets can run into the millions. Back then, users were limited to whatever their organization’s programmers would code, which, outside of basic-ish math equations and logic trees, was nothing extremely taxing. In today’s world, companies are offering services like audio and video editing and algorithmic trading development, not to mention online video game platforms and augmented and virtual reality environments. These all use vast amounts of data, and users have come to expect, or outright demand, near-instantaneous execution and results.
Just as Cloud Computing optimized the storage, delivery, and routing of data, Edge Computing, the new trend, looks to optimize the processing of all that data. Take, for example, Nvidia’s (NVDA) latest video processing platform, Nvidia Maxine, which doesn’t just compress video streams but uses artificial intelligence (AI) to trace and recreate the images it is processing. This technology reduces the amount of transmitted data by up to 90%, which greatly reduces bandwidth needs for video calls. It also requires more processing power.
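A 90% reduction is easy to put in concrete terms. A quick illustration (the 3 Mbps baseline bitrate is our assumption for a typical HD video call, not an Nvidia specification):

```python
# Illustrate what a 90% bandwidth reduction means for a video call.
# Assumed baseline: ~3 Mbps for an HD stream (illustrative, not an Nvidia figure).

baseline_mbps = 3.0
reduction = 0.90
compressed_mbps = baseline_mbps * (1 - reduction)

print(f"Baseline: {baseline_mbps} Mbps -> with AI reconstruction: {compressed_mbps:.1f} Mbps")
print(f"Calls per unit of bandwidth: {baseline_mbps / compressed_mbps:.0f}x as many")
```

The same pipe that carried one call could, in principle, carry ten, which is exactly the kind of math that makes carriers and cloud providers pay attention.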
In computing, more has always been better, and the definition of better is now shifting to include “closer” as well. If you are at all familiar with stock exchange services, think of Edge Computing as a global co-location service. If you have no idea what we are talking about, think of Edge Computing as bringing computing power to you. For the same reasons FedEx routes pretty much all of its packages through Memphis, companies are looking to bring computing power physically closer to users. The benefits are several, including faster insights, improved response times, and better bandwidth availability.
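The physical case for “closer” comes down to the speed of light. A minimal sketch, using the standard rule of thumb that light in optical fiber travels at roughly two-thirds its vacuum speed (the distances below are illustrative, not from any specific deployment):

```python
# Estimate the best-case network round-trip time as a function of distance.
# Light in fiber moves at roughly 2/3 of c; real round trips are slower still
# (routing, queuing, protocol overhead), so these numbers are hard floors.

SPEED_OF_LIGHT_KM_S = 300_000                      # ~3e5 km/s in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3     # ~200,000 km/s in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip latency over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

for label, km in [("nearby edge site, 50 km", 50),
                  ("regional cloud, 1,500 km", 1500),
                  ("cross-continent, 6,000 km", 6000)]:
    print(f"{label}: at least {min_round_trip_ms(km):.1f} ms")
```

No amount of faster silicon removes that distance penalty, which is why shaving milliseconds for gaming, AR/VR, and trading workloads means moving the servers, not just upgrading them.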
This might sound silly given the “speed of the internet,” but when you consider that processor technology is pushing the limits of physics and millions (if not billions) of users are making more, and increasingly complex, demands on these systems, anything that gives companies and users an edge becomes table stakes. Research firm Gartner estimates that by 2025, 75% of data will be processed outside the traditional data center or cloud. All the more reason why investors should be watching pre-IPO companies like Mutable, Swim.AI, MobiledgeX, Packet, and Affirmed Networks.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.