
Introduction

The internet has been with us for nearly three decades, and in that time it has evolved from a simple method of sharing information into a powerful computing tool. In the early years, the speed of communication was limited by hardware and infrastructure: conventional telephone lines, combined with the processing capabilities of even the best computers, imposed a hard ceiling on how quickly data could move.

As computers grew more powerful and connections faster, so too did the capacity of the internet. The ability to send more data in less time allowed users to share larger files, such as video, for the first time, but speeds were still limited enough to make real-time streaming as we now know it impractical.
As cloud computing infrastructures have seen explosive growth over the last decade, and especially over the last two years, applications that rely solely on the cloud for data storage and processing are beginning to show signs of strain. This is particularly true of those that require sub-second response times and high availability. This makes sense, because the success of cloud computing hinges on having a reliable connection to a centralized data center via the internet. If the connection goes down, you suffer costly downtime. And even when your applications are up, they must contend with network latency that slows the entire user experience.
It takes time for an application to send data to a remote cloud data center for processing and then wait for the result to come back over the wire before it can respond to an input. When every millisecond can mean the difference between making the right or wrong decision, that's simply not acceptable.
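To make the latency argument concrete, the short sketch below, which is illustrative only, times one request/response round trip to a remote server and compares it with the same decision made locally on the device. REMOTE_URL and the trivial in-process workload are placeholders, not a real cloud processing service or edge workload.

import time
import urllib.request

# Illustrative only: REMOTE_URL stands in for a cloud data center endpoint,
# and the local workload is a trivial stand-in for on-device processing.
REMOTE_URL = "https://example.com/"

def remote_round_trip() -> float:
    """Seconds spent sending a request and waiting for the reply."""
    start = time.perf_counter()
    with urllib.request.urlopen(REMOTE_URL, timeout=5) as response:
        response.read()
    return time.perf_counter() - start

def local_processing() -> float:
    """Seconds spent making the same decision in-process, as a nearby edge node might."""
    start = time.perf_counter()
    sum(i * i for i in range(100_000))  # stand-in workload
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"cloud round trip: {remote_round_trip() * 1000:.1f} ms")
    print(f"local processing: {local_processing() * 1000:.1f} ms")

On a typical connection the remote round trip will usually take tens of milliseconds or more, while the local computation finishes in well under a millisecond; that gap is the latency budget at stake.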
Also, in many cases the clients themselves amplify the unreliable nature of internet connectivity by moving from location to location.
A reliable way to eliminate many of these risks is to move data processing closer to the applications. Enter the concept of “Edge Computing.”