Edge computing optimizes Internet devices and web applications by bringing computing closer to the data source. This minimizes the need for long-distance communications between client and server, which reduces latency and bandwidth usage.
What is edge computing?
Edge computing is a networking philosophy focused on bringing computing as close to the data source as possible to reduce latency and bandwidth use. In simpler terms, edge computing means running fewer processes in the cloud and moving those processes to local places, such as on a user’s computer, an IoT device, or an edge server. Bringing computation to the network’s edge minimizes the amount of long-distance communication that has to happen between a client and server.
What is the network edge?
For Internet devices, the network edge is where the device, or the local network containing the device, communicates with the Internet. The edge is a bit of a fuzzy term; for example, a user’s computer or the processor inside an IoT camera can be considered the network edge, but the user’s router, ISP, or local edge server is also considered the edge. The critical takeaway is that the network’s edge is geographically close to the device, unlike origin servers and cloud servers, which can be very far from the devices they communicate with.
What differentiates edge computing from other computing models?
The first computers were large, bulky machines that could only be accessed directly or via terminals that were an extension of the computer. With the invention of personal computers, computing could occur in a much more distributed fashion. For a time, personal computing was the dominant computing model. Applications ran and data was stored locally on a user’s device, or sometimes within an on-premise data center.
Cloud computing, a more recent development, offered several advantages over locally-based, on-premise computing. Cloud services are centralized in a vendor-managed “cloud” (or collection of data centers) and can be accessed from any device over the Internet.
However, cloud computing can introduce latency because of the distance between users and the data centers where cloud services are hosted. Edge computing moves computing closer to end users to minimize the distance that data travels while retaining the centralized nature of cloud computing.
To summarize:
- Early computing: Centralized applications only running on one isolated computer
- Personal computing: Decentralized applications running locally
- Cloud computing: Centralized applications running in data centers
- Edge computing: Centralized applications running close to users, either on the device itself or on the network edge
What is an example of edge computing?
Consider a building secured with dozens of high-definition IoT video cameras. These are “dumb” cameras that output a raw video signal and continuously stream that signal to a cloud server. On the cloud server, the video output from all the cameras is put through a motion-detection application to ensure that only clips featuring activity are saved to the server’s database. This means there is a constant and significant strain on the building’s Internet infrastructure, as significant bandwidth gets consumed by the high volume of video footage being transferred. Additionally, the cloud server carries a hefty load, since it must process the video footage from all the cameras simultaneously.
Imagine that the motion sensor computation is moved to the network edge. What if each camera used its internal computer to run the motion-detecting application and then sent footage to the cloud server as needed? This would significantly reduce bandwidth use because much of the camera footage would never have to travel to the cloud server.
Additionally, the cloud server would now only be responsible for storing the critical footage, meaning that the server could communicate with a higher number of cameras without getting overloaded. This is what edge computing looks like.
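The camera-side logic in this example can be sketched as simple frame differencing. This is a minimal illustration, assuming frames arrive as flat lists of grayscale pixel values; the threshold value and function names are hypothetical, not part of any real camera SDK.

```python
# Sketch of on-device motion detection via frame differencing.
# Frames are modeled as flat lists of grayscale pixel values (0-255);
# a real camera would supply decoded image buffers instead.

MOTION_THRESHOLD = 10.0  # mean per-pixel change that counts as motion (assumed)

def mean_abs_diff(prev_frame, frame):
    """Average per-pixel brightness change between two frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def frames_to_upload(frames, threshold=MOTION_THRESHOLD):
    """Return only the frames whose change from the previous frame
    exceeds the threshold -- the footage worth sending to the cloud
    server. Everything else stays on the device."""
    selected = []
    prev = None
    for frame in frames:
        if prev is not None and mean_abs_diff(prev, frame) > threshold:
            selected.append(frame)
        prev = frame
    return selected

# Example: three static frames followed by one with significant change.
static = [100] * 16
moving = [160] * 16
print(len(frames_to_upload([static, static, static, moving])))  # 1
```

Only one of the four frames crosses the threshold, so only it would consume upload bandwidth; the decision is made entirely on the camera.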
What are other possible use cases for edge computing?
Edge computing can be incorporated into various applications, products, and services. A few possibilities include:
- Security system monitoring: As described above.
- IoT devices: Smart devices that connect to the Internet can benefit from running code on the device itself, rather than in the cloud, for more efficient user interactions.
- Self-driving cars: Autonomous vehicles must react in real time without waiting for instructions from a server.
- More efficient caching: By running code on a CDN edge network, an application can customize how content is cached to serve content to users more efficiently.
- Medical monitoring devices: Medical devices must respond in real time without waiting to hear from a cloud server.
- Video conferencing: Interactive live video takes quite a bit of bandwidth, so moving backend processes closer to the source of the video can decrease lag and latency.
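To make the caching use case concrete, the sketch below shows the core idea of an edge cache: content fetched from a distant origin is kept locally for a time-to-live so repeat requests never leave the edge. The `EdgeCache` class and its API are illustrative assumptions, not a real CDN interface.

```python
import time

# Sketch of edge caching: an edge node keeps recently fetched content
# locally so repeat requests are served without a trip to the origin.
# EdgeCache is a hypothetical illustration, not a real CDN API.

class EdgeCache:
    def __init__(self, fetch_from_origin, ttl_seconds=60.0):
        self._fetch = fetch_from_origin  # called only on a cache miss
        self._ttl = ttl_seconds
        self._store = {}                 # path -> (content, expiry time)
        self.origin_hits = 0             # how many long-distance requests

    def get(self, path):
        entry = self._store.get(path)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]              # served directly from the edge
        self.origin_hits += 1
        content = self._fetch(path)      # long-distance origin request
        self._store[path] = (content, now + self._ttl)
        return content

# Usage: two requests for the same path cost only one origin round trip.
cache = EdgeCache(lambda path: f"<html>page at {path}</html>")
cache.get("/index.html")
cache.get("/index.html")
print(cache.origin_hits)  # 1
```

Real edge platforms let application code customize rules like the TTL per content type, which is what “customize how content is cached” means in practice.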
What are the benefits of edge computing?
Cost savings
The example above shows that edge computing helps minimize bandwidth use and server resources. Bandwidth and cloud resources are finite and cost money. With households and offices increasingly equipped with smart cameras, printers, thermostats, and even toasters, the number of connected devices is growing fast: Statista predicts that by 2025, over 75 billion IoT devices will be installed worldwide. Significant computation must be moved to the edge to support all those devices.
Performance
Another significant benefit of moving processes to the edge is reduced latency. Every time a device needs to communicate with a distant server somewhere, that creates a delay. For example, two coworkers in the same office chatting over an IM platform might experience a sizable delay because each message must be routed out of the building, sent to a server across the globe, and brought back before it appears on the recipient’s screen. If that process is brought to the edge, and the company’s internal router transfers intra-office chats, that noticeable delay would not exist.
Similarly, when users of all kinds of web applications run into processes that have to communicate with an external server, they will encounter delays. The duration of these delays will vary based on their available bandwidth and the server’s location. Still, these delays can be avoided altogether by bringing more processes to the network edge.
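A back-of-envelope calculation shows why distance matters so much. The round-trip propagation delay is roughly twice the distance divided by the signal speed; the ~200,000 km/s figure (about two-thirds the speed of light, typical for optical fiber) and the example distances are illustrative assumptions, and real delays add queuing, routing, and processing time on top.

```python
# Back-of-envelope latency estimate: round-trip propagation delay
# is roughly (distance * 2) / signal speed. The signal speed and
# distances below are illustrative assumptions.

SIGNAL_SPEED_KM_PER_S = 200_000  # ~2/3 the speed of light, typical for fiber

def round_trip_ms(distance_km):
    """Minimum round-trip time in milliseconds over the given distance,
    ignoring queuing, routing, and processing delays."""
    return distance_km * 2 / SIGNAL_SPEED_KM_PER_S * 1000

print(round_trip_ms(10_000))  # distant cloud server: 100.0 ms
print(round_trip_ms(50))      # nearby edge server:   0.5 ms
```

Even in this best case, a server on another continent adds on the order of a hundred milliseconds per round trip that a nearby edge server simply does not.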
New functionality
In addition, edge computing can provide functionality that was previously unavailable. For example, a company can use edge computing to process and analyze its data at the edge, making it possible to do so in real time.
To recap, the key benefits of edge computing are:
- Decreased latency
- Decrease in bandwidth use and associated cost
- Decrease in server resources and associated cost
- Added functionality
What are the drawbacks of edge computing?
One drawback of edge computing is that it can expand the attack surface. Adding more “smart” devices, such as edge servers and IoT devices with robust built-in computers, creates new opportunities for malicious attackers to compromise these devices.
Another drawback of edge computing is that it requires more local hardware. For example, while an IoT camera needs only a basic built-in computer to send raw video data to a web server, it would require a much more sophisticated computer with more processing power to run its own motion-detection algorithms. However, dropping hardware costs are making it cheaper to build smarter devices.