Edge Computing: A game changer for service providers?

Edge Computing can enable new services and benefits that increase revenue streams and reduce network transport costs. Bart Salaets, Senior Director Solutions Architect EMEA at F5 Networks, discusses the need for an Edge Computing strategy.

Today’s centralised cloud computing architectures mean unprecedented speed, scale and elasticity are at our fingertips.

In most conceivable instances, the technology is adaptable, agile and entirely fit for purpose. 

It is not, however, optimal for cost-effective, 5G-enabled Internet of Things (IoT) use cases that require ultra-low latency and extreme throughput.

This is where Edge Computing comes in.

Rather than transmitting data to the cloud or a central data warehouse to be analysed, processing can take place at the ‘edge’ of a network, reducing latency, conserving backhaul bandwidth and delivering significantly faster response times.
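
As a rough, back-of-the-envelope illustration of why proximity matters, the sketch below compares round-trip propagation delay to a distant central cloud region versus a nearby edge site. The distances and the roughly 200 km-per-millisecond fibre propagation figure are illustrative assumptions, not figures from F5.

```go
package main

import "fmt"

// Illustrative latency-budget sketch: round-trip propagation delay over fibre.
// Light in fibre covers roughly 200 km per millisecond; the distances below
// are hypothetical examples, not measurements.
func roundTripMs(distanceKm float64) float64 {
	const fibreKmPerMs = 200.0
	return 2 * distanceKm / fibreKmPerMs
}

func main() {
	fmt.Printf("Central cloud region ~1,500 km away: ~%.1f ms round trip (propagation only)\n", roundTripMs(1500))
	fmt.Printf("Edge site ~30 km away:               ~%.2f ms round trip (propagation only)\n", roundTripMs(30))
}
```

Propagation is only one component of end-to-end latency, but it sets a hard floor that no amount of central-cloud optimisation can remove; moving compute closer is the only way to lower it.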

This is a big deal for service providers who are now in a unique position to shake up entire industries (including their own) and offer new, pioneering and profitable services via distributed architectures.

Instead of being centrally anchored, this type of architecture features components deployed on different platforms. These components then cooperate over a communication network to achieve a shared objective. For example, it could entail distributing selected network functions such as the Cloud Radio Access Network (C-RAN) for 5G, or hosting IoT-related applications.

Thanks to its distributed nature, Edge Computing can empower service providers to offer new solutions and services that simultaneously increase revenue streams and reduce network transport costs.

Consider applications that require ultra-low latency (self-driving cars) or high bandwidth (video surveillance). By leveraging Edge Computing, service providers can choose to bring these services to market via Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) options – depending on how deep in the value chain they want to be. Services of this nature cannot be offered via the traditional, centralised public cloud.

Although we’re still in the early stages of Edge Computing’s evolution, we can confidently expect a host of influential IoT use cases to break into the mainstream in the coming years. For example, developers of Augmented Reality (AR), Virtual Reality (VR) and mobile gaming applications are already enthusiastically incorporating Edge Computing capabilities, increasingly reaping the benefits of rapid responsiveness even under heavy bandwidth demands.

Virtualised content delivery network (vCDN) solutions are also easy to monetise. Content providers get to offload traffic from their central servers and service providers save on backhaul and transport costs. The customer gets a rapid and seamless user experience. Everyone wins.

Another eye-catching scenario involves service providers deploying small Edge Compute sites on enterprise campuses to deliver private 5G connectivity and services, thus deftly swerving the need for traditional Local Area Networks (LANs) and Wi-Fi.

Making it all work

So how can service providers take more proactive ownership of these nascent use cases – which are just the tip of the iceberg – and alchemise them into safe, viable and profitable realities?

Without question, they will need intelligent networking and traffic management functions at the Edge Compute site, as well as Application Delivery Controller (ADC) and security services in front of the applications hosted there.
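
As a minimal sketch of what such a traffic-management layer might look like, the example below places a reverse proxy in front of a hypothetical edge-hosted application. The backend address, listening port and use of the Go standard library are illustrative assumptions; a production ADC adds TLS termination, load balancing, health monitoring and much more.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical application hosted at the Edge Compute site.
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	// Reverse proxy standing in for the ADC/traffic-management layer:
	// all client traffic enters here before reaching the application.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	log.Println("edge proxy listening on :8443 (plain HTTP for brevity)")
	log.Fatal(http.ListenAndServe(":8443", proxy))
}
```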

It is worth noting that ADC and security services were traditionally delivered on purpose-built infrastructure, leveraging hardware-based acceleration to deliver high scale and capacity. While most Edge architectures will be built on common off-the-shelf (COTS) servers, the need for high performance remains. Recent innovations such as Intel’s QuickAssist Technology address this demand, giving service providers acceleration for functions such as encryption and compression on COTS platforms.

Edge Computing also calls for a distributed approach to robust application-layer security such as a web application firewall (WAF). One of the biggest mistakes is to assume traditional security controls such as network firewalls are enough. Fortunately, today’s Advanced WAF (AWAF) solutions are capable of dynamically protecting applications with anti-bot capabilities and stopping credential theft using keystroke encryption. It is also possible to extend app-layer DDoS detection and remediation to all applications via a combination of Machine Learning and behavioural analysis. Other technological must-haves include the ability to deliver cloud-native application services for microservices-based applications, as well as API gateway functions to securely interconnect with third parties accessing the Edge Compute platform.
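
To make the distinction concrete, the sketch below shows the kind of application-layer checks a network firewall never sees: an API-key gate (a stand-in for API gateway authentication of third parties) and a naive per-client request counter (a greatly simplified stand-in for ML-driven bot and DDoS detection). The header name, key value and limits are hypothetical, and a real AWAF applies far richer analysis.

```go
package main

import (
	"log"
	"net/http"
	"sync"
	"time"
)

// Naive per-client request counter, reset on a fixed interval.
// A real platform would use behavioural analysis rather than a flat limit.
type rateLimiter struct {
	mu     sync.Mutex
	counts map[string]int
}

func (rl *rateLimiter) allow(client string, limit int) bool {
	rl.mu.Lock()
	defer rl.mu.Unlock()
	rl.counts[client]++
	return rl.counts[client] <= limit
}

func (rl *rateLimiter) resetLoop(interval time.Duration) {
	for range time.Tick(interval) {
		rl.mu.Lock()
		rl.counts = map[string]int{}
		rl.mu.Unlock()
	}
}

// protect wraps an edge-hosted application with two application-layer checks.
func protect(next http.Handler, rl *rateLimiter) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical shared API key for third parties using the platform.
		if r.Header.Get("X-Api-Key") != "demo-key" {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		// Keyed by r.RemoteAddr (address:port) for simplicity.
		if !rl.allow(r.RemoteAddr, 100) {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	rl := &rateLimiter{counts: map[string]int{}}
	go rl.resetLoop(time.Minute)

	app := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("edge-hosted API response\n"))
	})

	log.Fatal(http.ListenAndServe(":8443", protect(app, rl)))
}
```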

While Edge Computing’s reach and influence may still be (relatively) embryonic, there has been growing momentum across EMEA in the past couple of years, particularly in the automotive and manufacturing sectors. Soon, every organisation dealing with multiple interconnected devices and rapid data processing demands will need an Edge Computing strategy, not to mention the technology to make it all work.
