20/09/2019 | Expert opinion - Damien Pasquinelli, CTO, Advanced Solutions, Hardis Group

Edge computing offers an answer to the challenges raised by the growth of the Internet of Things (IoT): it provides the capability to store and process data right where it is generated, in close proximity to sensors and other connected devices. Combining this new-generation distributed architecture with the centralized cloud computing model is a cost-efficient way to distribute artificial intelligence (AI).

Managing the ever-growing volume of AI data in connected devices

The number of projects and experiments involving connected devices is on the rise, with use cases ranging from basic sensors and raw data transmitters (temperature, flow rate, vibrations, etc.) to smart sensors with built-in processing power (such as robots and drones). What's more, these connected devices aren't always situated at the start of the chain (i.e. as data transmitters). In some cases, they may be required to execute commands or act on the data they collect.

Take a robot in a logistics warehouse, for example. When the photos it takes are combined with its geolocation data, the robot may decide automatically to move a pallet for safety reasons, or to trigger an operational process to address an anomaly.
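As a rough illustration, a minimal Python sketch of that edge-side decision logic might look like the following. Everything here is hypothetical: blocking_score stands in for an on-device vision model, and SAFETY_AISLES and the thresholds stand in for site-specific rules.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    image: bytes   # raw camera frame
    x: float       # robot position in warehouse coordinates
    y: float

# Hypothetical zones that must stay clear: (x_min, y_min, x_max, y_max).
SAFETY_AISLES = [(0.0, 0.0, 2.0, 50.0)]

def in_safety_aisle(x: float, y: float) -> bool:
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0, x1, y1) in SAFETY_AISLES)

def blocking_score(image: bytes) -> float:
    """Stand-in for an on-device vision model that scores how likely
    the photographed pallet is obstructing an aisle."""
    return 0.95  # placeholder output for the sketch

def decide(obs: Observation) -> str:
    score = blocking_score(obs.image)
    if score > 0.9 and in_safety_aisle(obs.x, obs.y):
        return "MOVE_PALLET"    # act locally, no round trip to the cloud
    if score > 0.5:
        return "RAISE_ANOMALY"  # hand off to an operational process
    return "CONTINUE_PATROL"

print(decide(Observation(image=b"frame", x=1.0, y=10.0)))  # MOVE_PALLET
```

The point of the sketch is that the decision is taken where the photo is produced; only its outcome, not the image itself, ever needs to leave the robot's immediate environment.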

In IoT projects, AI algorithms are used to process, analyze and contextualize vast quantities of data—all of which requires ever more storage space and processing power.

Edge computing and cloud computing: a winning combination

In these circumstances, fully centralized architectures are no longer efficient. Cloud technologies may be quick to deploy and offer scalable storage capacity and processing power, but there's an undeniable fact: demand for data storage space is outpacing reductions in the cost-per-gigabyte. Yet AI technologies require a lot of data to function properly. Moreover, deep learning phases demand considerable processing power, which pushes costs up even further. And, in a fully centralized architecture, the sheer volume of data passing between connected devices and the cloud can quickly saturate network bandwidth.

Edge computing solves these problems by using local micro-datacenters (or mini-clouds). Storage capacity and processing power are brought closer to sensors and connected devices. Under this model, only the data and information that need to be retrieved are transferred to the central information system.

In our example, the images taken by the robot are stored and analyzed locally. Only those images that add value—i.e. those that are needed to support decision-making—are sent to the cloud. In other words, only exceptions and anomalies are stored and processed centrally.
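A minimal sketch of that filtering loop, assuming a hypothetical local inference function (analyze_locally), a hypothetical central endpoint (CLOUD_ENDPOINT) and an arbitrary threshold:

```python
import json
import time
import urllib.request

ANOMALY_THRESHOLD = 0.8  # assumed value; tuned per site in practice
CLOUD_ENDPOINT = "https://central.example.com/anomalies"  # hypothetical URL

def analyze_locally(image: bytes) -> float:
    """Stand-in for the local inference step; returns an anomaly score."""
    return 0.1  # placeholder output for the sketch

def process_frame(image: bytes, local_store: list) -> None:
    score = analyze_locally(image)
    # Every frame stays in the micro-datacenter...
    local_store.append((time.time(), score))
    # ...but only exceptions and anomalies cross the network.
    if score >= ANOMALY_THRESHOLD:
        payload = json.dumps({"ts": time.time(), "score": score}).encode()
        req = urllib.request.Request(
            CLOUD_ENDPOINT, data=payload,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
```

Under this pattern, the cloud sees a trickle of decision-relevant events rather than the full sensor stream, which is exactly where the bandwidth and cost savings come from.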

This new-generation distributed architecture has multiple benefits: it optimizes infrastructure, drives down associated costs, and reduces latency by freeing up network bandwidth. Yet contrary to the model deployed up to the early 2000s, edge computing is not a fully distributed architecture. The cloud—in whatever format the enterprise chooses—remains a vital central component of the system. Put another way, edge computing is not intended to replace cloud computing. The two models work in harmony to create an overall infrastructure that caters to IoT project needs.

IT expertise, security and local algorithm updates

As IoT projects become more widespread, edge computing will inevitably follow suit. Yet having a decentralized infrastructure requires enterprises to deploy mini-datacenters across different sites. That, in turn, will require them to rethink the IT expertise—especially network expertise—available locally. These skills have been largely overlooked in the past 15 years as IT architectures have become ever more centralized.

Edge computing also enhances data security: because data is processed locally, fewer critical transfers are needed between sites and central servers. It's also more straightforward to isolate devices from the rest of the system, which lowers the risk of attack and makes it easier for enterprises to contain malicious activity. Yet edge computing has the same limitations that come with any distributed model. Because edge servers act as gateways into the enterprise information system, physical access to racks needs to be tightly controlled. Likewise, data transfers between mini-datacenters and central servers must be encrypted and authenticated.
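One widely used way to meet that last requirement is mutual TLS, in which each mini-datacenter presents its own certificate and both ends verify the other against a private certificate authority. A minimal Python sketch, with hypothetical certificate paths and hostname:

```python
import socket
import ssl

# Hypothetical paths: each micro-datacenter carries its own client
# certificate, issued by the enterprise's private CA.
CA_CERT = "/etc/edge/ca.pem"
CLIENT_CERT = "/etc/edge/site-42.pem"
CLIENT_KEY = "/etc/edge/site-42.key"

# Verify the central server against the private CA...
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_CERT)
# ...and present this site's certificate so the server can verify us too.
context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)

with socket.create_connection(("central.example.com", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="central.example.com") as tls:
        # Payload is encrypted in transit and both endpoints are authenticated.
        tls.sendall(b"anomaly batch ...")
```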

Last but not least, self-learning is an inherent feature of AI. The implication for multi-site enterprises is clear: although every site may start with the same algorithm, it will evolve differently according to the activities or constraints of each site. Keeping these locally diverging models up to date therefore becomes a site-by-site operation rather than a single central rollout.
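A toy sketch of that divergence, using an illustrative learning rule rather than any specific framework's API:

```python
from typing import List

def local_update(weights: List[float], sample: List[float],
                 lr: float = 0.01) -> List[float]:
    # Toy learning rule: nudge each weight toward the local observation.
    return [w + lr * (x - w) for w, x in zip(weights, sample)]

base_model = [0.5, 0.5, 0.5]  # the same starting algorithm everywhere

site_a = list(base_model)
site_b = list(base_model)

for sample in [[0.9, 0.1, 0.4]] * 100:   # site A's traffic pattern
    site_a = local_update(site_a, sample)
for sample in [[0.2, 0.8, 0.6]] * 100:   # site B sees different activity
    site_b = local_update(site_b, sample)

print(site_a)  # has drifted toward site A's data
print(site_b)  # has drifted elsewhere: the "same" model no longer is
```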