Are you in a fog or a cloud? Get ready for more complexity to come

So what is the difference between a fog and a cloud? Bandwidth is part of the answer, along with where data needs to be situated.

Slow connections are driving the cloud closer to the actual asset that holds the information the cloud needs. “Fog computing”, or edge computing, moves processing closer to those local computers and devices to solve the bandwidth problem we all, it seems, will be having.

Solutions are focusing far more on how and where we store data, and on how we set about accessing it.

Fogging solutions are coping with the problem that sensor loading creates, altering what goes to the cloud and why.

The reason I’m interested in this, on a site dedicated to discussing platforms and ecosystems, is that fog computing attempts to solve multiple problems at the edge, where much of the innovation we need to understand lies, and where the devices and their users generate the raw data and the insights.

So those wanting to collaborate around platforms need to be able to communicate, in both human and machine terms, at all the edges. The connections within distributed infrastructure will partly drive platform participation and build the ecosystem effect: the data will generate the insights required, and ‘fog nodes’ will increasingly manage the language, data and protocols that allow this connecting universe.

As we connect new kinds of things to the internet we create new business opportunities: pay-as-you-drive vehicle insurance, lighting-as-a-service and machine-as-a-service, for example. These all need different, more open structures, requiring greater levels of collaboration, but we also need growing interoperability of what ‘talks’ to what, increasingly in real time.

A key insight here is that the value of managing and controlling these fog nodes potentially becomes the new battleground for intelligent devices or controllers. These nodes become the true ‘gatekeepers’ in connected networks; they dictate nearly everything going on.

This is where “fogging”, or edge computing, comes in, built on ‘fog nodes’. It gives the cloud greater flexibility to get closer to the “thing” producing the data, creating the potential for a hierarchy of decision-making devices fed by all the data flowing off the “thing” via these ‘controlling’ nodes. A software protocol can then manage the multiple devices, keeping them up to date, functioning, and in constant two-way communication, in real time (or near real time), between the end device and the cloud.

Fog computing can (again) decentralize computation at the edges of the network, with results, not the raw data itself, being sent to the cloud. Fog computing keeps data closer to the ground, the end device, instead of routing everything through central clouds and round trips of analysis.
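
To make that pattern concrete, here is a minimal Python sketch of the idea, assuming a hypothetical fog node summarizing a local sensor stream; every name here is illustrative, not a real fog or IoT API:

```python
import statistics

# Illustrative sketch: a fog node reduces raw sensor readings locally
# and forwards only the compact result to the cloud, not the raw stream.

def summarize_window(readings):
    """Reduce a window of raw readings to a small summary for the cloud."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def send_to_cloud(summary):
    # Stand-in for a real uplink (MQTT, HTTPS, ...); just print here.
    print("uplink:", summary)

# Simulated raw readings arriving at the fog node from a local sensor.
raw_window = [21.3, 21.4, 21.2, 35.9, 21.5]

# One small summary goes upstream instead of every raw reading.
send_to_cloud(summarize_window(raw_window))
```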

Bridging past problems, avoiding future ones

On first look, this fogging solution boils down to attempting to solve a repeating problem: today, as in previous generations of computing power and networks, growing data is running ahead of the bandwidth and infrastructure needed to transmit it. Where and how to deliver it in a timely fashion becomes the constraint.

An estimated 50 billion “things” connected to the one internet by 2020 will need vastly more bandwidth. Increasingly we are recognizing that most cloud models today are not designed for the exploding volume, variety, and velocity of data that the IoT generates.

With a suggested $6 trillion to be spent on IoT solutions over the next five years, data increasingly needs to be processed quickly, and substantially on-site; that changes the view of the cloud and how you use it. Fog computing is a distributed infrastructure, managed at or close to the edge of the network of smart devices. It is becoming a middle layer between the cloud and the hardware, providing alternative solutions for data processing, storage, and points of analysis.

When a sensor on an airplane can produce 40TB of data per hour of flight, and you have 20 to 40 sensors on a plane, are we not into industrial overkill hype? We face ‘hard’ choices about what needs aggregating and sending up into the cloud, and what needs analytical focus to gain insights that have real value and are actually useful. Does fogging or edge computing help? It creates a new collecting and decision point between the device and the cloud that can manage increasing scale.
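
To put those figures in perspective, a quick back-of-envelope calculation (assuming 30 sensors, the mid-point of the 20 to 40 quoted above):

```python
# Back-of-envelope arithmetic using the figures quoted above.
# Assumption: 30 sensors, the mid-point of the 20-40 range.
tb_per_sensor_per_hour = 40
sensors = 30

total_tb_per_hour = tb_per_sensor_per_hour * sensors          # 1200 TB/hour
sustained_gbps = total_tb_per_hour * 1e12 * 8 / 3600 / 1e9    # ~2667 Gbps

print(f"{total_tb_per_hour} TB/hour, ~{sustained_gbps:,.0f} Gbps sustained")
```

That comes to roughly 1,200 TB per hour, around 2,700 Gbps sustained, far beyond any air-to-ground link, which is exactly why the aggregation decision has to happen at the edge.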

Fog computing also offers us ‘intermediary’ solutions, and it will increasingly be the place where you manage the industrial protocols that need to be pushed down to the ‘final’ device to keep it in operational shape; it allows management to scale. Protocols give technological and functional scalability to the edge, fixing and enhancing devices, often remotely.
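
As a sketch of that remote-maintenance role, assuming a fog node that simply tracks the protocol version of each downstream device (the device records and the update call are hypothetical, not any vendor’s API):

```python
# Hypothetical sketch of a fog node's maintenance role: track the
# protocol/firmware version of each downstream device and push updates
# to any that have fallen behind.

CURRENT_VERSION = "2.1"

devices = [
    {"id": "pump-01", "version": "2.1"},
    {"id": "pump-02", "version": "1.9"},   # out of date
    {"id": "valve-07", "version": "2.0"},  # out of date
]

def push_update(device, version):
    # Stand-in for a real downlink to the device.
    print(f"pushing protocol v{version} to {device['id']}")
    device["version"] = version

for device in devices:
    if device["version"] != CURRENT_VERSION:
        push_update(device, CURRENT_VERSION)
```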

Cloud support gives us, for example, our software updates. The problem is that our smart devices do not have enough storage to keep up with our growing needs for apps and functions, so they are constantly transmitting and receiving data to provide the services you expect. The predicted explosive growth in devices is simply creating even more bandwidth issues to manage. We need to determine what data we store, where and, increasingly, on what.

So storage, data access and data transmission are becoming our growing pressure points. Infrastructure and internet speeds are failing to keep up, so we need to find ways to push computing back to the edge of the network, to the final device, through fog nodes that can be deployed anywhere in the network system.

Equally, cyber-physical systems are increasingly vulnerable to attack or ‘compromise’. Intermediary ‘nodes’ can limit the potential damage, so security begins to determine where to intervene.

So what is the value of fogging, or being on the edge? Is it as sharp as you may think?

Opening up to fog computing means placing fog nodes, made up of any devices with computing, storage and network connectivity, to act as ‘intelligent’ controllers, switches, routers, and embedded servers, where you can store, offload and hold gigabytes of network traffic. They can direct different types of data to the optimal place for analysis.
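
A minimal sketch of that ‘directing’ role, assuming three illustrative data kinds and routing rules of my own invention:

```python
# Illustrative sketch of a fog node directing data to the "optimal place":
# act locally on urgent readings, analyze routine telemetry on the node,
# archive the rest to the cloud. Rules and names are assumptions.

def route(message):
    if message["kind"] == "alarm":
        return "local"   # act immediately at the edge; latency matters
    if message["kind"] == "telemetry":
        return "fog"     # aggregate and analyze on the node itself
    return "cloud"       # bulk data: archive for offline analysis

for msg in [
    {"kind": "alarm", "payload": "overheat"},
    {"kind": "telemetry", "payload": 21.4},
    {"kind": "log", "payload": "boot ok"},
]:
    print(msg["kind"], "->", route(msg))
```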

Fog nodes can also hold sensitive data inside your choice of secure networks (most likely back in your own organization). But a question: does this just ramp up complexity, and the need to keep employing those who sold you these solutions (Cisco, IBM), handing them repeat business from all the upgrades you will need? For me, this is the rise of distributed boxes again, solving the same problems of response time, storage, and communication needs. It indicates ‘troubles down the road’.

The whole idea of the cloud, and where data actually needs to reside, is opening up and changing rapidly: data can now live anywhere you store it, on-site, near premises or distant. But since we haven’t yet solved the communications and bandwidth issues, we focus on how to scale and manage data storage at the different levels of need while remaining hampered by those same constraints, so we increasingly need solutions to bridge the gap.

So we go back into a new sales cycle of “things”, chasing the constant problem of never truly solving bandwidth and connection speeds to the devices we use by adding ever more powerful, intelligent devices. The device keeps getting more powerful and more intelligent, but it is still highly constrained when you want to access the network or the internet.

The growing data you generate suddenly gets caught up in analysis, and you lose the low-latency potential to avoid critical failures and downtime, or to implement a plan against cascading system failure. You need solutions that separate bucket-loads of raw data from essential, actionable data, and then decide where that data needs to reside so it can be actioned in a ‘timely’ fashion.
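
As an illustration of that separation, a minimal sketch, assuming an invented critical threshold: actionable readings trigger an immediate local response, while the raw bulk is batched for the cloud:

```python
# Sketch of separating bucket-loads of raw data from actionable data:
# critical readings trigger an immediate local action, the rest are
# batched for the cloud. Threshold and values are illustrative only.

CRITICAL_TEMP = 90.0  # assumed threshold for this example

def handle(reading, batch):
    if reading >= CRITICAL_TEMP:
        # Actionable: respond at the edge, no cloud round trip.
        print(f"local action: shut down, temp={reading}")
    else:
        # Raw: defer, ship upstream later in bulk.
        batch.append(reading)

batch = []
for reading in [72.0, 74.5, 95.2, 73.1]:
    handle(reading, batch)
print("batched for cloud:", batch)
```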

The big technology solution providers are pushing fog and edge solutions. Should we worry?

Cisco Systems and IBM, to name two, are pushing fog or edge computing, and with it the question of where to store all the massive data coming off your machines, because of this growing bandwidth problem we all continue to have. Internet needs are constantly ‘running ahead’ of the infrastructure investment required to deliver what we all need when we need it, or what we want as and when we expect it. In many parts of the US and Europe bandwidth can still be really slow, and if we all increasingly require more bandwidth to ‘live on (off) the cloud’, we have problems.

I feel we have been (partly) along this route before, so I feel a déjà vu coming on.

I love the new language around this concept of fogging or edge computing, for example “it becomes the extension of the cloud computing to the utmost edge of the Network”. But I think we will get the same old results as in the past: 1) adding processing and memory resources to edge devices, 2) pre-processing collected data at the edge, and 3) sending aggregated results to the cloud through platforms made up of the hardware and software architecture that allows this connecting and communicating. Distributed boxes did these tasks, and so, it seems, will the new fog nodes we deploy. They don’t solve the basic problem; they become interim solutions.

My going “back to the past” to see “the future”

So, as we get closer to the places where data is generated, the multiple sensors going inside machines and devices, we run increasingly into real problems that will have to be managed in the future, be these software updates, retrofits, or failed and faulty parts. I know that “Performance Management Protocols” are in the works to help support data exchanges and keep the device secure. The road to interoperability is presently under construction, but will it resolve what happened with past technology solutions running out of control?

Fogging, clouds and connected devices: as we fill our lives with more sensors, are we being sensible, or just ensuring we shift complexity yet again? I really think the business case, not the business ‘hype’, should be evaluated a whole lot harder, knowing what data needs to reside where, and why.

Will the data all reside in some far-off cloud, and should it? Or should it get caught up in some local fogging solution? Perhaps, then, we slowly realize we do not have this utopian world of all our devices exchanging data across connected industry and society; we stay caught up in limited bandwidth and data communication protocol constraints, needing ever more solution support, until the next great idea is hatched.
