Posted on 14 March 2023

Hybrid cloud is a vital and necessary style of cloud computing. However, it can break multiple aspects of the cloud value proposition. CIOs must consider a continuum of next-generation cloud options called “distributed cloud” to retain cloud advantages.

Key considerations

  1. Distributed cloud is the first cloud model that incorporates the physical location of cloud-delivered services as part of its definition.

  2. Distributed cloud fixes discontinuities in the cloud value chain that are often broken in hybrid cloud models. In distributed cloud models, the control plane is consistent and responsibility for the distributed “substations” remains in the hands of the cloud provider.

  3. Distributed cloud exists in multiple variants from on-premises public cloud deployments in enterprises to citywide and mobile edge deployments serving local communities and networks.

  4. Distributed cloud will cross two phases of deployment, the first being data center deployments and the second being consumption of location-specific offerings delivered from third-party entities such as telcos and city governments to provide cloud services to local communities.

Hybrid cloud vs hybrid IT

Interest in hybrid cloud computing is rising, as evidenced by a 15% growth in requests for hybrid cloud discussions from clients over the past three years. The reason for this is simple: Many customers need to deal with technologies that they already own and operate (mostly in their own data centers).

These customers cannot abandon existing technologies in favor of complete and immediate migration to the public cloud model. Sunk costs, latency requirements, regulatory and data residency requirements, and even the need for integration with non-cloud, on-premises systems hold them back. Instead, they use a combination of private-cloud-inspired and public cloud styles of computing, which create a hybrid IT environment.

Organizations have discovered that building a private cloud is hard. Most private cloud projects do not successfully deliver the cloud outcomes and benefits organizations seek. As a result, the vast majority of client conversations about hybrid cloud are, in fact, not about true hybrid cloud scenarios (that is, public cloud plus private cloud). Instead, these conversations are about hybrid IT scenarios, where non-cloud technologies and cloud-inspired private clouds are used in combination with public cloud services (see Figure 1).

Hybrid cloud must involve some combination of true cloud styles (private, public, community). For that reason, using VMware on-premises in conjunction with cloud-native services in the public cloud is a hybrid IT scenario, not hybrid cloud, because the VMware capabilities are not delivered as cloud services. However, the vast majority of customers do not initially care about the distinction, which tends to lead them to use cases that emphasize their dependency on their own skills and technologies rather than the need to shift responsibility to a service provider.

To be clear, there is nothing wrong with using hybrid IT options when they are needed, cloud or not. In fact, they are recommended for certain critical use cases. The problem occurs when (just as with private cloud) tech providers and their customers fail to realize that building and supporting hybrid combinations is hard. Further, they often fail to realize that operating and maintaining the private and public cloud parts of a hybrid scenario separately can undermine or even break many of the cloud computing value propositions.


Figure 1. Hybrid Cloud Versus Hybrid IT


Hybrid Cloud Breaks Cloud Value Propositions

Cloud computing promises that consumers of cloud services will gain advantages from several key propositions:

  • Shifting the responsibility and work of running hardware and software infrastructure to cloud providers.

  • Leveraging the economics of cloud elasticity (scaling up and down) from a large pool of shared resources.

  • Benefiting from the pace of innovation in sync with the public cloud providers.

  • Leveraging the cost economics of global hyperscale services.

  • Leveraging the skills of large cloud providers in securing and operating world-class services.

  • And others …

Hybrid cloud and hybrid IT both break these particular value propositions. The reason is that one part of the hybrid is architected, owned, controlled and operated by the customer and the other by the public cloud provider.

This means the customer retains responsibility for their part of the operation and doesn’t leverage the capabilities (such as the skills, innovation pace, investments and techniques) of the public cloud provider. The economics available to a customer operating these hybrid environments are hampered by the reduced scale and elasticity of the services. Innovation is a problem, as the customer struggles to stay in sync with a public cloud provider whose services evolve faster than most enterprises can match. Finally, the use of different control planes to administer customer-owned technology breaks the consistency of operations between the public and private parts of the model.

While some of the newer generation of packaged hybrid cloud offerings from technology and service providers can help reduce the impact of these shortcomings, they are inherent to the model. Therefore, these shortcomings must be kept in mind as the hybrid cloud solution evolves.


Introducing Distributed Cloud, the Next Generation of Cloud Computing

Definition of Distributed Cloud: The distribution of public cloud services (often including necessary hardware and software) to different physical locations (i.e., the edge) while ownership, operation, governance, updates and evolution of the services remain the responsibility of the originating public cloud provider.

Distributed cloud computing is a style of cloud computing where the location of the cloud services is a critical component of the model. Historically, location has not been relevant to cloud computing definitions. In fact, location has been explicitly abstracted away from the service, which inspired the term “cloud computing” in the first place.

While many people claim that private cloud or hybrid cloud requires on-premises computing, this is a misconception. Private cloud can be done in a hosted data center or, more often, in virtual private cloud (VPC) instances, which are not on-premises. Hybrid cloud, likewise, does not require that the private components of the hybrid are in any specific location. However, with the advent of distributed cloud, location formally enters the definition of a style of cloud services.

Distributed cloud supports tethered and untethered operation of like-for-like services. With distributed cloud, cloud services from public cloud providers are “distributed” out to specific and varied physical locations. This enables a key characteristic of distributed cloud: low-latency compute, where the cloud services operate physically closer to those who need them. It also ensures that the control plane administering the cloud infrastructure is consistent from public to private cloud and extends across both environments. This can deliver major performance improvements by eliminating latency issues, and it reduces the risk of global network-related outages and control plane inefficiencies.
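The low-latency benefit can be sketched as a simple endpoint-selection problem: because every substation is administered through the same control plane, a workload can target whichever endpoint meets its latency requirement. The endpoint names and latency figures below are purely illustrative assumptions, not any provider’s actual topology:

```python
# Hypothetical endpoints: one central public cloud region plus two
# distributed cloud "substations" sharing the provider's control plane.
# Latencies are illustrative round-trip times in milliseconds.
ENDPOINTS = {
    "public-region": 48.0,
    "on-prem-substation": 2.5,
    "metro-substation": 9.0,
}

def pick_endpoint(latencies_ms, max_latency_ms=20.0):
    """Return the lowest-latency endpoint meeting the requirement."""
    eligible = {ep: ms for ep, ms in latencies_ms.items()
                if ms <= max_latency_ms}
    if not eligible:
        raise RuntimeError("no endpoint satisfies the latency requirement")
    return min(eligible, key=eligible.get)

print(pick_endpoint(ENDPOINTS))  # → on-prem-substation
```

In a real deployment the provider’s placement machinery would make this choice transparently; the point is that the central region and the substations are interchangeable targets behind one consistent control plane.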

For those who would ask, “Isn’t this just edge computing?” the answer is yes and no. All instances of distributed cloud are also instances of edge computing. However, not all instances of edge computing are distributed cloud.

Specific advantages of distributed cloud include:

  • Increased performance from low latency services that are closer to those who need them.

  • Increased compliance with regulatory requirements that data must be in a specific customer location.

  • Reduced network failure risk, because the cloud services can reside in local or semi-local subnets and thus operate intermittently untethered.

  • A dramatic increase in the number and availability of locations where cloud services can be hosted or from which they can be consumed (compute zones).
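The “intermittently untethered” behavior above can be illustrated as a substation client that keeps serving locally and queues replication work while the link to the parent public cloud is down. This is a minimal toy model; the class, its methods and the link-change callback are all hypothetical:

```python
from collections import deque

class SubstationClient:
    """Toy substation: serves locally, syncs upstream when tethered."""
    def __init__(self):
        self.pending_sync = deque()  # writes awaiting upstream replication
        self.tethered = True         # link to the parent public cloud

    def write(self, record):
        # A local write always succeeds; replication is deferred while untethered.
        self.pending_sync.append(record)
        if self.tethered:
            self.flush()

    def flush(self):
        # Replicate queued writes to the parent cloud (stubbed out here).
        while self.pending_sync:
            self.pending_sync.popleft()

    def on_link_change(self, up):
        self.tethered = up
        if up:
            self.flush()  # catch up once connectivity returns

client = SubstationClient()
client.on_link_change(False)        # network partition: now untethered
client.write({"sensor": 1})         # still works locally
assert len(client.pending_sync) == 1
client.on_link_change(True)         # tether restored; backlog drains
assert not client.pending_sync
```

Real substations would layer conflict resolution and durability on top of this, but the essential pattern is the same: local operation continues and the parent cloud catches up when the tether returns.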

IBM and Oracle were among the first companies to launch what could be described as distributed cloud offerings (namely, IBM’s Bluemix Local and Oracle’s early Cloud Appliance notions). At the time, however, neither company had the public cloud credibility to win customer permission to own and operate a complete technology stack delivering cloud outcomes both on customer premises and in the public cloud. Customers complained about having a vendor-owned stack sitting in their data centers. Microsoft has had more success with Office 365 variants that approximate a distributed cloud deployment.

One could ask, “Has anything changed?” In short, times have changed and so have the players. Now, multiple companies have the public cloud presence and successes to build trust with customers in having the cloud provider own and operate all aspects of the solution — from public to private. Public cloud leaders operate multiple regions and zones of compute where a distributed cloud model is inherent to their operation but not yet open to customer-defined locations. In addition, the technologies and services delivered have matured to a point where cloud infrastructure functionality and converged infrastructure are accepted means of delivering significant and even critical cloud-based outcomes.

Companies such as Microsoft, IBM and Oracle have increased their ability to offer packaged hybrid cloud offerings that customers recognize as consistently valuable for certain use-case scenarios. This leads to greater customer willingness to host vendor-owned solutions on their premises. Newer offerings such as Azure Stack, Google Anthos, Amazon Web Services (AWS) Outposts and even IBM’s Red Hat OpenShift are also adding to the movement toward the distributed cloud vision.

Another driving factor was the maturation of enterprise cloud strategies. As strategies moved beyond the easier early workloads, which suited a centralized model, the need to execute outside of vendors’ central cloud services became acute due to regulatory/compliance demands, perceived risks from lack of control and visibility, and connectivity/latency issues. Finally, cloud providers are moving beyond the large primary global markets, such as the U.S. and EU. The needs of smaller geographies and the desire to host services “in country,” combined with the difficulty cloud vendors face in expanding data centers into these small markets, create incentives to package cloud services and partner with local delivery partners.

With the newer packaged hybrid models, the center of gravity is public cloud, extended out to private cloud, not vice versa, as with previous attempts by providers with limited public cloud presence.


The Distributed Cloud Continuum

The “distributed cloud continuum” is a conceptual juxtaposition of different styles of distributed cloud deployment (aka substations), designed to support location-specific use cases. Each style is designed to support the distribution of some cloud service capabilities from the public cloud of the originating provider.

As with the definition of distributed cloud itself, operation, management and evolution of the different deployment styles in the continuum must be the responsibility of the public cloud provider, no matter the location. We call this a continuum because the adjacent styles may not be radically different from one another in implementation technology, while the extremes of the use cases may be quite distinct (see Figure 2).


Figure 2. The Distributed Cloud Continuum

Style 1: On-Premises Public Cloud: Designed primarily to support deployment into a customer’s private data center. We refer to this as on-premises public cloud because the control plane for operation and the security perimeter are defined by, and in the control of, the public cloud provider’s public cloud. Most initial deployments will keep access credentials private to the customer. Some might call this a “managed private cloud,” but this amounts to an implementation choice rather than a conceptual distinction. Calling this a private cloud would be shortsighted: many companies may choose to open access to partner companies, and the public cloud provider could use the substation as a multitenant host for other customers, effectively a pseudo availability zone and thus part of its public cloud. Providers such as AWS, Azure, IBM and Google all have options that can be deployed in this mode.

On-premises public cloud substations may be delivered as a pre-integrated hardware solution or as a software solution where the hardware is integrated at deployment time. Which approach to integrating hardware is more effective is debatable. However, custom-designed hardware shows advantages in other styles of deployment within the continuum (see Styles 3, 4 and 5).

Style 2: Metro Area Community Cloud: Delivery of cloud substations as local capabilities for a city or metro area. The delivery is not necessarily in a data center and is open to allow VPC connections. Multiple customers will connect to these local resources. AWS and Microsoft have moved into zone-based approaches that support this option.

Style 3: 5G Mobile Edge Cloud: Delivery and integration of a distributed cloud substation into a 5G telco/carrier network. This model leverages the telco’s customer relationships, partner network and 5G speeds. Example providers include AWS, Microsoft, Google and IBM, which have all made efforts on this front.

Style 4: IoT Edge Cloud: Delivery of a distributed cloud substation designed to interact directly with edge devices or to be edge devices hosting public cloud services. This would include Internet of Things (IoT) consumer and industrial capabilities, as well as specific use cases, such as collection, dissemination and movement. As with on-premises public cloud, these resources may be restricted in access and visibility to a single company if needed. Most cloud providers are offering edge support and data-oriented edge devices. Most do not rise to fully distributed cloud options yet. However, they are evolving.

Style 5: Global Network Edge Cloud: Delivery of a variety of distributed cloud substations designed to integrate with global network infrastructure, such as points of presence, cell towers, content delivery networks (CDNs), routers and hubs. Any network point of connectivity, transport or interaction could potentially host such a substation. Typically, this will require specific hardware designs. This style is not in general availability at this time.


Two Phases of Distributed Cloud

In practical terms, distributed cloud will evolve in two distinct phases (see Figure 3):

Phase 1 — Like-for-Like Hybrid: Enterprise customers will buy cloud substations to deliver on the promise of hybrid cloud and to avoid latency-based problems while retaining cloud value propositions. These customers will not initially embrace the idea of opening their substations to near neighbors. They will keep the substation on their premises, private to themselves. This will have the effect of “saving” private cloud and enhancing hybrid cloud by having public cloud providers take responsibility for everything.

Note that not all of today’s newly packaged hybrid notions fit this model; integrated-technology hybrids are another type that does not fit the distributed model. Google Anthos, IBM/Red Hat and other portability approaches are examples.

Phase 2 — Next-Gen Cloud: In Phase 2, utilities, universities, city governments and telcos (among others) will buy cloud substations and open them for use by near neighbors. Near neighbors can be defined in terms of physical proximity (for example, in the same country) or “community” affiliation (for example, industry or high-performance computing [HPC] community). This begins to cement the idea that distributed cloud represents the foundation of the next generation of cloud computing. This also reflects the need for Styles 2 and above in the distributed cloud continuum. Next-generation cloud will work based on an assumption that cloud substations are everywhere — much like Wi-Fi hot spots.

The key here is that this realizes the holy grail of an older concept: “grid computing.” We are returning to the idea that many companies can share resources with one another. When distributed cloud substations begin to proliferate, those who require their services need not always buy their own. Instead, they can use substations that exist in nearby locations. CIOs will benefit from the notion that their cloud services will expand as the number of substations expands. This can reduce the effort to build larger-scale regions of services and even more robust zones of compute.

In both phases, location becomes more transparent again. The vision allows a customer to specify to a provider, “I need X to comply with policies Y and latencies Z,” and then let the provider configure everything automatically and transparently. This could represent a future Phase 3, and other phases are possible. For example, a cloud provider could “cache” its capabilities automatically on-premises, perhaps even all the way to the client or browser in some cases. The future is ripe with possibilities.
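The “policies Y and latencies Z” request could take the shape of a small declarative placement contract that the provider resolves against its catalog of substations. Everything below is a hypothetical sketch: the field names, substation catalog and selection rule are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class PlacementRequest:
    service: str           # what the customer needs ("X")
    residency: str         # policy: country where data must stay ("Y")
    max_latency_ms: float  # latency requirement ("Z")

# Hypothetical catalog of substations the provider could place work on.
SUBSTATIONS = [
    {"name": "metro-fra-1",    "country": "DE", "latency_ms": 8.0},
    {"name": "telco-edge-muc", "country": "DE", "latency_ms": 4.0},
    {"name": "region-eu-west", "country": "IE", "latency_ms": 35.0},
]

def place(req, substations):
    """Pick a compliant substation; a provider would do this transparently."""
    candidates = [s for s in substations
                  if s["country"] == req.residency
                  and s["latency_ms"] <= req.max_latency_ms]
    if not candidates:
        return None  # no substation satisfies both policy and latency
    return min(candidates, key=lambda s: s["latency_ms"])["name"]

req = PlacementRequest(service="inference", residency="DE", max_latency_ms=10.0)
print(place(req, SUBSTATIONS))  # → telco-edge-muc
```

The customer states constraints rather than locations; the provider, owning the catalog and the control plane, resolves the request to a concrete substation.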


Figure 3. Distributed Cloud Will Happen in Two Phases


Within the distributed cloud model, some problematic issues must be addressed before the model can come to fruition. For example:

  1. How much of the public cloud capabilities will be available on the distributed cloud substation? While this is not definitional to distributed cloud, it is a factor in deciding what flavor of distributed cloud will be necessary — fully distributed or partially distributed?

  2. What custom scenarios for distributed cloud substations will emerge (for example, an Oracle data flavor, an IoT flavor as in the continuum Style 4, or perhaps a media flavor of substation)?

  3. If distributed cloud substations are opened to near-neighbor companies, who pays for the increased bandwidth necessary for effective operation, and how?

  4. How will revenue models play out for sharing substations across multiple companies? For example, do near-neighbor users of a substation pay the originating cloud provider or the enterprise that originally requested the substation be installed?

  5. Must a distributed cloud substation always be connected, or can it operate with variable connectivity?

Other potential challenges include:

  1. Enterprises may remain unwilling to let go and trust cloud providers within the walls of their enterprise and potentially behind their firewalls.

  2. If an enterprise takes ownership and exerts more control and management of the service, whether because of the service architecture or a desire for that control, then it is not truly a distributed cloud service, and many of the cloud value propositions will be limited.

  3. Delays in bringing solutions to market may dampen enthusiasm. For example, Azure Stack took three tries and more than five years to get where it is today, and AWS Outposts saw few updates in the 11 months after its announcement.


Distributed Cloud Can Fix Some of What Hybrid Cloud Breaks

Because distributed cloud substations are the responsibility (that is, owned, operated, governed, updated, and evolved) of the originating public cloud provider (from hardware up through services), the key cloud value propositions remain intact:

  1. The responsibility for the implementation being in the hands of the public cloud provider keeps the substation in sync with public cloud functionality.

  2. Maintaining a consistent control plane and full management responsibility for the services throughout the entire relationship increases leverage and productivity.

  3. Tethered connections (both intermittent and permanent) to the substations allow those substations to expand into the public cloud as needed.

  4. The responsibility and work of operating, maintaining, and supporting hardware and software infrastructure is shifted to cloud providers.

As such, organizations that use distributed cloud substations can still:

  1. Leverage the economics of cloud elasticity (scaling up and down) from a large pool of shared resources.

  2. Benefit from the pace of innovation in sync with the public cloud providers.

  3. Leverage the economics of global hyperscale services.

  4. Leverage the skills and ecosystems of large cloud providers in securing and operating world-class services.

It could be argued that the vast majority of private cloud computing has failed to deliver on the full promise of cloud computing; hybrid cloud efforts must therefore strive to avoid the same problems. Distributed cloud does this by packaging cloud on both sides of the hybrid scenario and maintaining consistent responsibilities, with a corresponding lack of private control. In many ways, this gives the customer an “easy button” for engaging with hybrid cloud (and private cloud) while still progressing toward the value proposition of full public cloud computing.



In the eyes of CIOs, the distributed cloud concept will guide the roadmap for cloud evolution. Those seeking new opportunities to reach customers who are looking to extend the full public cloud value proposition to dispersed environments and who need location-specific services (including reduced latency) will benefit. Using Phase 1 like-for-like hybrids without sacrificing cloud value propositions will be essential to the long-term stability of hybrid cloud computing. Thus, distributed cloud not only ushers in a next generation of cloud in Phase 2, but it also helps build a firmer foundation for hybrid as it exists today.
