In the networking posts of this series, we first introduced hybrid connectivity between on-premises environments and Google Cloud and then walked through a typical network topology.
In the last blog post, we discussed aspects of network design within a Google Cloud Landing Zone and introduced concepts such as Private Google Access, Private Google Access for on-premises hosts, Private Service Access, and Private Service Connect.
Now it’s time to dive deeper into network designs, this time looking at more end-to-end scenarios. The following are typical examples.
Calling a private Cloud Function from on-premises
Many customers would like to use Cloud Functions but also want to call them from on-premises.
For this scenario, one way to achieve that is to create a Private Service Connect (PSC) endpoint. If you use VPC Service Controls and deploy your function to allow internal traffic only, you can secure your HTTP functions so that they can only be called by resources in the same Google Cloud project or VPC Service Controls service perimeter. Because the PSC endpoint is reachable over the hybrid connection, the function can also be called from on-premises.
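A minimal sketch with gcloud could look as follows. All names, regions, and IP addresses are illustrative, and we use the vpc-sc API bundle because VPC Service Controls is involved; without a perimeter, all-apis would be the usual choice:

```bash
# Deploy the function so that it only accepts traffic from inside
# the VPC network / service perimeter.
gcloud functions deploy internal-fn \
    --gen2 \
    --region=europe-west3 \
    --runtime=python312 \
    --trigger-http \
    --ingress-settings=internal-only \
    --source=.

# Reserve an internal IP address for the PSC endpoint.
gcloud compute addresses create psc-googleapis-ip \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.100.0.2 \
    --network=my-vpc

# Create the PSC endpoint (a global forwarding rule) that fronts
# Google APIs; on-premises hosts reach the function via this IP.
gcloud compute forwarding-rules create pscgoogleapis \
    --global \
    --network=my-vpc \
    --address=psc-googleapis-ip \
    --target-google-apis-bundle=vpc-sc
```

For this to work end to end, on-premises DNS must resolve the function’s hostname (its cloudfunctions.net or run.app URL) to the PSC endpoint address, and a route to that address must be advertised towards on-premises.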
Hybrid connectivity to on-premises services through PSC and Network Endpoint Groups (NEG)
PSC can also be used to call on-premises services. For that use case, NEGs are used: a network endpoint group (NEG) is a configuration object that specifies a group of backend endpoints or services. With NEGs, Google Cloud load balancers can serve virtual machine (VM) instance group-based workloads, serverless workloads, and containerized workloads. A hybrid NEG contains one or more endpoints that resolve to on-premises services, server applications in another cloud, or other internet-reachable services outside Google Cloud. Setting up NEGs and hybrid load balancing thus lets you bring the benefits of Cloud Load Balancing to services running on your existing infrastructure outside of Google Cloud.
This setup is also suitable if you run an application in a Kubernetes cluster that needs to access on-premises resources, and you want to open the on-premises firewall for a single dedicated IP instead of a whole range, which would otherwise be needed for the GKE nodes (their IPs change whenever nodes are updated or replaced).
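As a rough sketch with gcloud (names, zones, and IPs are illustrative, and we assume hybrid connectivity via Cloud VPN or Interconnect is already in place), the on-premises service is first wrapped in a hybrid NEG and then attached to a load balancer’s backend service:

```bash
# Hybrid NEG that holds the on-premises endpoint (an IP:port
# reachable over the hybrid connection).
gcloud compute network-endpoint-groups create on-prem-neg \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
    --zone=europe-west3-a \
    --network=my-vpc

gcloud compute network-endpoint-groups update on-prem-neg \
    --zone=europe-west3-a \
    --add-endpoint="ip=10.20.0.15,port=443"

# Regional health check and backend service for an internal
# Application Load Balancer.
gcloud compute health-checks create https on-prem-hc \
    --region=europe-west3 \
    --port=443

gcloud compute backend-services create on-prem-service \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTPS \
    --region=europe-west3 \
    --health-checks=on-prem-hc \
    --health-checks-region=europe-west3

gcloud compute backend-services add-backend on-prem-service \
    --region=europe-west3 \
    --network-endpoint-group=on-prem-neg \
    --network-endpoint-group-zone=europe-west3-a \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=100
```

Note that the on-premises firewall must allow the load balancer’s health-check probes to reach the endpoint, otherwise the backend is marked unhealthy.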
The following picture shows such a scenario:
Multi-regional high-availability and disaster recovery
As enterprises move workloads onto the public cloud, they need to translate their understanding of building resilient on-premises systems to the hyperscale infrastructure of cloud providers like Google Cloud. Industry-standard disaster recovery concepts such as RTO (Recovery Time Objective, the maximum tolerable downtime) and RPO (Recovery Point Objective, the maximum tolerable data loss) apply here as well.
Google Cloud is designed for resilience, but not every service is highly available by default. While BigQuery, for example, can store its data in multiple regions and hence supports replication, this is not the case for services like Compute Engine. For such services, high availability has to be designed into the network architecture. In the following, we show a sample architecture for a microservices workload. Bear in mind, however, that this architecture is not suitable for all use cases.
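To make the idea more concrete, here is a minimal sketch (all names, regions, and sizes are illustrative) of a multi-regional frontend: regional managed instance groups in two regions behind a global external Application Load Balancer, which steers traffic away from an unhealthy region:

```bash
# Regional managed instance groups in two regions, built from a
# hypothetical instance template "web-template".
gcloud compute instance-groups managed create web-mig-eu \
    --region=europe-west3 --template=web-template --size=2
gcloud compute instance-groups managed create web-mig-us \
    --region=us-central1 --template=web-template --size=2

# One global backend service with a health check; the load
# balancer fails over between regions based on health and capacity.
gcloud compute health-checks create http web-hc --port=80

gcloud compute backend-services create web-backend \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=web-hc

gcloud compute backend-services add-backend web-backend --global \
    --instance-group=web-mig-eu --instance-group-region=europe-west3 \
    --balancing-mode=UTILIZATION
gcloud compute backend-services add-backend web-backend --global \
    --instance-group=web-mig-us --instance-group-region=us-central1 \
    --balancing-mode=UTILIZATION
```

A URL map, target proxy, and global forwarding rule then expose the backend service on a single anycast IP, so a regional failover does not require any DNS changes.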
Secure Internet-facing applications
For such applications, the objective is to expose them to the internet securely while protecting them against threats and ensuring controlled access. The following components can be used (a configuration sketch follows the list):
- Cloud Armor: A web application firewall (WAF) and DDoS protection service that helps safeguard your applications against distributed denial-of-service attacks and other web-based threats.
- Identity-Aware Proxy (IAP): Provides granular access control to your application by verifying user identity and context before allowing access.
- Cloud Load Balancing: Distributes incoming traffic across multiple instances or backend services to ensure high availability and reliability.
- Firewall Rules: Network-level security controls to define what traffic is allowed or denied to your application.
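As a sketch, the security layer could be wired up as follows; the policy and backend service names are illustrative, and the OAuth client used by IAP must already exist:

```bash
# Cloud Armor policy with an example rule that blocks requests
# matching Google's preconfigured SQL-injection signatures.
gcloud compute security-policies create edge-policy \
    --description="WAF and DDoS policy for the web frontend"

gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --expression="evaluatePreconfiguredExpr('sqli-stable')" \
    --action=deny-403

# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update web-backend \
    --global \
    --security-policy=edge-policy

# Enforce authenticated access in front of the backend with IAP.
gcloud compute backend-services update web-backend \
    --global \
    --iap=enabled,oauth2-client-id=CLIENT_ID,oauth2-client-secret=CLIENT_SECRET
```

Firewall rules should additionally block direct access to the backend instances, so that traffic can only enter through the load balancer.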
The following picture shows a sample architecture:
That’s it for today. In the next blog post, we will talk about DNS.
> Click here for Part 9: DNS Design