Internal Developer Platforms – Part 12: Backstage Entities and OpenAPI Example

In the last blog post, we discussed the various entity types within Backstage's Software Catalog. As plain theory is sometimes quite abstract, we will continue with a concrete example and show how to publish the API of a simple Java Spring REST service in Backstage, using the OpenAPI specification format.

The sample app

As we are using Spring Boot, we can create a simple application with Spring Initializr. We open the page and provide the input shown in the screenshot below:

The image shows the Spring Initializr web interface configured to create a new Spring Boot project. The project is set to use Maven for build automation, Java as the programming language, and Spring Boot version 3.2.5. Project metadata includes group ID 'cloud.sclabs.backstage', artifact ID 'sample-app', name 'sample-app', description 'Demo project for Spring Boot', and the package name 'cloud.sclabs.backstage.sample-app'. The project packaging is set to 'Jar' and the Java version is 17. Dependencies include 'Spring Web' for building web applications using Spring MVC.

Once unpacked, we can write some simple logic for the RestController.

package cloud.sclabs.backstage.sampleapp;

import java.util.List;

import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class EmployeeController {

  // GET /employees – return all employee names (hard-coded for the demo)
  @GetMapping("/employees")
  List<String> all() {
    return List.of("Bilbo Baggins", "Frodo Baggins");
  }

  // GET /employees/{id} – return a single employee
  @GetMapping("/employees/{id}")
  Employee one(@PathVariable Long id) {
    return new Employee(id, "Bilbo Baggins");
  }

  // PUT /employees/{id} – replace an existing employee
  @PutMapping("/employees/{id}")
  Employee replaceEmployee(@RequestBody Employee newEmployee, @PathVariable Long id) {
    return newEmployee;
  }

  // DELETE /employees/{id} – delete an employee
  @DeleteMapping("/employees/{id}")
  void deleteEmployee(@PathVariable Long id) {
  }
}

// Minimal Employee type, assumed here so the snippet compiles
record Employee(Long id, String name) {}

To create the OpenAPI documentation, we simply add the following dependency to the classpath:
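The dependency itself is not shown above; for a Spring Boot 3 application the commonly used choice is springdoc-openapi (the version below is illustrative, pick the latest release):

```xml
<!-- springdoc generates the OpenAPI document at /v3/api-docs
     and serves the Swagger UI at /swagger-ui/index.html -->
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.5.0</version>
</dependency>
```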


Once we have built and started our application (./mvnw clean spring-boot:run), we can see the documentation by opening the following URL: http://localhost:8080/swagger-ui/index.html

Backstage Entities

Now it is time to write some YAML for the Backstage entities. We create a backstage folder within the repository and add the following directories and files (mkdocs will be explained within the next blog post):

The image shows a directory structure in a terminal for a project related to Spotify Backstage. The structure includes several YAML files and directories. At the root level, there are 'all-apis.yaml', 'all-components.yaml', and 'all.yaml'. There are also 'apis' and 'components' directories, each containing a respective YAML file: 'employees-api.yaml' in 'apis' and 'employees-component.yaml' in 'components'. Additionally, there is a 'docs' directory with several Markdown files. The root directory also contains 'mkdocs.yaml'.

Let's take a look at the YAML files. Our entry point is the all.yaml:

apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
  name: backstage-sample-app
  description: A collection of all Backstage entities
spec:
  targets:
    - ./all-apis.yaml
    - ./all-components.yaml
---
apiVersion: backstage.io/v1alpha1
kind: System
metadata:
  name: backstage-sample-app
  annotations:
    backstage.io/techdocs-ref: dir:.
    github.com/project-slug: gsoeldner/backstage-sample-app
spec:
  owner: user:default/guido.soeldner

Within the YAML, we define a location and load the all-apis.yaml and the all-components.yaml.

The location entity all-apis.yaml covers the different APIs – in our case just a simple one:

apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
  name: event-sourcing-poc-apis
  description: A collection of all Backstage event-sourcing-poc APIs
spec:
  targets:
    - ./apis/employees-api.yaml

Finally, we have the employees-api.yaml:

apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: employees-api
  description: The employees API
  tags:
    - employees
    - query
    - rest
  links:
    - url:
      title: Server Root
      icon: github
spec:
  type: openapi
  lifecycle: experimental
  owner: user:default/guido.soeldner
  definition: |
    SPEC from Spring APP

The last line covers the Spring app's OpenAPI spec. The easiest way is to retrieve the API as JSON (http://localhost:8080/v3/api-docs) and replace the line in the sample.
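When pasting the spec into the YAML, every line must be indented so that it nests under `definition: |`. A small sketch (the endpoint URL is the springdoc default; the demo below uses an inline snippet instead of the live endpoint):

```shell
# With the app running you would pipe the live spec:
#   curl -s http://localhost:8080/v3/api-docs | sed 's/^/    /'
# sed prefixes every line with four spaces so the JSON nests under "definition: |".
# Demo with an inline snippet instead of the live endpoint:
printf '{\n  "openapi": "3.0.1"\n}\n' | sed 's/^/    /'
```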

Of course we also need to define the components:

apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
  name: event-sourcing-poc-components
  description: A collection of all Backstage event-sourcing-poc components
spec:
  targets:
    - ./components/employees-component.yaml
---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: employees-component
  description: command side of order aggregate
  annotations:
    backstage.io/techdocs-ref: url:
  links:
    - url:
      title: Server Root
      icon: github
spec:
  type: service
  lifecycle: experimental
  system: examples
  owner: user:default/guido.soeldner
  providesApis:
    - employees-api

Once everything is done, we can easily import the definitions into the Software Catalog.

We can then browse all the APIs, similar to a developer portal such as the Google API docs.

The image shows an interface of the Spotify Backstage plugin for managing an API called 'employees-api' under the Soeldner Consult brand. The interface is in the 'Definition' tab of the 'employees-api'. It displays the OpenAPI definition, version 0, compliant with OAS 3.0. The server URL is set to 'http://localhost:8080'. Below, it lists the endpoints for the 'employee-controller' with three methods: GET, PUT, and DELETE, each for '/employees/{id}', along with parameters required for the requests. The sidebar on the left includes options such as Home, APIs, Docs, Create, and Tech Radar.

Google Cloud Landing Zone Series – Part 7: Network Design

In the last blog post, we showed how to establish connectivity between the on-premises network and the Google Cloud Landing Zone. Now it is time to talk about some network concepts. Networking is at the core of a Landing Zone, so there is plenty to discuss. We will split the topic across two blog posts: in this one we introduce the most important network concepts, and in the next we introduce various architectural designs for different scenarios. We assume that you have a basic understanding of cloud networking.

In detail, this blog post will introduce the following networking components:

  • Private Google Access
  • Private Google Access for on-premises hosts
  • Private Service Access
  • Private Service Connect

Private Google Access

Private Google Access (PGA) allows instances in a Virtual Private Cloud (VPC) network to connect to Google APIs and services through internal IP addresses rather than using external IP addresses. This capability ensures secure and private communication between your Google Cloud resources and Google APIs and services without the need for public IP addresses or NAT (Network Address Translation) gateways.

Why should we use it?

1. Security and Privacy: By using internal IP addresses, traffic remains within Google’s network, enhancing security and privacy. For example, applications running on Google Cloud can securely access Google services like Cloud Storage, BigQuery, or Pub/Sub.

2. No Public IP Required: Instances without public IP addresses can still access Google APIs and services.

3. Cost-Effective: Helps reduce the costs associated with managing and securing public IP addresses, and reduces reliance on NAT gateways.
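Private Google Access is enabled per subnet. A minimal gcloud sketch (subnet and region names are placeholders taken from the diagram below):

```shell
# Enable Private Google Access on an existing subnet
# (subnet-a / us-west1 are placeholder names)
gcloud compute networks subnets update subnet-a \
    --region=us-west1 \
    --enable-private-ip-google-access
```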

The following picture shows an implementation of Private Google Access:

The image shows a network diagram depicting the Google Cloud Platform (GCP) architecture for a "Landing Zone."

At the top, the Internet is connected to Google APIs and Services via public IP addresses. Below that is the main project with a VPC network that includes an Internet Gateway and VPC Routing.

The VPC network consists of two regions: us-west1 and us-east1.

In the us-west1 region, there are two virtual machines (VMs) in subnet-a, where Private Google Access is enabled. VM A1 has only an internal IP address, while VM A2 additionally has a public IP.
In the us-east1 region, there are two virtual machines (VMs) in subnet-b, where Private Google Access is disabled. VM B1 has only an internal IP address, while VM B2 additionally has a public IP.
The diagram uses colored lines to indicate traffic paths: green for traffic to Google APIs and Services, and yellow for traffic to the Internet.

Private Google Access for on-premises hosts

Private Google Access for on-premises hosts extends the capability of Private Google Access to on-premises environments. This feature allows on-premises hosts to access Google APIs and services privately, over internal IP addresses, without exposing the traffic to the public internet.

Why should we use it?

1. Secure and Private Access: On-premises hosts can securely access Google Cloud services via internal IP addresses.

2. No Public IPs Required: Similar to PGA for VPC networks, it eliminates the need for public IP addresses for on-premises hosts.

3. Hybrid Cloud Integration: Facilitates seamless integration between on-premises data centers and Google Cloud services.

How does it work?

To configure Private Google Access for on-premises hosts, a few steps are required:

1. Establish a Secure Connection: Use Cloud Interconnect or VPN to connect your on-premises network to your Google Cloud VPC network.

2. Configure DNS: Ensure that DNS queries for Google APIs resolve to private IP addresses.

3. Enable Private Google Access: Make sure Private Google Access is enabled on the relevant VPC subnets in Google Cloud.

4. Update Routing: Configure routing to direct traffic from on-premises hosts to Google Cloud services via the secure connection.
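The DNS step (2) can be sketched with Cloud DNS as follows; the zone and network names are placeholders, and the A records point to Google's documented restricted.googleapis.com range:

```shell
# Private zone so that googleapis.com names resolve inside the VPC
gcloud dns managed-zones create google-apis \
    --description="Private zone for Google APIs" \
    --dns-name=googleapis.com \
    --visibility=private \
    --networks=my-vpc

# restricted.googleapis.com resolves to the restricted VIP range
gcloud dns record-sets create restricted.googleapis.com. \
    --zone=google-apis --type=A --ttl=300 \
    --rrdatas=,,,

# All other googleapis.com names are aliased onto it
gcloud dns record-sets create "*.googleapis.com." \
    --zone=google-apis --type=CNAME --ttl=300 \
    --rrdatas=restricted.googleapis.com.
```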

The following picture shows the implementation:

The image depicts a network architecture diagram for a Google Cloud Platform (GCP) landing zone with an on-premises network.

At the top, the on-premises network includes subnets and resources connected to an on-premises VPN gateway with an external IP address that serves as the BGP endpoint. A VPN tunnel carries encrypted traffic across the Internet.

Below, within the GCP project "my-project," there is a VPC network with an Internet Gateway connected to VPC Routing and a Routing Table. In the us-east1 region, a Cloud VPN Gateway with a regional external IP is connected to a Cloud Router. This setup communicates with the on-premises VPN gateway via the VPN tunnel.

There is also a restricted range for Google APIs and Services ( connected within the VPC network. The Cloud Router advertises this range toward on-premises. A DNS CNAME maps *.googleapis.com to restricted.googleapis.com for secure access to Google services. The diagram uses colored lines to indicate different traffic paths: green for internal routing, red for encrypted VPN traffic, and separate connections to the Internet.

Traffic from on-premises hosts to Google APIs travels through the tunnel to the VPC network. After traffic reaches the VPC network, it is sent through a route that uses the default internet gateway as its next hop. This next hop allows traffic to leave the VPC network and be delivered to restricted.googleapis.com (

Private Service Access

Private Service Access allows you to connect your Virtual Private Cloud (VPC) networks to Google-managed services such as Cloud SQL, AI Platform, and other Google APIs in a secure and private manner. The connection is made over internal IP addresses, ensuring that traffic does not traverse the public internet.

Why should we use it?

1. Private Connectivity: Establishes private connectivity between your VPC network and Google-managed services, avoiding public internet.

2. Enhanced Security: Keeps data traffic secure within the Google Cloud network.

3. Simplified Network Management: Reduces the complexity of managing firewall rules and NAT gateways for service access.

How does it work?

Private Service Access involves setting up private connections from your VPC to Google-managed services using VPC peering.

VPC Peering allows networks to communicate internally using private IP addresses without the need for public IPs or additional firewall rules.
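The setup boils down to two commands: allocate an internal range, then peer it with the service producer's network. A sketch (network and range names are placeholders):

```shell
# Allocate an internal range that Google-managed services may use
gcloud compute addresses create google-managed-services-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=my-vpc

# Create the private connection (VPC peering) to the service producer
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-range \
    --network=my-vpc
```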

The following picture shows the implementation:

The image depicts a network architecture diagram for a Google Cloud Platform (GCP) landing zone with a customer project and a service producer project.

On the left, the customer project includes a Customer VPC network in the us-central1 region with a virtual machine (VM1) in a subnet. There is also an allocated address range for private connections.

On the right, the service producer project for the customer includes a Service Producer VPC network. In the us-central1 region, it contains a database instance (DB1) in a subnet used for Cloud SQL. In the europe-west1 region, there is another resource in a subnet used for another service.

The two projects are connected via VPC Network Peering, allowing private services access traffic between the customer project and the service producer project. The green lines indicate the paths for private services access traffic.

In the diagram, the customer VPC network allocated an address range for Google services and established a private connection that uses this range. Each Google service creates a subnet from the allocated block to provision new resources in a given region, such as Cloud SQL instances.

Private Service Connect

Private Service Connect allows you to securely and privately access Google services, third-party services, and your own services through private IP addresses. It ensures that the traffic between your Virtual Private Cloud (VPC) network and these services does not traverse the public internet, thereby enhancing security and performance.

Why should we use it?

1. Private Connectivity: Establishes private connections using internal IP addresses, avoiding public internet exposure.

2. Enhanced Security: Protects data by keeping it within Google’s network, reducing the risk of external threats.

3. Simplified Network Configuration: Streamlines the process of connecting to Google services, third-party services, and your own services.

4. Service Access Control: Allows granular access control and policy management for services.

5. Load Balancing: Supports integration with Google Cloud’s load balancing services to distribute traffic efficiently.

How does it work?

Private Service Connect creates endpoints in your VPC network that serve as entry points to the service you want to access. These endpoints use internal IP addresses, ensuring that the communication remains within the private network.
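Creating such an endpoint for Google APIs can be sketched as follows; the address value and all names are placeholders (note that PSC endpoint names for Google APIs may only contain lowercase letters and numbers):

```shell
# Reserve an internal address for the endpoint ( is a placeholder)
gcloud compute addresses create pscgoogleapisip \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses= \
    --network=my-vpc

# Create the endpoint; the all-apis bundle covers most Google APIs
gcloud compute forwarding-rules create pscgoogleapis \
    --global \
    --network=my-vpc \
    --address=pscgoogleapisip \
    --target-google-apis-bundle=all-apis
```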

The following picture shows this in more detail:

The image depicts a network architecture diagram for a Google Cloud Platform (GCP) landing zone utilizing Private Service Connect.

On the left, the Consumer VPC includes various clients accessing different types of Private Service Connect endpoints:

These connect through the central Private Service Connect, represented by a secure lock symbol.

On the right, the Producer VPC offers published services, categorized into:

Google services
Third-party services
Intra-org services
Above, managed services like Google APIs are also accessible via Private Service Connect. The diagram illustrates the secure, private connection paths between consumer clients and various managed and published services within GCP.

Internal Developer Platforms – Part 11: Backstage Entities

In the last blog post, we showed how to register software entities in the Software Catalog and saw that there are different kinds of entities.

As a recap, the Backstage Software Catalog is a centralized system designed to manage and track ownership and metadata for all software within an ecosystem, including services, websites, libraries, and data pipelines. This catalog uses metadata YAML files, stored with the code, which are collected and displayed in Backstage, facilitating easy management and visualization.

Backstage and the Backstage Software Catalog make it easy for one team to manage 10 services — and makes it possible for your company to manage thousands of them.

In detail, the Software Catalog supports two primary use-cases:

1. Management and Maintenance: It provides teams with a consistent view of all their software assets, regardless of type—services, libraries, websites, or machine learning models. This enables teams to efficiently manage and maintain their software.

2. Discovery and Ownership: The catalog ensures all software within a company is easily discoverable and clearly associated with its respective owners, eliminating issues related to "orphan" software that may otherwise be overlooked or lost within the broader ecosystem.

Entity Overview

Overall, Backstage and its Software Catalog simplify the management of numerous services, making it feasible for a single team to oversee many services and for a company to handle thousands.

Now it’s time to address these entities, which include Components, Templates, APIs, Resources, and Systems among others. Each entity type has its specific descriptors:

  • Component: This type refers to a software component, usually closely tied to its source code, and is meant to be viewed as a deployable unit by developers. It typically comes with its own deployable artifact.
  • Template: Entities registered as Templates in Backstage have descriptor files that contain metadata, parameters, and the procedural steps required when executing a template.
  • API: This type covers interfaces that generally provide external endpoints, facilitating communication with other software systems.
  • Resource: Describes types that act as infrastructure resources, which usually provide the foundational technical elements of a system, such as databases or server clusters.
  • System: Unlike the singular definition of components, entities marked as Systems represent a collection of resources and components, meaning a system may encompass multiple other entities. The key advantage of this model is that it conceals the internal resources and private APIs from consumers, allowing system owners to modify components and resources as needed.
  • Domain: While Systems serve as a fundamental method for encapsulating related entities, for enhanced organizational clarity and coherence it is often beneficial to group multiple systems that share common characteristics into a bounded context. These characteristics can include shared terminology, domain models, metrics, key performance indicators (KPIs), business purposes, or documentation.

Interestingly, there are also organizational entities:

  • User: A user describes a person, such as an employee, a contractor, or similar.
  • Group: Describes an organizational entity, such as for example a team, a business unit, or a loose collection of people in an interest group.
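As a sketch, such organizational entities could look like this in YAML (the team and user names are illustrative):

```yaml
apiVersion: backstage.io/v1alpha1
kind: Group
metadata:
  name: platform-team
spec:
  type: team
  children: []
---
apiVersion: backstage.io/v1alpha1
kind: User
metadata:
  name: jane.doe
spec:
  memberOf: [platform-team]
```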

Entity Details

Entities in Backstage are written in YAML and basically consist of a metadata and a spec section.

The metadata section consists of the following fields:

  • metadata.name: A required field that specifies the name of an entity.
  • metadata.namespace: Used for defining the namespace of the entity and for classifying entities.
  • metadata.annotations: Primarily for listing references to external systems, such as links to a GitHub repository.
  • metadata.links: Specifies which links are displayed in the "Overview" tab of the entity's page in Backstage.

In contrast, the spec field's structure and content depend on the entity type selected in the "kind" key. It determines how an entry is categorized in the software catalog and the possible relationships among entities.

For the entity type "Component," the spec fields include spec.type, spec.lifecycle, and spec.system, among other relationship types. The fields within the spec section define essential attributes of the entity, such as the lifecycle stage (e.g., active, production, deprecated) and the entity's owner, which is typically a person, team, or organizational unit responsible for the entity's maintenance and development.

As covering every field would make this blog post too long, we refer to the documentation, where all the manifests are described in detail.

System Model Overview

All together these elements form a complete system model, which is shown in the following architecture diagram:

This diagram provides a comprehensive view of Spotify's Backstage entities and their relationships.

Key Entities:

Template: Defines parameters used in the frontend and steps executed in the scaffolding process.
Location: References other places for catalog data.
Domain (Orange box): Represents domain models, metrics, KPIs, and business purposes.
System (Yellow box): A collection of entities working together to perform a function.
API (Green box): Represents different APIs, including OpenAPI, gRPC, Avro, etc.
Resource (Light Green box): Contains resources such as SQL databases, S3 buckets.
Component (Light Blue box): Backend services, data pipelines, and similar components.
Group (Blue box): Groups related by type (team, business-unit, product-area).
User (Blue box): Represents users belonging to groups.

Relationships shown in the diagram:

System: part of a Domain; depends on Resources and Components.
API: part of a System; provided and consumed by Components.
Resource: part of a System; types include database, S3-bucket, cluster.
Component: part of a System; depends on other Components; provides and consumes APIs; types include service, website, library; owned by a Group or User.
Group: has members and sub-groups.
User: member of a Group.
The diagram uses different colored boxes for distinct entity types and directional arrows to represent relationships.


Relations between entities always involve two parties, each assuming a specific role within the relationship. These relationships are directional, featuring a source and a target. In a YAML file, the source entity defines the type of relationship as the key name, while the target entity is specified as the value assigned to this key. For example, in the YAML file of a Component type entity, relationships like `dependsOn`, `providesApis`, `consumesApis`, and `subComponentOf` (noted as `partOf` in diagrams) can be defined as keys, followed by an entity reference according to the previously described pattern.

Each entity in the relationship has a corresponding opposite role, which need not be defined in the YAML file but is used in queries or visualizations of relationships. For instance, if Component A has a relationship role `providesApis`, the referenced Component B would assume the opposite role `apiProvidedBy`.

Let’s recap our example from before:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: exampleappfrontend
  description: Simple website
  links:
    - url:
      title: ExampleApp
      icon: web
  annotations:
    github.com/project-slug: sclabs/exampleApp/frontend/
    backstage.io/techdocs-ref: dir:.
spec:
  type: Website
  lifecycle: production
  owner: Joseph Cooper
  system: exampleapp
  consumesApis: ['component:exampleappservice']

In this application, the `exampleappfrontend` in its descriptor file might have a key-value pair `consumesApis: ['component:exampleappservice']`, indicating a reference to the `exampleappservice` component, which serves as a backend providing an API.

Entity Lifecycle

In the Backstage Software Catalog, the process for registering entities follows a standardized technical flow, regardless of how the entities are initially registered. This flow can be visualized in a comprehensive diagram, often referred to as "The Life of an Entity."

This diagram represents the data flow of Spotify's Backstage catalog ingestion, processing, and stitching pipeline.

Pipeline Stages:

External Sources (Red box): The origin of entity data.
Entity Providers (Green box): Components that ingest data from external sources.
Unprocessed Entities (Yellow box): Entities directly fetched from providers.
Edges (Yellow box): Relationships extracted between unprocessed entities.
Processors (Green box): Modules that transform unprocessed entities.
Processed Entities (Yellow box): Entities after processing.
Relations (Yellow box): Extracted relationships among processed entities.
Errors (Yellow box): Issues detected during processing.
Stitcher (Green box): Combines processed entities and relationships into the final set.
Final Entities (Yellow box): Fully processed entities ready for use.
Search (Yellow box): Indexes entities for quick searching.
Catalog API (Red box): Serves the final entities via an API.
Directional arrows represent the flow of entities through the different stages and components.
  • Entity Ingestion and Provider: The process begins with the "Entity Provider," which collects raw entity data from specified sources and translates it into unprocessed entity objects. To avoid duplicates, the database tracks which provider has ingested which entity. This stage also includes an initial validation to ensure critical fields such as 'kind' and 'metadata.name' are present.
  • Entity Processing: The next step involves processing these unprocessed entities. This includes validation and further processing through "Policies" and "Processors." Policies are sets of rules for validation, while Processors apply these rules to validate the entities. This step may involve exporting relationships, error messages, or the entity itself from the raw data.
  • Entity Stitching: The final step is "Stitching," where processed entities, along with any error messages and relationships, are retrieved from the database and combined into the final entity that will be used in the Software Catalog. This process considers relationships that might be defined in other entities and handles any error messages, displaying them within the catalog as necessary.

Throughout these steps, developers have the flexibility to implement custom Providers and Processors to fetch entities from unique sources or at specific intervals, though some system constraints like processing intervals are predefined. The recommended method for automating entity ingestion in the catalog is through Custom Entity Providers. Once all sub-steps are completed, the final entities are stored in the internal database and presented in the Software Catalog, ready for use.
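A custom Entity Provider can be sketched as follows. The interface names mirror the Backstage backend API, but they are stubbed inline here so the example is self-contained; a real plugin would import EntityProvider and EntityProviderConnection from `@backstage/plugin-catalog-node`:

```typescript
// Minimal stand-ins for the Backstage types (assumption: real plugins
// import these from @backstage/plugin-catalog-node instead)
type Entity = {
  apiVersion: string;
  kind: string;
  metadata: { name: string };
};

interface EntityProviderConnection {
  // A 'full' mutation replaces every entity previously emitted by this provider
  applyMutation(mutation: { type: 'full'; entities: Entity[] }): Promise<void>;
}

interface EntityProvider {
  getProviderName(): string;
  connect(connection: EntityProviderConnection): Promise<void>;
}

// A provider that emits a fixed set of entities; a real implementation
// would fetch them from an external source, typically on a schedule.
class StaticEntityProvider implements EntityProvider {
  getProviderName(): string {
    return 'static-entity-provider';
  }

  async connect(connection: EntityProviderConnection): Promise<void> {
    await connection.applyMutation({
      type: 'full',
      entities: [
        {
          apiVersion: 'backstage.io/v1alpha1',
          kind: 'API',
          metadata: { name: 'employees-api' },
        },
      ],
    });
  }
}
```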

Google Cloud Landing Zone Series – Part 6: Connectivity

One of the most important things to consider when creating a Landing Zone is how connectivity will be implemented. Various options are possible, and as cloud technologies (including networking) evolve, a Landing Zone created in the past might need to be modernized as new technologies and services become available. The same applies to the future: a Landing Zone can only be built with the technologies available today, and when something new appears on the market, you may want to change or modernize parts of your Landing Zone – which, of course, includes connectivity and networking as well.

Most companies tend to implement a hybrid cloud model where some workload remains on-premises. In that case connectivity between the cloud and on-premises must be established.

Connectivity options

So, let's briefly introduce the different options:

First, there is Google Cloud Interconnect, which provides a high-speed, highly available connection directly to Google's network. There are two main types:

  • Dedicated Interconnect: This provides physical connections between your on-premises network and Google's network. It is suitable for high-volume, business-critical workloads that require high throughput and low latency. Google offers 10 Gbps and 100 Gbps circuits.
  • Partner Interconnect: If you want to start smaller but still want an Interconnect, Partner Interconnect might be the right solution. It allows you to connect to Google through a supported service provider. This is a more flexible and cost-effective option if you don't need the full scale of a dedicated connection.

On the other side, there is Cloud VPN: If you’re looking for a less expensive option than Interconnect and can tolerate the generally higher latency of internet-based connections, Google Cloud VPN is a good choice. It securely connects your on-premises network to your VPC (virtual private cloud) in GCP over the public internet using IPsec VPN tunnels.

If you start your cloud journey, you might consider starting with a VPN and changing later to an Interconnect.
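Setting up the VPN side can be sketched with two gcloud commands; all names, the region, and the ASN are placeholders:

```shell
# Create an HA VPN gateway in the VPC
gcloud compute vpn-gateways create my-ha-vpn-gw \
    --network=my-vpc \
    --region=europe-west3

# Create the Cloud Router that will exchange BGP routes with on-premises
gcloud compute routers create my-cloud-router \
    --network=my-vpc \
    --region=europe-west3 \
    --asn=65001
```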

What about MACsec?

MACsec (Media Access Control Security) is a security technology that provides secure communication for Ethernet traffic. It is designed to protect data as it travels on the point-to-point Ethernet links between supported devices or between a supported device and a host. In the context of Google Cloud and hybrid cloud setups, MACsec can be used with Dedicated Interconnect and Partner Interconnect.

Like VPN, MACsec encrypts traffic, and it is therefore recommended in combination with an Interconnect, since Interconnect itself does not encrypt traffic.

The following figure shows an architectural diagram for MACsec with a Dedicated Interconnect:

The diagram illustrates the network connectivity between Google Cloud and an on-premises network through a colocation facility.
Diagram showing Google Cloud Landing Zone connectivity. On the left, in a Google Cloud network (labeled my-network), a Compute Engine instance and a Cloud Router with a link-local address are connected. The Cloud Router is linked via the Google peering edge within a colocation facility (Zone 1). A Dedicated Interconnect labeled my-interconnect with MACsec encryption connects to the on-premises router with a link-local address in the on-premises network. A user device is connected to the on-premises router. The diagram shows seamless connectivity between the Google Cloud network and the on-premises network via a secure interconnect through the colocation facility.

In the picture, a VLAN attachment for Cloud Interconnect will be configured at the Cloud Router. Behind the scenes, Cloud Router uses Border Gateway Protocol (BGP) to exchange routes between your Virtual Private Cloud (VPC) network and your on-premises network.
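Creating such a VLAN attachment on a Dedicated Interconnect can be sketched as follows (all names and the region are placeholders):

```shell
# Create a VLAN attachment on the Dedicated Interconnect and bind it
# to the Cloud Router, which then handles the BGP session
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect=my-interconnect \
    --router=my-cloud-router \
    --region=us-east4
```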

More recently, MACsec can also be used with Partner Interconnect. The following picture depicts the architecture:

Diagram showing Google Cloud Landing Zone connectivity using a service provider network. On the left, in the Google Cloud network (labeled vpc1), a Compute Engine instance and a Cloud Router (ASN 16550) with a link-local address are connected. The Cloud Router is linked via the Google peering edge within a colocation facility (Zone 1). The Google peering edge connects securely to a service provider peering edge using MACsec encryption via an interconnect labeled my-interconnect (my-project1). The service provider peering edge leads to another service provider peering edge through the service provider network. Finally, the connection reaches the on-premises router with a link-local address in the on-premises network. A user device is connected to the on-premises router. The diagram demonstrates how the service provider network facilitates secure connectivity between Google Cloud and the on-premises network through the colocation facility.

Connectivity and Beyond

Having described how to establish connectivity between on-premises and Google Cloud with a Cloud Router, we now have to come up with a design for the workloads. As always, the design depends on your requirements, but two "flavors" are quite popular:

  • If you work with (Partner/Dedicated) Interconnect and use a few Shared VPCs – for example for stages like Test, Int, or Prod – a feasible option is to create dedicated MACsec connections, with a Cloud Router and a VLAN attachment in every Shared VPC. In that case, the different Shared VPCs are isolated from each other. If you need connectivity between the VPCs, you can still set up a VPN between them or use Private Service Connect to publish services to other VPCs. However, keep in mind that the number of VLAN attachments is limited (often between 10 and 15), so you are better off using Shared VPCs.
  • Another way is to set up a Transit VPC with a MACsec connection and use VPNs to connect to the other VPCs or Shared VPCs. This approach scales better, as you can have many more VPN connections than VLAN attachments.

While we have been discussing MACsec, basically the same considerations apply when using a VPN between on-premises and Google Cloud.

In addition, while it is also possible to create peerings between VPCs, please consider the limitations: there is a hard limit on the number of peerings, and there is no transitive routing across three or more VPCs.

Another possibility would be to use a third-party appliance for the connectivity. If you prefer such a solution, that can work, but you should check whether there is integration between the appliance and the Google Cloud Router – otherwise BGP routes cannot be exchanged.

What about Google Network Connectivity Center?

It is quite important to know that there is also a service called Network Connectivity Center. It is designed to act as a single place to manage global connectivity, providing elastic connectivity across Google Cloud, multicloud, and hybrid networks and giving deep visibility into Google Cloud and tight integration with third-party solutions.

For those of you who have experience with Microsoft Azure Virtual WAN or AWS Transit Gateway, it is interesting to learn that Google Network Connectivity Center is designed to work in a broadly similar way. However, not all features of the Network Connectivity Center are available yet, so we do not recommend it at this time and will most likely wait until 2025.

Illusive Networks, Fortinet FortiDeceptor, and Proofpoint Solutions (Shadow & Identity Threat Assessment)


In today’s era, dominated by digital threats, advanced cybersecurity strategies are essential. Illusive Networks (part of Proofpoint), Fortinet with its FortiDeceptor product, and Proofpoint’s Shadow and Identity Threat Assessment offer innovative solutions here. This blog analysis brings together the features and strategic roles of these technologies and highlights their importance in the field of Identity Threat Detection and Response (ITDR).

Illusive Networks and Fortinet FortiDeceptor: A Detailed Comparison, Including the Proofpoint Extensions

  1. Illusive Networks (part of Proofpoint):
     1. Core competency: Specializes in ITDR to combat identity-based cyber threats.
     2. Technology and features: Uses agentless technology and deception techniques to proactively detect and remediate identity risks.
     3. Integration with Proofpoint: The acquisition by Proofpoint strengthened Illusive’s position in the cybersecurity market through expanded ITDR capabilities and resources.
     4. Strategic advantage: Offers a broad spectrum of solutions geared towards detecting complex threats and protecting sensitive identity data.
  2. Fortinet FortiDeceptor:
     1. Focus: Concentrates on honeypot technologies to fend off cyberattacks through deception.
     2. Role in cybersecurity: Actively detects and responds to unauthorized activity using deception systems.
     3. Differentiators: Specializes in creating deception environments to trap attackers, complementing traditional security approaches.

Proofpoint ITDR Solutions: Shadow & Identity Threat Assessment

  • Proofpoint Shadow:
    • Benefits: Enables early detection of attackers, provides comprehensive threat investigations, and reduces false positives.
    • Approach: Creates a deception network on endpoints to detect and alert on lateral movement by attackers.
    • Technology: An agentless architecture that differs from traditional tools based on signatures or behavioral analysis.
  • Proofpoint Identity Threat Assessment:
    • Process: A SaaS-based process for quickly delivering actionable insights and uncovering security gaps.
    • Findings: Identifies risks such as unmanaged local admin access, misconfigured privileged credentials, and exposed admin accounts, which are frequently exploited in ransomware attacks and APTs (Advanced Persistent Threats).
    • Significance: Uncovers privileged identity risks on one in six enterprise endpoints, which plays an essential role in preventing security compromises.

Closing Thoughts

Illusive Networks, Fortinet FortiDeceptor, and Proofpoint’s Shadow and Identity Threat Assessment provide crucial solutions for the modern cybersecurity market. Illusive, now backed by Proofpoint, offers a broad spectrum of ITDR solutions. FortiDeceptor, in turn, delivers deep insight into attacker strategies through honeypot technology and works together with other Fortinet products via the Fortinet Security Fabric. The Proofpoint solutions complement these technologies with advanced deception techniques, precise threat analysis, and the identification of identity risks. Together, they form a comprehensive safety net against modern cyber threats and are essential for effective cybersecurity strategies that provide both preventive and reactive measures against a wide range of threats.


Illusive (now part of Proofpoint)

Proofpoint Spotlight

Proofpoint Shadow

Proofpoint Identity Threat Defense

Fortinet FortiDeceptor

FortiDeceptor – Innovative Deception Technology for OT Environments


In the world of Operational Technology (OT), security experts face unique challenges. Many OT systems cannot support conventional security agents, whether due to limited resources or because they operate in a certified, immutable state. This is where FortiDeceptor comes into play, an innovative solution designed specifically for the challenges of OT and IoT environments.

The Challenge in OT Networks

In OT networks, installing endpoint detection and response (EDR) solutions is often not possible, and network detection and response (NDR) can produce many false alarms. Many OT systems run on outdated operating systems, such as Windows 3.1 in the case of KUKA robots, which makes them vulnerable to cyberattacks.

FortiDeceptor: A Solution for OT Security

FortiDeceptor from Fortinet offers an elegant solution to these challenges. It is a deception technology that places fake systems (decoys) in the network to lure attackers and keep them away from critical systems.

Benefits of FortiDeceptor

  • Early threat detection: By using breadcrumbs and decoys, FortiDeceptor detects threats early and enables an automated response to protect both IT and OT segments.
  • Simple and fast deployment: Unlike other security solutions, FortiDeceptor requires no infrastructure changes and causes no operational disruption.
  • Centralized management: FortiDeceptor enables central administration of distributed deployments and offers an intuitive user interface for monitoring and configuration.
  • Integration into the Fortinet Security Fabric: FortiDeceptor integrates seamlessly with other Fortinet products, enabling a comprehensive and coherent security strategy.

Advanced Features and Areas of Use

FortiDeceptor offers advanced features for OT security:

  • Automatic discovery of network assets: FortiDeceptor automatically discovers network assets and recommends suitable decoys.
  • Support for SCADA decoys: It supports a wide range of SCADA protocols such as MODBUS, S7COMM, BACNET, and many others in order to simulate a realistic OT environment.
  • Integration with Fortinet products: FortiDeceptor integrates seamlessly with Fortinet products such as FortiGate, FortiNAC, FortiSOAR, FortiSIEM, FortiAnalyzer, and FortiSandbox to provide a comprehensive security solution.
  • Monitoring of attacker activity: FortiDeceptor enables the monitoring of incidents, events, and campaigns in order to understand attackers’ tactics and respond accordingly.

Integration into Non-Fortinet Environments

  • Versatile compatibility: FortiDeceptor integrates easily into non-Fortinet environments and can forward log information to third-party SIEM or SOAR systems.
  • Extending the security architecture: This flexibility allows companies to use FortiDeceptor as a complement to their existing security infrastructure, regardless of the systems in place.

Fast Installation and Immediate Results

  • Time-efficient setup: The installation of FortiDeceptor is completed in about 2-4 hours, allowing rapid deployment in any network.
  • Immediate effect: After installation, FortiDeceptor immediately starts detecting threats, enabling companies to react quickly to potential security risks.

FortiDeceptor Form Factors

FortiDeceptor is available in different form factors to meet varying deployment requirements:

  • FortiDeceptor VM: Ideal for flexible and scalable cloud or virtualized environments. Supports various hypervisors and offers a wide range of operating systems for decoy VMs.
  • FortiDeceptor Appliance: A dedicated hardware solution for on-premise environments, offering robust hardware specifications and high performance.
  • FortiDeceptor Rugged Appliance: Designed specifically for use in harsh or industrial environments, providing resilience and reliability in demanding industrial settings.


FortiDeceptor is an effective solution for tackling the unique security challenges of OT and IoT environments. With its ability to detect and divert attackers early, it offers a decisive advantage in the modern cybersecurity landscape. FortiDeceptor is an example of how innovative technology can be used to keep complex and specialized environments secure.


An Overview of Fortinet FortiDeceptor and Other Honeypot Systems

FortiDeceptor from Fortinet is a sophisticated honeypot system that plays an important role in the cybersecurity industry. Honeypots are security mechanisms that detect attacks, divert them, or otherwise trigger countermeasures against the unauthorized use of information systems. They are crucial for understanding and defending against cyberattacks: acting as bait, they distract potential attackers from more valuable targets and provide insight into their methods.

News and Mergers: Illusive Networks and Proofpoint Inc.

The acquisition of Illusive Networks by Proofpoint Inc. in 2022 marks an important development. Illusive, known for its agentless technology and deception techniques, has joined forces with Proofpoint to offer enhanced solutions in identity threat detection and response. This merger combines Illusive’s specialized deception technologies with Proofpoint’s extensive cybersecurity and compliance capabilities, creating a stronger offering in the ITDR space.

Regarding Cisco, Palo Alto Networks, and Check Point, it is worth noting that while they offer comprehensive security solutions, their focus on honeypot technology differs from specialized offerings such as FortiDeceptor and others, and they currently have no dedicated honeypot products.

Benefits of deception technology:

  • Early attack detection: Honeypots can detect cyberattacks before they reach critical systems and thus serve as an early-warning system.
  • Diverting attackers: They distract potential attackers from the actual targets, thereby reducing the risk of real security breaches.
  • Collecting valuable data: Honeypots gather information about attack methods, tactics, and attacker behavior that can be used to improve security strategies.
  • Improving threat intelligence: The insights gained from honeypots can be used to train security systems and improve detection capabilities.
  • Low false-positive rate: Since honeypots are rarely touched by regular network traffic, the alerts they generate can usually be attributed unambiguously to malicious activity.
  • Cost efficiency: Honeypots are often inexpensive to deploy and maintain, especially compared to other security measures.
  • Flexibility and customizability: They can be adapted to specific network environments and configured for various scenarios, from simple traps to complex simulations.
  • Deterring attackers: The mere presence of honeypots can deter potential attackers, as they run the risk of being discovered.
  • Support for compliance and audits: Honeypots can help meet regulatory requirements by providing evidence of security incidents and how they were handled.
  • Research and education: They are a valuable resource for security researchers and educational institutions to study cyberattacks and train security staff.

Other Honeypot Vendors and Their Technical Focus

  • Cynet 360 AutoXDR
    • Technology: Offers an integrated platform with features such as automated threat detection, response, and monitoring.
    • Differentiation: Cynet focuses heavily on automating and simplifying security management, which makes it attractive for companies with limited security resources.
  • SentinelOne Singularity
    • Technology: Uses artificial intelligence to detect and respond to threats across endpoints, cloud, and IoT.
    • Differentiation: Its strength lies in AI-driven analysis and a proactive approach to threat defense that goes beyond traditional honeypot functionality.
  • Morphisec
    • Technology: Focuses on prevention, particularly against zero-day and unknown threats, through obfuscation and deception.
    • Differentiation: Morphisec’s approach is based on actively changing the attack surface to proactively deceive attackers.
  • LMNTRIX
    • Technology: Specializes in active defense and response with a focus on deception technologies.
    • Differentiation: LMNTRIX uses a combination of deception, behavioral analysis, and threat intelligence to detect and respond to attacks.
  • Zscaler Deception
    • Technology: Cloud-based security solutions with a focus on deception and traffic analysis.
    • Differentiation: Offers a cloud-native architecture well suited for companies looking for a flexible and scalable security solution.
  • CyberTrap
    • Technology: Specializes in advanced deception technologies and attack forensics.
    • Differentiation: Focuses on detailed forensics and tracking of attacker activity to gain insight into attack methods.
  • Forescout Continuum
    • Technology: Offers solutions for device discovery, compliance, and network security.
    • Differentiation: Forescout provides comprehensive visibility into networks and devices, which is particularly advantageous for managing IoT devices.
  • Attivo BOTsink
    • Technology: Provides advanced deception networks and threat response.
    • Differentiation: Attivo focuses on delivering deception technology that integrates seamlessly into existing security infrastructures.
  • InsightIDR (Rapid7)
    • Technology: Combines XDR and SIEM in one solution, with an emphasis on behavioral analysis and detection.
    • Differentiation: Offers a combination of advanced analytics and automated detection that is well suited for mid-sized companies.
  • Symantec Endpoint Security
    • Technology: Comprehensive endpoint security with an emphasis on malware detection and EDR.
    • Differentiation: Symantec is known for its robust and comprehensive endpoint security solution, which is well suited for large enterprises.
  • FireMon
    • Technology: Specializes in network security management and analysis.
    • Differentiation: FireMon offers advanced network monitoring and management capabilities that are well suited for complex networks.
  • Akamai Guardicore Segmentation
    • Technology: Focuses on network segmentation to improve security.
    • Differentiation: Provides advanced segmentation solutions that are particularly suited to cloud and data-center environments.

OT and Deception Systems

The suitability of deception techniques – that is, the use of honeypots and similar deception technologies – in Operational Technology (OT) environments deserves special attention. OT systems are critical for controlling industrial processes and physical devices, covering everything from production-line controls to infrastructure management systems. Here are some important points to consider:

  • Adapting to OT environments: OT systems often use specific protocols and network characteristics that differ from typical IT networks. Deception technologies deployed in OT environments must be able to emulate the particularities of these environments in order to be credible and effectively lure attackers.
  • Safety concerns: In OT environments, security incidents can have severe physical consequences, including damage to equipment and potential danger to personnel. Deception solutions must therefore ensure that they do not introduce additional risks to the OT systems, for example through false-positive detections or disruptions to network traffic.
  • Detecting specific threats: OT systems can be the target of specialized kinds of cyberattacks, including those aimed at industrial control systems and critical infrastructure. Deception technologies in this area must be designed to detect such specific threats and deliver valuable insight into the attack methods.
  • Integration with existing systems: OT environments often contain a mix of legacy systems and newer technologies. Honeypots and deception solutions must interact seamlessly with this heterogeneous landscape without disrupting operations.
  • Compliance and regulation: OT environments are often subject to strict regulatory requirements. Any security measures, including deception technologies, must meet these requirements and must not jeopardize compliance.

In summary, while deception technologies can be effective instruments for improving cybersecurity in OT environments, their implementation must be carefully planned and adapted to the specific needs and challenges of these critical systems.

Google Cloud Landing Zone Series – Part 5: Organizational Policies

As described, a Landing Zone serves as the foundation that enables customers to effectively deploy workloads and operate their cloud environment at scale. But while enabling teams is important, it is also crucial to define standards and set guardrails for what the different teams can and cannot do. This is where organizational policies come into play, and that is reason enough to discuss them in our Google Cloud Landing Zone series.

What are Organizational Policies?

Let’s start with a somewhat formal description:

Basically, Organizational Policies in Google Cloud Platform (GCP) are a set of constraints that apply to resources across your entire organization. These policies help govern resource usage and enforce security and compliance practices across all projects and resources within a GCP organization. Organizational Policies ensure that the actions of individual resources align with the broader business rules and regulations that a company wants to enforce.

How do Organizational Policies work?

Basically, Organizational Policies are easy to understand. Let’s discuss the most important aspects:

  • Constraints: Policies are enforced through constraints, which define the specific rules or limitations for resource management within the organization. For example, a constraint can limit which Google Cloud services can be activated or restrict the locations (regions and zones) where resources can be deployed.
  • Policy types: There are two policy types. Boolean constraints are simple enable/disable toggles for certain features or behaviors, for example disabling serial port access for VM instances. List constraints, on the other hand, manage lists of values that either deny or allow specific behaviors, for example restricting which Google Cloud APIs can be enabled in a project.
  • Hierarchy and Scope: Organizational Policies are implemented within a hierarchical structure in GCP. This hierarchy starts from the organization level, extends to folders, and then to projects. Policies set at a higher level (like the organization) apply to all items within it unless explicitly overridden at a lower level (like a project).
  • Customizability: Each constraint can be customized to meet specific organizational needs. This means policies can be tailored to allow exceptions, enforce stricter controls, or completely block certain actions.
  • Enforcement and Compliance: Organizational policies are automatically enforced by the platform, ensuring compliance and reducing the risk of human error. This automated enforcement helps maintain security standards and compliance with internal policies and regulatory requirements.
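To make the two policy types concrete, here is a minimal sketch of the policy file format used by the `gcloud resource-manager org-policies set-policy` command; the boolean constraint shown is the serial-port example from above, and the organization ID used when applying it would be a placeholder:

```yaml
# policy.yaml – a boolean constraint that disables serial port
# access for VM instances wherever the policy is applied
constraint: constraints/compute.disableSerialPortAccess
booleanPolicy:
  enforced: true
```

Such a file could then be applied at organization level with something like `gcloud resource-manager org-policies set-policy policy.yaml --organization=ORGANIZATION_ID`, after which the constraint is inherited down the resource hierarchy unless overridden.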

The following picture shows how Organizational Polices are embedded within the GCP organization hierarchy:

Flowchart depicting the policy management structure in a Google Cloud Landing Zone. The chart shows an Organization Policy Administrator defining an Org Policy, which is set on a Resource Hierarchy Node. This policy is inherited by default to Descendant Resource Hierarchy Nodes, which enforce constraints outlined in the policy. Constraints are defined and referenced by GCP Services, indicating how policies are evaluated and enforced across the cloud resource hierarchy.

Why do I need Organizational Policies?

I think it is easy to understand why guardrails should be set in a cloud environment, but let’s write down the reasons:

  • Security and Compliance: Organizational policies help ensure that your cloud environment complies with both internal security policies and external regulatory requirements. For example, you can enforce policies that restrict the deployment of resources to specific regions to comply with data residency laws.
  • Risk Management: Policies reduce the risk of data breaches and other security incidents by limiting how resources are configured and who can access them. For example, disabling public IP addresses on virtual machines can prevent accidental exposure of services to the internet.
  • Consistency and Standardization: Applying uniform policies across an entire organization helps maintain consistency in how resources are managed and configured. This standardization is crucial for large organizations where different teams might deploy and manage their resources differently.
  • Operational Visibility: With organizational policies, administrators have a clearer view of the entire organization’s configurations.
  • Minimize Human Error: By enforcing certain configurations and restrictions at the organizational level, you minimize the risk of human error. This can be particularly valuable in preventing misconfigurations that might otherwise lead to security vulnerabilities or operational issues.

What are examples of Organizational Policies?

At the time of writing, there are 121 different Organizational Policies in GCP, and this number is still increasing. The full list of Organizational Policies can be found in the Google Cloud documentation.

While the list is too long to discuss every Organizational Policy in detail, we will nevertheless give some examples:

  1. Resource Location Restriction: This policy restricts the geographical location where resources can be created. Organizations can enforce data residency requirements by ensuring that data and resources are stored in specific regions or countries, complying with local laws and regulations. For example, you could restrict the locations for the European Union.
  2. Restricting VM IP Forwarding: This policy prevents virtual machines from forwarding packets, which can be a critical security measure to avoid misuse of the network.
  3. Disable Serial Port Access: By disabling serial port access for VM instances, organizations can enhance the security of their virtual machines by preventing potential external access through these ports.
  4. Service Usage Restrictions: Organizations can control which Google Cloud services are available for use. For example, you might want to restrict the use of certain services that are not compliant with your security standards or are deemed unnecessary for your business operations.
  5. Restrictions on External IP Addresses: This policy can be used to prevent resources such as virtual machines from being assigned external IP addresses, reducing exposure to external threats and helping to enforce a more secure network perimeter.
  6. Enforce uniform bucket-level access: For Google Cloud Storage, enabling the “Enforce uniform bucket-level access” setting ensures that access controls are uniformly managed through IAM roles, rather than through both IAM and Access Control Lists (ACLs), simplifying management and improving security.
  7. Enforcing Disk Encryption: You can enforce the encryption of compute disks, ensuring that all data is encrypted at rest and reducing the risk of data theft or exposure.
  8. Enforcing Minimum TLS Version: This policy ensures that services communicate using a minimum version of TLS, enhancing the security of data in transit by protecting against vulnerabilities in older versions of the protocol.
  9. Disabling Service Account Key Creation: By preventing the creation of new service account keys, organizations can encourage more secure and manageable authentication methods, such as using the IAM roles or the Workload Identity feature.

These examples represent just a few of the many organizational policies available in GCP that can be applied to secure and manage cloud resources effectively, ensuring they align with organizational objectives and compliance requirements.

Are Organizational Policies related to regulatory frameworks like the Digital Operational Resilience Act (DORA) or the revised Directive on Security of Network and Information Systems (NIS2)?

Yes, organizational policies help you implement those regulations. For example, Chapter II (ICT risk management), Article 5 (Governance and organization) states:

Financial entities shall have in place an internal governance and control framework that ensures an effective and prudent management of ICT risk, in accordance with Article 6(4), in order to achieve a high level of digital operational resilience.

The management body of the financial entity shall define, approve, oversee, and be responsible for the implementation of all arrangements related to the ICT risk management framework referred to in Article 6(1).

Here are some examples, which are also available as Organizational Policies:

– Appropriate Service Accounts Access Key Rotation

– Object Storage – Blocked Public Access (Organization-wise)

– Disabled Endpoint Public Access in Existing Clusters 

We at Soeldner Consult can support you not only in building a secure Landing Zone, but also in setting up Organizational Policies the right way.

Internal Developer Platforms – Part 10: Working with Software Templates and Backstage Search

After having described how to install Backstage, we finally want to show how to work with it. In this blog post, we will focus on software templates and the Backstage Search.

Software Catalog

The Software Catalog is a fundamental component of a Backstage deployment, serving as an essential management tool. After deployment and login, users can easily access the Software Catalog by clicking the “Home” button. This interface presents a list of all registered entities – such as software components, resources, or users – and offers filtering options to navigate through them. Entities, defined in YAML files and registered in Backstage, represent either physical or virtual organizational units, each classified by type and kind. Initially, the catalog displays entities of the “Component” type, but users can adjust the view to include “Location,” “Resource,” “System,” and “Template” types based on available registrations. The availability of entity types in the dropdown menus depends directly on their presence in the catalog, exemplifying the customizable and interconnected nature of the Backstage ecosystem.

As a user, we can register applications into Backstage in different ways:

  1. Direct Creation from Backstage: Users can create and register new entities directly in Backstage using pre-existing templates. This involves selecting a template via the “Create…” button in the sidebar, which leads to the “Create a New Component” page. After completing the creation and registration process, the new component becomes available in the Software Catalog.
Screenshot of the Soeldner Consult Catalog user interface displaying a list of owned components in a management system, similar to the component feature in Spotify Backstage. The interface shows a navigation bar on the left with options like Home, APIs, and Docs. The main part of the interface lists components like fw-alexander and fw-bjorn, all owned by Guido Söldner, marked as 'experimental' under the service type. The top right includes buttons for 'Create' and 'Support'.

2. Configuration File Entry: Entities can be added by specifying a “Location” in the app-config.yaml configuration file, which links to a YAML descriptor file for each entity. These files are read, and their entities are registered in the Software Catalog, upon deployment.

catalog:
  locations:
    - type: url
      target:
      rules:
        - allow: [Template]
    - type: file
      target: template.yaml # Backstage will expect the file to be in packages/backend/template.yaml


3. Custom Entity Providers or Processors: For companies with extensive software portfolios, Backstage’s backend can be extended with custom entity providers or processors. These read from internal or external sources to generate new entities automatically and regularly. For example, you can use a provider for GitHub to automatically load entities like users from an external source into your Backstage instance.
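As a sketch of such a provider, the GitHub entity provider that ships with Backstage is configured in app-config.yaml roughly as follows; the provider id, organization name, and schedule values here are illustrative placeholders:

```yaml
catalog:
  providers:
    github:
      providerId: # an arbitrary id for this provider instance
        organization: 'my-github-org' # placeholder GitHub organization
        schedule: # re-discover entities on a regular basis
          frequency: { minutes: 30 }
          timeout: { minutes: 3 }
```

With a configuration along these lines, the backend periodically scans the organization’s repositories for descriptor files and keeps the corresponding catalog entities up to date without manual registration.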

4. Manual Registration via UI: If a component is already stored online in a repository with the required YAML descriptor file, it can be manually registered. Users navigate to the “Create a New Component” page, click “Register existing component,” and input the URL of the YAML file. After verification, the “Import” button adds the component to the Software Catalog.

Screenshot of the 'Register an existing component' page on the Soeldner Consult interface, mirroring functionality found in Spotify Backstage for tracking software components via SC GitOps. The interface features a step-by-step wizard on the right to start tracking by entering a URL to a source code repository and linking to an entity file, with an example URL provided. The left side displays a navigation panel with options like Home, APIs, Docs, and Tech Radar. The top right corner includes a 'Support' button.

How to define entities

When manually registering an entity, a YAML file must be available for the corresponding entry. Each entry in the Software Catalog corresponds to a YAML file, ideally housed alongside the code of the related software in a repository. This YAML file contains metadata and additional information required to display the component in the Software Catalog. While users can choose any name for the YAML file, known as the „descriptor file,“ Backstage recommends the name „catalog-info.yaml.“ Consequently, when creating a repository, especially in „mono-repos“ where multiple components are hosted in a single repository, it’s crucial to structure the repository using folders to avoid name conflicts with existing descriptor files.

In the following, we show how you could define your repository structure to register an application consisting of a backend with an API and a database, and a frontend.
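One possible layout, with purely illustrative folder names, keeps one descriptor file per component in its own folder so that the catalog-info.yaml files do not collide:

```
exampleApp/
  backend/
    api/
      catalog-info.yaml    # API entity
    db/
      catalog-info.yaml    # Resource entity
    src/
  frontend/
    catalog-info.yaml      # Component entity
    src/
```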

The structure of the YAML files used by Backstage must adhere to specific rules for the entries to be properly recognized during the import process. An example of such a descriptor file, specifically for the exampleApp frontend component, illustrates the essential structure, which consists of the mandatory elements `apiVersion`, `kind`, `metadata`, and `spec`. The `apiVersion` describes the version of the specification format the entity file is written in; in the current project it is consistently set to `backstage.io/v1alpha1`. The `kind` field defines the type of entity it should be listed as in the Software Catalog. Types recognized by Backstage and its plugins include Component, API, Group, User, Resource, and Location, among others. Backstage provides detailed definitions for all entity types in its documentation, assisting the platform team in orientation and categorization of the units represented.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: exampleappfrontend
  description: Simple website
  links:
    - url: https://exampleapp.example.com # hypothetical; the original link target was not preserved
      title: ExampleApp
      icon: web
  annotations:
    backstage.io/source-location: url:https://github.com/sclabs/exampleApp/frontend/ # reconstructed from the original snippet
    backstage.io/techdocs-ref: dir:.
spec:
  type: website
  lifecycle: production
  owner: Joseph Cooper
  system: exampleapp
  consumesApis: ['component:exampleappservice']
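Before registering a descriptor, it can be handy to sanity-check that the four mandatory top-level fields are present. A minimal sketch (the function name is ours, not part of Backstage):

```python
def validate_descriptor(entity: dict) -> list:
    """Return the mandatory top-level fields missing from an entity descriptor."""
    required = ["apiVersion", "kind", "metadata", "spec"]
    return [field for field in required if field not in entity]

# A complete descriptor yields no missing fields
complete = {
    "apiVersion": "backstage.io/v1alpha1",
    "kind": "Component",
    "metadata": {"name": "exampleappfrontend"},
    "spec": {"type": "website"},
}
print(validate_descriptor(complete))               # []
print(validate_descriptor({"kind": "Component"}))  # ['apiVersion', 'metadata', 'spec']
```

In practice you would load the YAML file first and pass the resulting dictionary to the check.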

That’s it for today. In the next blog post, we will describe the different entity types in detail.

Google Cloud Landing Zone Series – Part 4: Naming Conventions

Naming Conventions

In the last blog post, we talked a lot about resource hierarchies. Resource hierarchies help to group projects into a folder structure and help with issues like governance, automation, access control, billing and cost management and other things.

Advantages of naming conventions

Another important building block of a scalable landing zone is naming conventions. Naming conventions bring several advantages to your environment; let's briefly name some of them:

Clarity and Readability: Good naming conventions help in clearly identifying resources, their purpose, and their relationships. This enhances readability and understanding for anyone who interacts with the cloud environment, from developers to system administrators.

Consistency: Consistent naming makes it easier to manage resources across your different teams and projects. It reduces confusion and helps in setting standard practices for operations within the cloud environment.

Automation and Tooling Compatibility: Automated tools and scripts often rely on naming patterns to select and manage resources. Consistent naming conventions ensure that these tools can function correctly and efficiently, whether they are used for monitoring, provisioning, or management.

Security: Proper naming can aid in implementing security policies. For instance, names can indicate the sensitivity level of data stored in a resource, or whether a resource is in a production or development environment, helping in applying appropriate security controls.

Cost Management: Naming conventions can also aid in tracking and managing costs. By identifying resources clearly, organizations can monitor usage and costs more effectively, making it easier to optimize resource allocation and reduce wastage.

Examples of naming conventions

Naming conventions clearly offer many advantages, so let's continue with some examples.

For projects you might want to embed some information within the project name. Common components might be:

  • The stage of the project, e.g. Test, QA or Prod
  • If you have a CMDB in place, you might reuse existing service or project numbers and embed them in the project name.
  • The purpose of the project, for example a network project or a project for storing audit information

In Google Cloud, it is also important to remember that project IDs cannot be changed, while project names can. So if a project number changes over time, updating the project name is possible, but updating the project ID is not. That's why it might be better to use a surrogate for the project ID and a descriptive name for the project name.

Another thing to consider is that resource IDs cannot be re-used, at least not immediately. For example, if you delete a project, it first sits in the trash for about 30 days before it is eventually deleted. During this time, its ID cannot be reused. Luckily, you can easily work around this by appending a random suffix to your resource names.
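Such a suffix could be generated as follows; the ID components used here are illustrative, matching the convention from the examples below:

```python
import random
import string

def make_project_id(stage: str, service_number: str, purpose: str) -> str:
    # A random 5-character suffix keeps the ID unique even if a deleted
    # project with the same base name is still sitting in the trash
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
    return f"{stage}-{service_number}-{purpose}-{suffix}"

print(make_project_id("p", "1234557", "landingzone"))  # e.g. p-1234557-landingzone-x7k2q
```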

Another important thing is to clearly adhere to the naming standard, even in edge cases. For example, if you separate the components of your project name with “-“, you can run into problems if a descriptive name also uses “-“. Here is a small example:

Good: p-1234557-landingzone-ab123

Bad: p-1234566-landing-zone-ab123

The latter example might break your automation processes later, because you cannot clearly figure out the purpose of the project anymore.
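To illustrate why, here is a hypothetical parser that splits a project ID into its components. It works for the good example, but the bad example has five components instead of four and breaks the parser:

```python
def parse_project_id(project_id: str) -> dict:
    # Assumes exactly four "-"-separated components: stage, number, purpose, suffix
    stage, number, purpose, suffix = project_id.split("-")
    return {"stage": stage, "number": number, "purpose": purpose, "suffix": suffix}

print(parse_project_id("p-1234557-landingzone-ab123"))
# {'stage': 'p', 'number': '1234557', 'purpose': 'landingzone', 'suffix': 'ab123'}

# The bad example yields five parts, so the unpacking fails:
try:
    parse_project_id("p-1234566-landing-zone-ab123")
except ValueError as err:
    print("cannot parse:", err)
```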

Also important: while naming conventions matter, you do not need them for everything. Cloud providers (and Google is no exception here) offer hundreds of services and components, and if you tried to define a naming convention for each of them, you would be busy with nothing but setting up and enforcing naming conventions.


Besides naming conventions, cloud providers also allow the use of labels to store metadata information. Here are some examples of labels you might encounter or use in a cloud setting:

  • Project or Application
  • Owner or Team
  • Cost Center or Budget


Automation Support

To demonstrate how you can use automation for working with projects, let's take a look at a Python snippet. In Google Cloud, you use the Resource Manager API to work with projects. Luckily, as for all Google APIs, there is a comprehensive client library you can import:

from google.cloud import resource_manager

def list_projects():
    # The client picks up application default credentials
    client = resource_manager.Client()
    # Iterate over all projects the caller has access to
    projects = client.list_projects()
    project_ids = [project.project_id for project in projects]
    # For debugging or direct response, convert the list to a string
    return str(project_ids)

As you can see, automation in Google Cloud is really easy. For many use cases, the best way to deploy such scripts is a Cloud Function, triggered manually, on a schedule, or based on some event.