This page shows how to deploy a cross-region internal Application Load Balancer to load balance traffic to network endpoints that are on-premises or in other public clouds and that are reachable by using hybrid connectivity.
If you haven't already done so, review the Hybrid connectivity NEGs overview to understand the network requirements to set up hybrid load balancing.
Setup overview
The example sets up a cross-region internal Application Load Balancer for mixed zonal and hybrid connectivity NEG backends, as shown in the following figure:
You must configure hybrid connectivity before setting up a hybrid load balancing deployment. Depending on your choice of hybrid connectivity product, use either Cloud VPN or Cloud Interconnect (Dedicated or Partner).
Set up an SSL certificate resource
Create a Certificate Manager SSL certificate resource as described in the following:
- Deploy a global self-managed certificate.
- Create a Google-managed certificate issued by your Certificate Authority Service instance.
- Create a Google-managed certificate with DNS authorization.
We recommend using a Google-managed certificate.
Permissions
To set up hybrid load balancing, you must have the following permissions:
On Google Cloud
- Permissions to establish hybrid connectivity between Google Cloud and your on-premises environment or other cloud environments. For the list of permissions needed, see the relevant Network Connectivity product documentation.
- Permissions to create a hybrid connectivity NEG and the load balancer.
The Compute Load Balancer Admin role (`roles/compute.loadBalancerAdmin`) contains the permissions required to perform the tasks described in this guide.
On your on-premises environment or other cloud environment
- Permissions to configure network endpoints that allow services on your on-premises environment or other cloud environments to be reachable from Google Cloud by using an `IP:Port` combination. For more information, contact your environment's network administrator.
- Permissions to create firewall rules on your on-premises environment or other cloud environments to allow Google's health check probes to reach the endpoints.
Additionally, to complete the instructions on this page, you need to create a hybrid connectivity NEG, a load balancer, and zonal NEGs (and their endpoints) to serve as Google Cloud-based backends for the load balancer.
You should be either a project Owner or Editor, or you should have the following Compute Engine IAM roles.
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin (`roles/compute.networkAdmin`) |
| Add and remove firewall rules | Compute Security Admin (`roles/compute.securityAdmin`) |
| Create instances | Compute Instance Admin (`roles/compute.instanceAdmin`) |
Establish hybrid connectivity
Your Google Cloud and on-premises environment or other cloud environments must be connected through hybrid connectivity by using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router. We recommend that you use a high availability connection.
A Cloud Router enabled with global dynamic routing learns about the specific endpoint through Border Gateway Protocol (BGP) and programs it into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.
The VPC network that you use to configure either Cloud Interconnect or Cloud VPN is the same network that you use to configure the hybrid load balancing deployment. Ensure that your VPC network's subnet CIDR ranges do not conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
For instructions, see the following documentation:
- The Cloud Interconnect documentation (Dedicated or Partner Interconnect)
- The Cloud VPN documentation
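As noted earlier, the Cloud Router must use global dynamic routing. If your VPC network is still set to regional dynamic routing, you can switch it with the following command (a minimal sketch; `NETWORK` is the VPC network placeholder used throughout this page):

```
gcloud compute networks update NETWORK \
    --bgp-routing-mode=global
```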
Set up your environment that is outside Google Cloud
Perform the following steps to set up your on-premises environment or other cloud environment for hybrid load balancing:
- Configure network endpoints to expose on-premises services to Google Cloud (`IP:Port`).
- Configure firewall rules on your on-premises environment or other cloud environment.
- Configure Cloud Router to advertise certain required routes to your private environment.
Set up network endpoints
After you set up hybrid connectivity, you configure one or more network endpoints within your on-premises environment or other cloud environments that are reachable through Cloud Interconnect or Cloud VPN by using an `IP:port` combination. This `IP:port` combination is configured as one or more endpoints for the hybrid connectivity NEG that is created in Google Cloud later in this process.
If there are multiple paths to the IP endpoint, routing follows the behavior described in the Cloud Router overview.
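As an optional check, after hybrid connectivity is in place you can confirm from a VM in the connected VPC network that an endpoint responds at its `IP:port` combination. This sketch assumes `ON_PREM_IP_ADDRESS_1` and `PORT_1`, the endpoint values used later on this page:

```
# Run from a VM in the VPC network that has hybrid connectivity.
curl http://ON_PREM_IP_ADDRESS_1:PORT_1
```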
Set up firewall rules
The following firewall rules must be created on your on-premises environment or other cloud environment:
- Create an ingress allow firewall rule in your on-premises environment or other cloud environments to allow traffic from each region's proxy-only subnet to reach the endpoints.
Allowlisting Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allowlist the Google health check probe ranges for the zonal NEGs.
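How you create the ingress rule depends entirely on your environment and firewall vendor. As an illustrative sketch only, on a Linux endpoint you might allow the proxy-only subnet ranges with iptables (`PROXY_ONLY_SUBNET_RANGE1`, `PROXY_ONLY_SUBNET_RANGE2`, and `PORT_1` are the placeholder values used elsewhere on this page):

```
# Allow traffic from the load balancer's proxy-only subnets to the service port.
iptables -A INPUT -p tcp -s PROXY_ONLY_SUBNET_RANGE1 --dport PORT_1 -j ACCEPT
iptables -A INPUT -p tcp -s PROXY_ONLY_SUBNET_RANGE2 --dport PORT_1 -j ACCEPT
```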
Advertise routes
Configure Cloud Router to advertise the following custom IP ranges to your on-premises environment or other cloud environment:
- The ranges of the proxy-only subnets in each region.
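For example, you can advertise these ranges as custom routes on an existing BGP session. This is a sketch; `ROUTER_NAME` and `PEER_NAME` are placeholders for your Cloud Router and BGP peer, and you repeat the command for the router in each region:

```
gcloud compute routers update-bgp-peer ROUTER_NAME \
    --peer-name=PEER_NAME \
    --region=REGION_A \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=PROXY_ONLY_SUBNET_RANGE1,PROXY_ONLY_SUBNET_RANGE2
```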
Set up the Google Cloud environment
For the following steps, make sure you use the same VPC network (called `NETWORK` in this procedure) that was used to configure hybrid connectivity between the environments. Additionally, make sure the regions used (called `REGION_A` and `REGION_B` in this procedure) are the same as those used to create the Cloud VPN tunnel or Cloud Interconnect VLAN attachments.

Later in this procedure, you configure a DNS routing policy of type `GEO` to route client traffic to the load balancer VIP in the region closest to the client during regional outages.
Configure the backend subnets
Create the subnets that are used for the load balancer's zonal NEG backends:
Console
In the Google Cloud console, go to the VPC networks page.
Go to the network that was used to configure hybrid connectivity between the environments.
In the Subnets section:
- Set the Subnet creation mode to Custom.
- In the New subnet section, enter the following information:
- Provide a Name for the subnet.
- Select a Region: REGION_A
- Enter an IP address range.
- Click Done.
Click Create.
To add more subnets in different regions, click Add subnet and repeat the previous steps for REGION_B.
gcloud
Create subnets in the network that was used to configure hybrid connectivity between the environments.
```
gcloud compute networks subnets create SUBNET_A \
    --network=NETWORK \
    --range=LB_SUBNET_RANGE1 \
    --region=REGION_A
```

```
gcloud compute networks subnets create SUBNET_B \
    --network=NETWORK \
    --range=LB_SUBNET_RANGE2 \
    --region=REGION_B
```
API
Make a `POST` request to the `subnetworks.insert` method. Replace `PROJECT_ID` with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
  "name": "SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "LB_SUBNET_RANGE1",
  "region": "projects/PROJECT_ID/regions/REGION_A"
}
```
Make a `POST` request to the `subnetworks.insert` method. Replace `PROJECT_ID` with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
  "name": "SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "LB_SUBNET_RANGE2",
  "region": "projects/PROJECT_ID/regions/REGION_B"
}
```
Replace the following:
- `SUBNET_A` and `SUBNET_B`: the names of the subnets
- `LB_SUBNET_RANGE1` and `LB_SUBNET_RANGE2`: the IP address ranges for the subnets
- `REGION_A` and `REGION_B`: the regions where you have configured the load balancer
Configure the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based regional load balancers in the same region of the VPC network. There can only be one active proxy-only subnet for a given purpose, per region, per network.
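To check whether a proxy-only subnet with this purpose already exists in your network, you can list subnets filtered by purpose (an optional check):

```
gcloud compute networks subnets list \
    --filter="purpose=GLOBAL_MANAGED_PROXY"
```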
Console
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.
- Click the name of the VPC network.
- On the Subnets tab, click Add subnet.
- Provide a Name for the proxy-only subnet.
- In the Region list, select REGION_A.
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter `10.129.0.0/23`.
- Click Add.
Create the proxy-only subnet in REGION_B
- Click Add subnet.
- Provide a Name for the proxy-only subnet.
- In the Region list, select REGION_B.
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter `10.130.0.0/23`.
- Click Add.
gcloud
Create the proxy-only subnets with the `gcloud compute networks subnets create` command.

```
gcloud compute networks subnets create PROXY_SN_A \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=NETWORK \
    --range=PROXY_ONLY_SUBNET_RANGE1
```

```
gcloud compute networks subnets create PROXY_SN_B \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_B \
    --network=NETWORK \
    --range=PROXY_ONLY_SUBNET_RANGE2
```
Replace the following:
- `PROXY_SN_A` and `PROXY_SN_B`: the names of the proxy-only subnets
- `PROXY_ONLY_SUBNET_RANGE1` and `PROXY_ONLY_SUBNET_RANGE2`: the IP address ranges for the proxy-only subnets
- `REGION_A` and `REGION_B`: the regions where you have configured the load balancer
API
Create the proxy-only subnets with the `subnetworks.insert` method, replacing `PROJECT_ID` with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
  "name": "PROXY_SN_A",
  "ipCidrRange": "PROXY_ONLY_SUBNET_RANGE1",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_A",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}
```

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
  "name": "PROXY_SN_B",
  "ipCidrRange": "PROXY_ONLY_SUBNET_RANGE2",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_B",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}
```
Create firewall rules
In this example, you create the following firewall rules for the zonal NEG backends on Google Cloud:
- `fw-allow-health-check`: An ingress rule, applicable to the instances being load balanced, that allows traffic from Google Cloud health checking systems (`130.211.0.0/22` and `35.191.0.0/16`). This example uses the target tag `allow-health-check` to identify the zonal NEGs to which it should apply.
- `fw-allow-ssh`: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you will initiate SSH sessions. This example uses the target tag `allow-ssh` to identify the virtual machine (VM) instances to which it should apply.
- `fw-allow-proxy-only-subnet`: An ingress rule that allows connections from the proxy-only subnet to reach the zonal NEG backends.
Console
In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule to create the rule to allow traffic from health check probes:
- Enter a Name of `fw-allow-health-check`.
- For Network, select NETWORK.
- For Targets, select Specified target tags.
- Populate the Target tags field with `allow-health-check`.
- Set Source filter to IPv4 ranges.
- Set Source IPv4 ranges to `130.211.0.0/22` and `35.191.0.0/16`.
- For Protocols and ports, select Specified protocols and ports.
- Select TCP and then enter `80` for the port number.
- Click Create.
Click Create firewall rule again to create the rule to allow incoming SSH connections:
- Name: `fw-allow-ssh`
- Network: NETWORK
- Priority: `1000`
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: `allow-ssh`
- Source filter: IPv4 ranges
- Source IPv4 ranges: `0.0.0.0/0`
- Protocols and ports: Choose Specified protocols and ports.
- Select TCP and then enter `22` for the port number.
- Click Create.
Click Create firewall rule again to create the rule to allow incoming connections from the proxy-only subnet:
- Name: `fw-allow-proxy-only-subnet`
- Network: NETWORK
- Priority: `1000`
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: `allow-proxy-only-subnet`
- Source filter: IPv4 ranges
- Source IPv4 ranges: PROXY_ONLY_SUBNET_RANGE1 and PROXY_ONLY_SUBNET_RANGE2
- Protocols and ports: Choose Specified protocols and ports.
- Select TCP and then enter `80` for the port number.
- Click Create.
gcloud
Create the `fw-allow-health-check` rule to allow the Google Cloud health checks to reach the backend instances on TCP port `80`:

```
gcloud compute firewall-rules create fw-allow-health-check \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80
```
Create the `fw-allow-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `source-ranges`, Google Cloud interprets the rule to mean any source.

```
gcloud compute firewall-rules create fw-allow-ssh \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
```
Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port `80`:

```
gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-proxy-only-subnet \
    --source-ranges=PROXY_ONLY_SUBNET_RANGE1,PROXY_ONLY_SUBNET_RANGE2 \
    --rules=tcp:80
```
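As an optional check, you can list the firewall rules in the network to confirm that all three rules exist:

```
gcloud compute firewall-rules list \
    --filter="network:NETWORK"
```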
Set up the zonal NEG
For Google Cloud-based backends, we recommend that you configure multiple zonal NEGs in the same region where you configured hybrid connectivity.

For this example, set up a zonal NEG (with `GCE_VM_IP_PORT` type endpoints) in the `REGION_A` region. First create the VMs in the `ZONE_A` zone. Then create a zonal NEG in the `ZONE_A` zone, and add the VMs' network endpoints to the NEG.

To support high availability, set up a similar zonal NEG in the `REGION_B` region. If backends in one region are down, traffic fails over to the other region.
Create VMs
Console
In the Google Cloud console, go to the VM instances page.
Repeat steps 3 to 8 for each VM, using the following name and zone combinations:
- Name: `vm-a1`
  - Zone: ZONE_A in the region REGION_A
  - Subnet: SUBNET_A
- Name: `vm-b1`
  - Zone: ZONE_B in the region REGION_B
  - Subnet: SUBNET_B

Click Create instance.

Set the name as indicated in the preceding step.

For the Region, choose as indicated in the earlier step.

For the Zone, choose as indicated in the earlier step.
In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.
In the Advanced options section, expand Networking, and then do the following:
- Add the following Network tags: `allow-ssh`, `allow-health-check`, and `allow-proxy-only-subnet`.
- In the Network interfaces section, click Add a network interface, make the following changes, and then click Done:
  - Network: NETWORK
  - Subnetwork: as indicated in the earlier step.
  - Primary internal IP: Ephemeral (automatic)
  - External IP: Ephemeral

Expand Management. In the Automation field, copy and paste the following script contents. The script contents are identical for all VMs:

```
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
```
Click Create.
gcloud
Create the VMs by running the following command twice, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.

```
gcloud compute instances create VM_NAME \
    --zone=GCP_NEG_ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --subnet=LB_SUBNET_NAME \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
```

- `VM_NAME` of `vm-a1`
  - The zone `GCP_NEG_ZONE` as `ZONE_A` in the region `REGION_A`
  - The subnet `LB_SUBNET_NAME` as `SUBNET_A`
- `VM_NAME` of `vm-b1`
  - The zone `GCP_NEG_ZONE` as `ZONE_B` in the region `REGION_B`
  - The subnet `LB_SUBNET_NAME` as `SUBNET_B`
Create the zonal NEG
Console
To create a zonal network endpoint group:
In the Google Cloud console, go to the Network Endpoint Groups page.
Repeat steps 3 to 8 for each zonal NEG, using the following name and zone combinations:
- Name: `neg1`
  - Zone: ZONE_A in the region REGION_A
  - Subnet: SUBNET_A
- Name: `neg2`
  - Zone: ZONE_B in the region REGION_B
  - Subnet: SUBNET_B
Click Create network endpoint group.
Set the name as indicated in the preceding step.
Select the Network endpoint group type: Network endpoint group (Zonal).
Select the Network: NETWORK
Select the Subnetwork as indicated in the earlier step.

Select the Zone as indicated in the earlier step.

Enter the Default port: `80`.

Click Create.
Add endpoints to the zonal NEG:
In the Google Cloud console, go to the Network Endpoint Groups page.
Click the Name of the network endpoint group created in the previous step. You see the Network endpoint group details page.
In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
Select a VM instance to add its internal IP addresses as network endpoints. In the Network interface section, the name, zone, and subnet of the VM are displayed.
Enter the IP address of the new network endpoint.
Select the Port type.
- If you select Default, all endpoints in the network endpoint group use the default port `80`. This is sufficient for our example because the Apache server is serving requests at port `80`.
- If you select Custom, enter the Port number for the endpoint to use.
To add more endpoints, click Add network endpoint and repeat the previous steps.
After you add all the endpoints, click Create.
gcloud
Create zonal NEGs (with `GCE_VM_IP_PORT` endpoints) using the following name, zone, and subnet combinations. Use the `gcloud compute network-endpoint-groups create` command.

```
gcloud compute network-endpoint-groups create GCP_NEG_NAME \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=GCP_NEG_ZONE \
    --network=NETWORK \
    --subnet=LB_SUBNET_NAME
```

- Name: `neg1`
  - Zone `GCP_NEG_ZONE`: `ZONE_A` in the region `REGION_A`
  - Subnet `LB_SUBNET_NAME`: `SUBNET_A`
- Name: `neg2`
  - Zone `GCP_NEG_ZONE`: `ZONE_B` in the region `REGION_B`
  - Subnet `LB_SUBNET_NAME`: `SUBNET_B`

You can either specify a port using the `--default-port` option while creating the NEG, or specify a port number for each endpoint as shown in the next step.
Add endpoints to `neg1` and `neg2`.

```
gcloud compute network-endpoint-groups update neg1 \
    --zone=ZONE_A \
    --add-endpoint='instance=vm-a1,port=80'
```

```
gcloud compute network-endpoint-groups update neg2 \
    --zone=ZONE_B \
    --add-endpoint='instance=vm-b1,port=80'
```
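As an optional check, list the endpoints in each NEG to confirm that they were attached:

```
gcloud compute network-endpoint-groups list-network-endpoints neg1 \
    --zone=ZONE_A
```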
Set up the hybrid connectivity NEG
When creating the NEG, use a zone that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. For example, if you are hosting a service in an on-premises environment in Frankfurt, Germany, you can specify the `europe-west3-a` Google Cloud zone when you create the NEG.

If you're using Cloud Interconnect, the zone used to create the NEG should be in the same region where the Cloud Interconnect attachment was configured.
Hybrid NEGs support only distributed Envoy health checks.
Console
To create a hybrid connectivity network endpoint group:
In the Google Cloud console, go to the Network Endpoint Groups page.
Click Create network endpoint group.
Repeat steps 4 to 9 for each hybrid NEG, using the following name and zone combinations:
- Name ON_PREM_NEG_NAME: `hybrid-1`
  - Zone: ON_PREM_NEG_ZONE1
  - Subnet: SUBNET_A
- Name ON_PREM_NEG_NAME: `hybrid-2`
  - Zone: ON_PREM_NEG_ZONE2
  - Subnet: SUBNET_B
Set the name as indicated in the previous step.
Select the Network endpoint group type: Hybrid connectivity network endpoint group (Zonal).
Select the Network: NETWORK
For the Subnet, choose as indicated in the previous step.
For the Zone, choose as indicated in the previous step.
Enter the Default port.
Click Create.
Add endpoints to the hybrid connectivity NEG:
In the Google Cloud console, go to the Network Endpoint Groups page.
Click the Name of the network endpoint group created in the previous step. You see the Network endpoint group detail page.
In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
Enter the IP address of the new network endpoint.
Select the Port type.
- If you select Default, the endpoint uses the default port for all endpoints in the network endpoint group.
- If you select Custom, you can enter a different Port number for the endpoint to use.
To add more endpoints, click Add network endpoint and repeat the previous steps.
After you add all the non-Google Cloud endpoints, click Create.
gcloud
Create a hybrid connectivity NEG that uses the following name combinations. Use the `gcloud compute network-endpoint-groups create` command.

```
gcloud compute network-endpoint-groups create ON_PREM_NEG_NAME \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
    --zone=ON_PREM_NEG_ZONE \
    --network=NETWORK
```

- Name `ON_PREM_NEG_NAME`: `hybrid-1`
  - Zone `ON_PREM_NEG_ZONE`: `ON_PREM_NEG_ZONE1`
- Name `ON_PREM_NEG_NAME`: `hybrid-2`
  - Zone `ON_PREM_NEG_ZONE`: `ON_PREM_NEG_ZONE2`
Add the on-premises backend VM endpoint to `ON_PREM_NEG_NAME`:

```
gcloud compute network-endpoint-groups update ON_PREM_NEG_NAME \
    --zone=ON_PREM_NEG_ZONE \
    --add-endpoint="ip=ON_PREM_IP_ADDRESS_1,port=PORT_1" \
    --add-endpoint="ip=ON_PREM_IP_ADDRESS_2,port=PORT_2"
```

You can use this command to add the network endpoints you previously configured on-premises or in your cloud environment. Repeat `--add-endpoint` as many times as needed.
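As an optional check, list the endpoints to confirm that they were attached:

```
gcloud compute network-endpoint-groups list-network-endpoints ON_PREM_NEG_NAME \
    --zone=ON_PREM_NEG_ZONE
```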
Configure the load balancer
gcloud
Define the HTTP health check with the `gcloud compute health-checks create http` command.

```
gcloud compute health-checks create http gil7-basic-check \
    --use-serving-port \
    --global
```
Create the backend service and enable logging with the `gcloud compute backend-services create` command.

```
gcloud compute backend-services create BACKEND_SERVICE \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --enable-logging \
    --logging-sample-rate=1.0 \
    --health-checks=gil7-basic-check \
    --global-health-checks \
    --global
```
Add backends to the backend service with the `gcloud compute backend-services add-backend` command. Each `add-backend` invocation attaches one NEG, so run the command once per zonal NEG:

```
gcloud compute backend-services add-backend BACKEND_SERVICE \
    --global \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=MAX_REQUEST_RATE_PER_ENDPOINT \
    --network-endpoint-group=neg1 \
    --network-endpoint-group-zone=ZONE_A
```

```
gcloud compute backend-services add-backend BACKEND_SERVICE \
    --global \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=MAX_REQUEST_RATE_PER_ENDPOINT \
    --network-endpoint-group=neg2 \
    --network-endpoint-group-zone=ZONE_B
```
For details about configuring the balancing mode, see the gcloud CLI documentation for the `--max-rate-per-endpoint` flag.

Add the hybrid NEGs as backends to the backend service. Again, run one `add-backend` command per NEG:
```
gcloud compute backend-services add-backend BACKEND_SERVICE \
    --global \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=MAX_REQUEST_RATE_PER_ENDPOINT \
    --network-endpoint-group=hybrid-1 \
    --network-endpoint-group-zone=ON_PREM_NEG_ZONE1
```

```
gcloud compute backend-services add-backend BACKEND_SERVICE \
    --global \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=MAX_REQUEST_RATE_PER_ENDPOINT \
    --network-endpoint-group=hybrid-2 \
    --network-endpoint-group-zone=ON_PREM_NEG_ZONE2
```
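After the backends are attached, you can optionally inspect backend health. Because hybrid NEGs use distributed Envoy health checks, this command is most informative for the zonal NEG backends:

```
gcloud compute backend-services get-health BACKEND_SERVICE \
    --global
```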
For details about configuring the balancing mode, see the gcloud CLI documentation for the `--max-rate-per-endpoint` parameter.

Create the URL map with the `gcloud compute url-maps create` command.

```
gcloud compute url-maps create gil7-map \
    --default-service=BACKEND_SERVICE \
    --global
```
Create the target proxy.
For HTTP:
Create the target proxy with the `gcloud compute target-http-proxies create` command.

```
gcloud compute target-http-proxies create gil7-http-proxy \
    --url-map=gil7-map \
    --global
```
For HTTPS:
To create a Google-managed certificate, see the following documentation:
- Create a Google-managed certificate issued by your Certificate Authority Service instance.
- Create a Google-managed certificate with DNS authorization.
After you create the Google-managed certificate, attach the certificate to the target proxy. Certificate maps are not supported by cross-region internal Application Load Balancers.
To create a self-managed certificate, see the following documentation:
Assign your file paths to variable names.

```
export LB_CERT=PATH_TO_PEM_FORMATTED_FILE
export LB_PRIVATE_KEY=PATH_TO_PEM_LB_PRIVATE_FILE
```
Create an all-regions SSL certificate using the `gcloud certificate-manager certificates create` command.

```
gcloud certificate-manager certificates create gilb-certificate \
    --private-key-file=$LB_PRIVATE_KEY \
    --certificate-file=$LB_CERT \
    --scope=all-regions
```
Use the SSL certificate to create a target proxy with the `gcloud compute target-https-proxies create` command.

```
gcloud compute target-https-proxies create gil7-https-proxy \
    --url-map=gil7-map \
    --certificate-manager-certificates=gilb-certificate \
    --global
```
Create two forwarding rules: one with a VIP `IP_ADDRESS1` in the `REGION_A` region and another with a VIP `IP_ADDRESS2` in the `REGION_B` region. For the forwarding rule's IP address, use the `LB_SUBNET_RANGE1` or `LB_SUBNET_RANGE2` IP address range. If you try to use the proxy-only subnet, forwarding rule creation fails.

For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.
For HTTP:
Use the `gcloud compute forwarding-rules create` command with the correct flags.

```
gcloud compute forwarding-rules create FWRULE_A \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --subnet-region=REGION_A \
    --address=IP_ADDRESS1 \
    --ports=80 \
    --target-http-proxy=gil7-http-proxy \
    --global
```

```
gcloud compute forwarding-rules create FWRULE_B \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_B \
    --subnet-region=REGION_B \
    --address=IP_ADDRESS2 \
    --ports=80 \
    --target-http-proxy=gil7-http-proxy \
    --global
```
For HTTPS:
Create the forwarding rules with the `gcloud compute forwarding-rules create` command with the correct flags.

```
gcloud compute forwarding-rules create FWRULE_A \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --subnet-region=REGION_A \
    --address=IP_ADDRESS1 \
    --ports=443 \
    --target-https-proxy=gil7-https-proxy \
    --global
```

```
gcloud compute forwarding-rules create FWRULE_B \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_B \
    --subnet-region=REGION_B \
    --address=IP_ADDRESS2 \
    --ports=443 \
    --target-https-proxy=gil7-https-proxy \
    --global
```
Connect your domain to your load balancer
After you create the load balancer, note the IP addresses that are associated with the load balancer: for example, `IP_ADDRESS1` and `IP_ADDRESS2`.

To point your domain to your load balancer, create an `A` record by using Cloud DNS or your domain registration service. If you added multiple domains to your SSL certificate, you must add an `A` record for each one, all pointing to the load balancer's IP address.
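For example, with a Cloud DNS managed zone, you can create a basic `A` record that resolves to both VIPs as follows. This is a sketch; `ZONE_NAME` and `service.example.com.` are placeholders, and the geo-based routing policy described later on this page is the recommended multi-region approach:

```
gcloud dns record-sets create service.example.com. \
    --zone=ZONE_NAME \
    --type=A \
    --ttl=300 \
    --rrdatas=IP_ADDRESS1,IP_ADDRESS2
```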
Test the load balancer
Create a VM instance to test connectivity
Create a client VM in each region:

```
gcloud compute instances create l7-ilb-client-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --zone=ZONE_A \
    --tags=allow-ssh
```

```
gcloud compute instances create l7-ilb-client-b \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --network=NETWORK \
    --subnet=SUBNET_B \
    --zone=ZONE_B \
    --tags=allow-ssh
```
Use SSH to connect to each client instance.

```
gcloud compute ssh l7-ilb-client-a \
    --zone=ZONE_A
```

```
gcloud compute ssh l7-ilb-client-b \
    --zone=ZONE_B
```
Verify that the IP addresses are serving their hostnames, and that the client VM can reach both IP addresses. The commands should succeed and return the name of the backend VM that served the request:

```
curl IP_ADDRESS1
```

```
curl IP_ADDRESS2
```
For HTTPS testing, replace `curl` with:

```
curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:IP_ADDRESS1:443
```

```
curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:IP_ADDRESS2:443
```

The `-k` flag causes curl to skip certificate validation.

Optional: Use the configured DNS record to resolve the IP address closest to the client VM. For example, DNS_ENTRY can be `service.example.com`.

```
curl DNS_ENTRY
```
Run 100 requests
Run 100 curl requests and confirm from the responses that they are load balanced.
For HTTP:
Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM which served the request:
```
{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl --silent IP_ADDRESS1)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS1: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
```

```
{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl --silent IP_ADDRESS2)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS2: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
```
For HTTPS:
Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM which served the request:
```
{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:IP_ADDRESS1:443)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS1: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
```

```
{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:IP_ADDRESS2:443)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS2: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
```
Test failover
Verify failover to backends in the `REGION_A` region when backends in the `REGION_B` region are unhealthy or unreachable. To simulate this, remove all the backends from `REGION_B`:

```
gcloud compute backend-services remove-backend BACKEND_SERVICE \
    --global \
    --network-endpoint-group=neg2 \
    --network-endpoint-group-zone=ZONE_B
```
Use SSH to connect to the client VM in `REGION_B`.

```
gcloud compute ssh l7-ilb-client-b \
    --zone=ZONE_B
```
Send requests to the load-balanced IP address in the `REGION_B` region. The output is similar to the following:

```
{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:IP_ADDRESS2:443)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS2: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
```
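To restore the original configuration after the test, you can re-attach the removed backend, mirroring the earlier add-backend step:

```
gcloud compute backend-services add-backend BACKEND_SERVICE \
    --global \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=MAX_REQUEST_RATE_PER_ENDPOINT \
    --network-endpoint-group=neg2 \
    --network-endpoint-group-zone=ZONE_B
```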
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Configure DNS routing policies
If your clients are in multiple regions, you might want to make your cross-region internal Application Load Balancer accessible by using VIPs in these regions. This multi-region setup minimizes latency and network transit costs. In addition, it lets you set up a DNS-based, global, load balancing solution that provides resilience against regional outages. For more information, see Manage DNS routing policies and health checks.
gcloud
To create a DNS entry with a 30 second TTL, use the `gcloud dns record-sets create` command.

```
gcloud dns record-sets create DNS_ENTRY --ttl="30" \
    --type="A" --zone="service-zone" \
    --routing-policy-type="GEO" \
    --routing-policy-data="REGION_A=FWRULE_A@global;REGION_B=FWRULE_B@global" \
    --enable-health-checking
```
Replace the following:
- `DNS_ENTRY`: DNS or domain name of the record-set. For example, `service.example.com`.
- `REGION_A` and `REGION_B`: the regions where you have configured the load balancer
API
Create the DNS record by making a `POST` request to the `ResourceRecordSets.create` method. Replace `PROJECT_ID` with your project ID.

```
POST https://www.googleapis.com/dns/v1/projects/PROJECT_ID/managedZones/SERVICE_ZONE/rrsets

{
  "name": "DNS_ENTRY",
  "type": "A",
  "ttl": 30,
  "routingPolicy": {
    "geo": {
      "items": [
        {
          "location": "REGION_A",
          "healthCheckedTargets": {
            "internalLoadBalancers": [
              {
                "loadBalancerType": "globalL7ilb",
                "ipAddress": "IP_ADDRESS1",
                "port": "80",
                "ipProtocol": "tcp",
                "networkUrl": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/NETWORK",
                "project": "PROJECT_ID"
              }
            ]
          }
        },
        {
          "location": "REGION_B",
          "healthCheckedTargets": {
            "internalLoadBalancers": [
              {
                "loadBalancerType": "globalL7ilb",
                "ipAddress": "IP_ADDRESS2",
                "port": "80",
                "ipProtocol": "tcp",
                "networkUrl": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/NETWORK",
                "project": "PROJECT_ID"
              }
            ]
          }
        }
      ]
    }
  }
}
```
Update client HTTP keepalive timeout
The load balancer created in the previous steps has been configured with a default value for the client HTTP keepalive timeout. To update the client HTTP keepalive timeout, use the following instructions.
Console
In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer that you want to modify.
- Click Edit.
- Click Frontend configuration.
- Expand Advanced features. For HTTP keepalive timeout, enter a timeout value.
- Click Update.
- To review your changes, click Review and finalize, and then click Update.
gcloud
For an HTTP load balancer, update the target HTTP proxy by using the `gcloud compute target-http-proxies update` command:

```
gcloud compute target-http-proxies update TARGET_HTTP_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --global
```
For an HTTPS load balancer, update the target HTTPS proxy by using the `gcloud compute target-https-proxies update` command:

```
gcloud compute target-https-proxies update TARGET_HTTPS_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --global
```
Replace the following:
- `TARGET_HTTP_PROXY_NAME`: the name of the target HTTP proxy.
- `TARGET_HTTPS_PROXY_NAME`: the name of the target HTTPS proxy.
- `HTTP_KEEP_ALIVE_TIMEOUT_SEC`: the HTTP keepalive timeout value, from 5 to 600 seconds.
Enable outlier detection
You can enable outlier detection on global backend services to identify unhealthy serverless NEGs and reduce the number of requests sent to the unhealthy serverless NEGs.
Outlier detection is enabled on the backend service by using one of the following methods:
- The `consecutiveErrors` method (`outlierDetection.consecutiveErrors`), in which a `5xx` series HTTP status code qualifies as an error.
- The `consecutiveGatewayFailure` method (`outlierDetection.consecutiveGatewayFailure`), in which only the `502`, `503`, and `504` HTTP status codes qualify as errors.
Use the following steps to enable outlier detection for an existing backend service. Note that even after enabling outlier detection, some requests can be sent to the unhealthy service and return a `5xx` status code to the clients. To further reduce the error rate, you can configure more aggressive values for the outlier detection parameters. For more information, see the `outlierDetection` field.
Console
In the Google Cloud console, go to the Load balancing page.
Click the name of the load balancer whose backend service you want to edit.
On the Load balancer details page, click Edit.

On the Edit cross-region internal Application Load Balancer page, click Backend configuration.
On the Backend configuration page, click Edit for the backend service that you want to modify.

Scroll down and expand the Advanced configurations section.
In the Outlier detection section, select the Enable checkbox.
Click Edit to configure outlier detection.

Verify that the following options are configured with these values:

| Property | Value |
|---|---|
| Consecutive errors | 5 |
| Interval | 1000 |
| Base ejection time | 30000 |
| Max ejection percent | 50 |
| Enforcing consecutive errors | 100 |

In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP `5xx` status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.

Click Save.
To update the backend service, click Update.
To update the load balancer, on the Edit cross-region internal Application Load Balancer page, click Update.
gcloud
Export the backend service into a YAML file.

```
gcloud compute backend-services export BACKEND_SERVICE_NAME \
    --destination=BACKEND_SERVICE_NAME.yaml --global
```
Replace `BACKEND_SERVICE_NAME` with the name of the backend service.

Edit the YAML configuration of the backend service to add the fields for outlier detection, as highlighted in the following YAML configuration, in the `outlierDetection` section.

In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP `5xx` status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.

```
name: BACKEND_SERVICE_NAME
backends:
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/networkEndpointGroups/SERVERLESS_NEG_NAME
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/networkEndpointGroups/SERVERLESS_NEG_NAME_2
outlierDetection:
  baseEjectionTime:
    nanos: 0
    seconds: 30
  consecutiveErrors: 5
  enforcingConsecutiveErrors: 100
  interval:
    nanos: 0
    seconds: 1
  maxEjectionPercent: 50
port: 80
selfLink: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/BACKEND_SERVICE_NAME
sessionAffinity: NONE
timeoutSec: 30
...
```
Replace the following:
- `BACKEND_SERVICE_NAME`: the name of the backend service
- `PROJECT_ID`: the ID of your project
- `REGION_A` and `REGION_B`: the regions where the load balancer has been configured
- `SERVERLESS_NEG_NAME`: the name of the first serverless NEG
- `SERVERLESS_NEG_NAME_2`: the name of the second serverless NEG
Update the backend service by importing the latest configuration.

```
gcloud compute backend-services import BACKEND_SERVICE_NAME \
    --source=BACKEND_SERVICE_NAME.yaml --global
```
Outlier detection is now enabled on the backend service.
What's next
- Convert Application Load Balancer to IPv6
- Internal Application Load Balancer overview
- Proxy-only subnets for Envoy-based load balancers
- Manage certificates
- Clean up a load balancing setup