Convert proxy Network Load Balancer to IPv6

This document shows you how to convert proxy Network Load Balancer resources and backends from IPv4 only (single-stack) to IPv4 and IPv6 (dual-stack). The main advantage of using IPv6 is that a much larger pool of IP addresses can be allocated to your load balancer. You can configure the load balancer to terminate inbound IPv6 traffic and send this traffic over an IPv4 or IPv6 connection to your backends, based on your preference. For more information, see IPv6 for Application Load Balancers and proxy Network Load Balancers.

In this document, IPv4 only (single-stack) refers to the resources that use only IPv4 addresses, and IPv4 and IPv6 (dual-stack) refers to the resources that use both IPv4 and IPv6 addresses.

The instructions in this document apply to both TCP and SSL proxy Network Load Balancers.

Before you begin

Note the following conditions before you begin the conversion process:

  • You must be using one of the following types of proxy Network Load Balancers:

    • Global external proxy Network Load Balancer
    • Regional external proxy Network Load Balancer
    • Cross-region internal proxy Network Load Balancer
    • Regional internal proxy Network Load Balancer

    Classic proxy Network Load Balancers don't support dual-stack backends or subnets. For more information about IPv6 support, see IPv6 for Application Load Balancers and proxy Network Load Balancers.

  • Your load balancer is using either VM instance group backends or zonal network endpoint groups (NEGs) with GCE_VM_IP_PORT endpoints. Other backend types don't have dual-stack support.

Additionally, the conversion process differs based on the type of load balancer.

  • For global external proxy Network Load Balancers, you convert the backends to dual-stack and you create an IPv6 forwarding rule that can handle incoming IPv6 traffic.

  • For cross-region internal proxy Network Load Balancers, regional external proxy Network Load Balancers, and regional internal proxy Network Load Balancers, you only convert the backends to dual-stack. IPv6 forwarding rules aren't supported for these load balancers.

For information about how to set up global external proxy Network Load Balancers, see the setup documentation for that load balancer.

Identify the resources to convert

Note the names of the resources that your load balancer is associated with. You need to provide these names later.

  1. To list all the subnets, use the gcloud compute networks subnets list command:

    gcloud compute networks subnets list
    

    Note the name of the IPv4-only (single-stack) subnet that you want to convert to dual-stack. This name is referred to later as SUBNET. The VPC network is referred to later as NETWORK.

  2. To list all the backend services, use the gcloud compute backend-services list command:

    gcloud compute backend-services list
    

    Note the name of the backend service to convert to dual-stack. This name is referred to later as BACKEND_SERVICE.

  3. If you already have a load balancer, to view the IP stack type of your backends, use the gcloud compute instances list command:

    gcloud compute instances list \
        --format="table(
          name,
          zone.basename(),
          networkInterfaces[].stackType.notnull().list(),
          networkInterfaces[].ipv6AccessConfigs[0].externalIpv6.notnull().list():label=EXTERNAL_IPV6,
          networkInterfaces[].ipv6Address.notnull().list():label=INTERNAL_IPV6)"
    
  4. To list all the VM instances and instance templates, use the gcloud compute instances list command and the gcloud compute instance-templates list command:

    gcloud compute instances list
    
    gcloud compute instance-templates list
    

    Note the names of the instances and instance templates to convert to dual-stack. These names are referred to later as VM_INSTANCE and INSTANCE_TEMPLATES.

  5. To list all the instance groups, use the gcloud compute instance-groups list command:

    gcloud compute instance-groups list
    

    Note the name of the instance group to convert to dual-stack. This name is referred to later as INSTANCE_GROUP.

  6. To list all the zonal NEGs, use the gcloud compute network-endpoint-groups list command:

    gcloud compute network-endpoint-groups list
    

    Note the name of the zonal network endpoint group to convert to dual-stack. This name is referred to later as ZONAL_NEG.

  7. To list all the target SSL proxies, use the gcloud compute target-ssl-proxies list command:

    gcloud compute target-ssl-proxies list
    

    Note the name of the target proxy associated with your load balancer. This name is referred to later as TARGET_PROXY.

  8. To list all the target TCP proxies, use the gcloud compute target-tcp-proxies list command:

    gcloud compute target-tcp-proxies list
    

    Note the name of the target proxy associated with your load balancer. This name is referred to later as TARGET_PROXY.

Convert from single-stack to dual-stack backends

This section shows you how to convert your load balancer resources and backends from IPv4 only (single-stack) addresses to IPv4 and IPv6 (dual-stack) addresses.

Update the subnet

Dual-stack subnets are supported only on custom mode VPC networks. They are not supported on auto mode VPC networks or legacy networks. Although auto mode VPC networks can be useful for early exploration, custom mode VPC networks are better suited to most production environments. We recommend that you use custom mode VPC networks.

To update the VPC to the dual-stack setting, follow these steps:

  1. If you are using an auto mode VPC network, you must first convert the auto mode VPC network to custom mode.

    If you are using the default network, you must convert it to a custom mode VPC network.

  2. To enable IPv6, see Change a subnet's stack type to dual stack. A gcloud sketch of this step follows this list.

    Make sure that the IPv6 access type of the subnet is set to External.

  3. Optional: If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:

    1. For VPC network ULA internal IPv6 range, select Enabled.
    2. For Allocate internal IPv6 range, select Automatically or Manually.

      If you select Manually, enter a /48 range from within the fd20::/20 range. If the range is in use, you are prompted to provide a different range.
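
For reference, a minimal gcloud sketch of the subnet update in step 2 might look like the following, assuming the SUBNET and REGION names noted earlier; it uses the same subnets update command shown later for the proxy-only subnet:

gcloud compute networks subnets update SUBNET \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=EXTERNAL \
    --region=REGION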

Update the proxy-only subnet

If you are using an Envoy-based load balancer, we recommend that you change the proxy-only subnet's stack type to dual stack. For information about load balancers that support proxy-only subnets, see Supported load balancers.

To change the proxy-only subnet's stack type to dual stack, do the following:

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. To view the VPC network details page, click the name of a network.

  3. Click the Subnets tab.

  4. In the Reserved proxy-only subnets for load balancing section, click the name of the proxy-only subnet that you want to modify.

  5. On the Subnet details page, click Edit.

  6. For IP stack type, select IPv4 and IPv6 (dual-stack). Set the IPv6 access type to Internal.

  7. Click Save.

gcloud

Use the subnets update command.

gcloud compute networks subnets update PROXY_ONLY_SUBNET \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=INTERNAL \
    --region=REGION

Replace the following:

  • PROXY_ONLY_SUBNET: the name of the proxy-only subnet.
  • REGION: the region of the subnet.
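
To confirm the change, you can describe the proxy-only subnet and check its stack type and IPv6 access type; a minimal check might look like this:

gcloud compute networks subnets describe PROXY_ONLY_SUBNET \
    --region=REGION \
    --format="value(stackType,ipv6AccessType)"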

Update the VM instance or templates

You can configure IPv6 addresses on a VM instance if the subnet that the VM is connected to has an IPv6 range configured. Only the following backends can support IPv6 addresses:

  • Instance group backends: One or more managed, unmanaged, or a combination of managed and unmanaged instance group backends.
  • Zonal NEGs: One or more GCE_VM_IP_PORT type zonal NEGs.

Update VM instances

You cannot edit VM instances that are part of a managed or an unmanaged instance group. To update the VM instances to dual stack, follow these steps:

  1. Delete specific instances from a group
  2. Create a dual-stack VM
  3. Create instances with specific names in MIGs
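
After the VMs are recreated, you can verify their stack type; for example, using the VM_INSTANCE name noted earlier and a hypothetical ZONE placeholder for the VM's zone:

gcloud compute instances describe VM_INSTANCE \
    --zone=ZONE \
    --format="value(networkInterfaces[0].stackType)"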

Update VM instance templates

You can't update an existing instance template. If you need to make changes, you can create another template with similar properties. To update the VM instance templates to dual stack, follow these steps:

Console

  1. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click the instance template that you want to copy and update.
    2. Click Create similar.
    3. Expand the Advanced options section.
    4. For Network tags, enter allow-health-check-ipv6.
    5. In the Network interfaces section, click Add a network interface.
    6. In the Network list, select the custom mode VPC network.
    7. In the Subnetwork list, select SUBNET.
    8. For IP stack type, select IPv4 and IPv6 (dual-stack).
    9. Click Create.
  2. Start a basic rolling update on the managed instance group (MIG) associated with the load balancer.
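
If you prefer the gcloud CLI, a minimal sketch of the same flow might look like the following. The template name dual-stack-template and the ZONE placeholder are hypothetical; adjust the image, tags, and other properties to match your existing template:

gcloud compute instance-templates create dual-stack-template \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-health-check-ipv6 \
    --region=REGION \
    --subnet=SUBNET \
    --stack-type=IPV4_IPV6

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP \
    --version=template=dual-stack-template \
    --zone=ZONE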

Update the zonal NEG

Zonal NEG endpoints cannot be edited. You must delete the IPv4 endpoints and create a new dual-stack endpoint with both IPv4 and IPv6 addresses.

To set up a zonal NEG (with GCE_VM_IP_PORT type endpoints) in the REGION_A region, first create the VMs in the GCP_NEG_ZONE zone. Then add the VM network endpoints to the zonal NEG.

Create VMs

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set the Name to vm-a1.

  4. For the Region, choose REGION_A, and choose any value for the Zone field. This zone is referred to as GCP_NEG_ZONE in this procedure.

  5. In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.

  6. Expand the Advanced options section and make the following changes:

    • Expand the Networking section.
    • In the Network tags field, enter allow-health-check.
    • In the Network interfaces section, make the following changes:
      • Network: NETWORK
      • Subnet: SUBNET
      • IP stack type: IPv4 and IPv6 (dual-stack)
    • Click Done.
    • Click Management. In the Startup script field, copy and paste the following script contents.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
      
  7. Click Create.

  8. Repeat the following steps to create a second VM, using the following name and zone combination:

    • Name: vm-a2, zone: GCP_NEG_ZONE

gcloud

Create the VMs by running the following command two times, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.

  • VM_NAME of vm-a1 and any GCP_NEG_ZONE zone of your choice.
  • VM_NAME of vm-a2 and the same GCP_NEG_ZONE zone.

    gcloud compute instances create VM_NAME \
        --zone=GCP_NEG_ZONE \
        --stack-type=IPV4_IPV6 \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-health-check \
        --subnet=SUBNET \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'
    

Add endpoints to the zonal NEG

Console

To add endpoints to the zonal NEG:

  1. In the Google Cloud console, go to the Network endpoint groups page.

    Go to Network endpoint groups

  2. In the Name list, click the name of the network endpoint group (ZONAL_NEG). You see the Network endpoint group details page.

  3. In the Network endpoints in this group section, select the previously created NEG endpoint. Click Remove endpoint.

  4. In the Network endpoints in this group section, click Add network endpoint.

  5. Select the VM instance.

  6. In the Network interface section, the name, zone, and subnet of the VM are displayed.

  7. In the IPv4 address field, enter the IPv4 address of the new network endpoint.

  8. In the IPv6 address field, enter the IPv6 address of the new network endpoint.

  9. Select the Port type.

    1. If you select Default, the endpoint uses the default port 80 for all endpoints in the network endpoint group. This is sufficient for this example because the Apache server serves requests on port 80.
    2. If you select Custom, enter the Port number for the endpoint to use.
  10. To add more endpoints, click Add network endpoint and repeat the previous steps.

  11. After you add all the endpoints, click Create.

gcloud

  1. Add endpoints (GCE_VM_IP_PORT endpoints) to ZONAL_NEG.

    gcloud compute network-endpoint-groups update ZONAL_NEG \
        --zone=GCP_NEG_ZONE \
        --add-endpoint='instance=vm-a1,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80' \
        --add-endpoint='instance=vm-a2,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80'
    

Replace the following:

  • IPv4_ADDRESS: the IPv4 address of the network endpoint. The IPv4 address must belong to a VM in Compute Engine (either the primary IP address or part of an alias IP range). If the IP address is not specified, the primary IPv4 address of the VM instance in the network that the network endpoint group belongs to is used.

  • IPv6_ADDRESS: the IPv6 address of the network endpoint. The IPv6 address must belong to a VM instance in the network that the network endpoint group belongs to (external IPv6 address).
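
To confirm that both the IPv4 and IPv6 addresses were recorded for each endpoint, you can list the endpoints in the group:

gcloud compute network-endpoint-groups list-network-endpoints ZONAL_NEG \
    --zone=GCP_NEG_ZONE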

Create a firewall rule for IPv6 health check probes

You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. For more information, see probe IP ranges.

Ensure that the ingress rule is applicable to the instances being load balanced and that it allows traffic from the Google Cloud health checking systems. This example uses the target tag allow-health-check-ipv6 to identify the VM instances to which it applies.

Without this firewall rule, the default deny ingress rule blocks incoming IPv6 traffic to the backend instances.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. To allow IPv6 health check traffic, click Create firewall rule and enter the following information:

    • Name: fw-allow-lb-access-ipv6
    • Network: NETWORK
    • Priority: 1000
    • Direction of traffic: ingress
    • Targets: Specified target tags
    • Target tags: allow-health-check-ipv6
    • Source filter: IPv6 ranges
    • Source IPv6 ranges:

      • For global external Application Load Balancer and global external proxy Network Load Balancer, enter 2600:2d00:1:b029::/64,2600:2d00:1:1::/64

      • For cross-region internal Application Load Balancer, regional external Application Load Balancer, regional internal Application Load Balancer, cross-region internal proxy Network Load Balancer, regional external proxy Network Load Balancer, and regional internal proxy Network Load Balancer, enter 2600:2d00:1:b029::/64

    • Protocols and ports: Allow all

  3. Click Create.

gcloud

  1. Create the fw-allow-lb-access-ipv6 firewall rule to allow communication with the subnet.

    For global external Application Load Balancer and global external proxy Network Load Balancer, use the following command:

    gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64,2600:2d00:1:1::/64 \
        --rules=all
    

    For cross-region internal Application Load Balancer, regional external Application Load Balancer, regional internal Application Load Balancer, cross-region internal proxy Network Load Balancer, regional external proxy Network Load Balancer, and regional internal proxy Network Load Balancer, use the following command:

    gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64 \
        --rules=all
    

Create a firewall rule for the proxy-only subnet

If you are using a regional external proxy Network Load Balancer or an internal proxy Network Load Balancer, you must update the ingress firewall rule fw-allow-lb-access-ipv6 to allow traffic from the proxy-only subnet to the backends.

To get the IPv6 address range of the proxy-only subnet, run the following command:

gcloud compute networks subnets describe PROXY_ONLY_SUBNET \
    --region=REGION \
    --format="value(internalIpv6Prefix)"

Note the internal IPv6 address range; this range is later referred to as IPV6_PROXY_ONLY_SUBNET_RANGE.

To update the firewall rule fw-allow-lb-access-ipv6 for proxy-only subnet, do the following:

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. In the VPC firewall rules panel, click fw-allow-lb-access-ipv6, and then click Edit.

    • Source IPv6 ranges: 2600:2d00:1:b029::/64, IPV6_PROXY_ONLY_SUBNET_RANGE
  3. Click Save.

gcloud

  1. Update the fw-allow-lb-access-ipv6 firewall rule to allow communication with the proxy-only subnet:

    gcloud compute firewall-rules update fw-allow-lb-access-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64,IPV6_PROXY_ONLY_SUBNET_RANGE
    
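To verify the updated rule, you can describe it and check its source ranges; a minimal check might look like this:

gcloud compute firewall-rules describe fw-allow-lb-access-ipv6 \
    --format="value(sourceRanges)"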

Update the backend service and create an IPv6 forwarding rule

This section provides instructions to update the backend service with dual-stack backends and create an IPv6 forwarding rule.

Note that the IPv6 forwarding rule can be created only for global external proxy Network Load Balancers. IPv6 forwarding rules aren't supported for cross-region internal proxy Network Load Balancers, regional external proxy Network Load Balancers, and regional internal proxy Network Load Balancers.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. Click Edit.

Configure the backend service for IPv6

  1. Click Backend configuration.
  2. For Backend type, select Zonal network endpoint group.
  3. In the IP address selection policy list, select Prefer IPv6.
  4. In the Protocol field:
    • For TCP proxy, select TCP.
    • For SSL proxy, select SSL.
  5. For zonal NEGs:
    1. In the Backends section, click Add a backend.
    2. In the New Backend panel, do the following:
      • In the network endpoint group list, select ZONAL_NEG.
      • In the Maximum connections field, enter 10.
  6. For instance groups: if you have already updated the VM instances or instance templates to dual stack, no further changes are needed here.
  7. Click Done.
  8. In the Health check list, select an HTTP health check.

Configure the IPv6 forwarding rule

IPv6 forwarding rules aren't supported for cross-region internal proxy Network Load Balancers, regional external proxy Network Load Balancers, and regional internal proxy Network Load Balancers.

  1. Click Frontend configuration.
  2. Click Add frontend IP and port.
  3. In the Name field, enter a name for the forwarding rule.
  4. In the Protocol field:
    • For TCP proxy, select TCP.
    • For SSL proxy, select SSL.
  5. Set IP version to IPv6.
  6. For SSL proxy, in the Certificates list, select a certificate.
  7. Click Done.
  8. Click Update.

gcloud

  1. Add the dual-stack zonal NEGs as the backend to the backend service.

    global

    For the global external proxy Network Load Balancer, use the command:

     gcloud compute backend-services add-backend BACKEND_SERVICE \
         --network-endpoint-group=ZONAL_NEG \
         --max-connections-per-endpoint=10 \
         --global
    

    cross-region

    For the cross-region internal proxy Network Load Balancer, use the command:

     gcloud compute backend-services add-backend BACKEND_SERVICE \
         --network-endpoint-group=ZONAL_NEG \
         --max-connections-per-endpoint=10 \
         --global
    

    regional

    For the regional external proxy Network Load Balancer and the regional internal proxy Network Load Balancer, use the command:

     gcloud compute backend-services add-backend BACKEND_SERVICE \
         --network-endpoint-group=ZONAL_NEG \
         --max-connections-per-endpoint=10 \
         --region=REGION
    
  2. Add the dual-stack instance group as the backend to the backend service. If you have already updated the VM instances or instance templates to dual stack, no further action is needed.

  3. For global external proxy Network Load Balancers only: to create the IPv6 forwarding rule with a target SSL proxy, use the following command:

    gcloud compute forwarding-rules create FORWARDING_RULE_IPV6 \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --network-tier=PREMIUM \
       --address=lb-ipv6-1 \
       --global \
       --target-ssl-proxy=TARGET_PROXY \
       --ports=80
    

    To create the IPv6 forwarding rule for your global external proxy Network Load Balancer with a target TCP proxy, use the following command:

    gcloud compute forwarding-rules create FORWARDING_RULE_IPV6 \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --network-tier=PREMIUM \
       --ip-version=IPV6 \
       --global \
       --target-tcp-proxy=TARGET_PROXY \
       --ports=80
    
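To confirm that the new forwarding rule received an IPv6 address, you can describe it; a minimal check might look like this:

gcloud compute forwarding-rules describe FORWARDING_RULE_IPV6 \
    --global \
    --format="value(IPAddress,ipVersion)"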

Configure the IP address selection policy

This step is optional. After you have converted your resources and backends to dual-stack, you can use the IP address selection policy to specify the traffic type that is sent from the backend service to your backends.

Replace IP_ADDRESS_SELECTION_POLICY with any of the following values:

  • Only IPv4: Only send IPv4 traffic to the backends of the backend service, regardless of traffic from the client to the GFE. Only IPv4 health checks are used to check the health of the backends.

  • Prefer IPv6: Prioritize the backend's IPv6 connection over the IPv4 connection (provided there is a healthy backend with IPv6 addresses).

    The health checks periodically monitor the backends' IPv6 and IPv4 connections. The GFE first attempts the IPv6 connection; if the IPv6 connection is broken or slow, the GFE uses happy eyeballs to fall back and connect to IPv4.

    Even if one of the IPv6 or IPv4 connections is unhealthy, the backend is still treated as healthy, and both connections can be tried by the GFE, with happy eyeballs ultimately selecting which one to use.

  • Only IPv6: Only send IPv6 traffic to the backends of the backend service, regardless of traffic from the client to the proxy. Only IPv6 health checks are used to check the health of the backends.

There is no validation to check if the backend traffic type matches the IP address selection policy. For example, if you have IPv4-only backends and select Only IPv6 as the IP address selection policy, the configuration results in unhealthy backends because traffic fails to reach those backends and the HTTP 503 response code is returned to the clients.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. Click Edit.

  4. Click Backend configuration.

  5. In the Backend service field, select BACKEND_SERVICE.

  6. The Backend type must be Zonal network endpoint group or Instance group.

  7. In the IP address selection policy list, select IP_ADDRESS_SELECTION_POLICY.

  8. Click Done.

gcloud

Update the IP address selection policy for the backend service:

global

For the global external proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=PROTOCOL \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --global

Replace PROTOCOL with TCP for a TCP proxy or SSL for an SSL proxy.

cross-region

For the cross-region internal proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --global

regional

For the regional external proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=TCP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --region=REGION

For the regional internal proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --region=REGION
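
To confirm the setting, you can describe the backend service. This sketch assumes that the underlying field is named ipAddressSelectionPolicy, matching the flag name; use --region=REGION instead of --global for regional backend services:

gcloud compute backend-services describe BACKEND_SERVICE_IPV6 \
    --global \
    --format="value(ipAddressSelectionPolicy)"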

Test your load balancer

Verify that all required resources are updated to dual stack. After you update all the resources, traffic should automatically flow to the backends. You can check the logs to verify that the conversion is complete.

Test the load balancer to confirm that the conversion is successful and the incoming traffic is reaching the backends as expected.

Look up the load balancer IP addresses

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. In the Frontend section, two load balancer IP addresses are displayed. In this procedure, the IPv4 address is referred to as IP_ADDRESS_IPV4 and the IPv6 address is referred to as IP_ADDRESS_IPV6.

  4. In the Backends section, when the IP address selection policy is Prefer IPv6, two health check statuses are displayed for the backends.
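
If you prefer the gcloud CLI, you can list the forwarding rules and read their addresses instead; for example:

gcloud compute forwarding-rules list \
    --format="table(name,IPAddress,ipVersion,target)"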

Send traffic to the load balancer

In this example, requests from the curl command are distributed randomly to the backends.

For external load balancers

  1. Repeat the following commands a few times until you see all the backend VMs responding:

    curl -m1 IP_ADDRESS_IPV4:PORT
    
    curl -m1 IP_ADDRESS_IPV6:PORT
    

    For example, if the IPv6 address is [fd20:1db0:b882:802:0:46:0:0]:80, the command looks similar to this:

    curl -m1 [fd20:1db0:b882:802:0:46:0:0]:80
    
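To automate the repetition, a small shell loop works; for example, using the same IP_ADDRESS_IPV4, IP_ADDRESS_IPV6, and PORT placeholders:

for i in $(seq 1 10); do
  curl -m1 IP_ADDRESS_IPV4:PORT
  curl -m1 "[IP_ADDRESS_IPV6]:PORT"
done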

For internal load balancers

  1. Create a test client VM in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone.

    gcloud compute instances create client-vm \
        --zone=ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh \
        --subnet=SUBNET
    
  2. Use SSH to connect to the client instance.

    gcloud compute ssh client-vm \
       --zone=ZONE
    
  3. Repeat the following commands a few times until you see all the backend VMs responding:

    curl -m1 IP_ADDRESS_IPV4:PORT
    
    curl -m1 IP_ADDRESS_IPV6:PORT
    

    For example, if the IPv6 address is [fd20:1db0:b882:802:0:46:0:0]:80, the command looks similar to this:

    curl -m1 [fd20:1db0:b882:802:0:46:0:0]:80
    

Check the logs

Every log entry captures the destination IPv4 or IPv6 address of the backend. Because the backends are dual stack, it is important to observe which IP address the load balancer uses to reach the backend.

You can check whether traffic is going to IPv6 or falling back to IPv4 by viewing the logs.

The logs contain the backend_ip field associated with the backend. By examining the logs and checking whether backend_ip is an IPv4 or IPv6 address, you can confirm which address is used.
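
As a starting point for reading these logs with the gcloud CLI, the following sketch prints the backend_ip field from recent entries. The tcp_ssl_proxy_rule resource type in the filter is an assumption that applies to proxy load balancer logs; adjust the filter to the resource type that your load balancer's logs actually use:

gcloud logging read \
    'resource.type="tcp_ssl_proxy_rule" AND jsonPayload.backend_ip:*' \
    --limit=10 \
    --format="value(jsonPayload.backend_ip)"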