Quickstart: Deploy an Azure Nexus Kubernetes cluster by using an Azure Resource Manager template (ARM template)

  • Deploy an Azure Nexus Kubernetes cluster using an Azure Resource Manager template.

This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure Nexus Kubernetes cluster.

An Azure Resource Manager template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax. You describe your intended deployment without writing the sequence of programming commands to create the deployment.

Prerequisites

If you don't have an Azure subscription, create an Azure free account before you begin.

  • Install the latest version of the necessary Azure CLI extensions.

  • This article requires version 2.61.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.

  • If you have multiple Azure subscriptions, select the subscription in which the resources should be created and billed by using the az account set command.
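
    For example, the following command sets the active subscription (replace the placeholder with your subscription ID):

    az account set --subscription <subscription_id>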

  • Refer to the VM SKU table in the reference section for the list of supported VM SKUs.

  • Refer to the supported Kubernetes versions for the list of versions you can deploy.

  • Create a resource group using the az group create command. An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. The following example creates a resource group named myResourceGroup in the eastus location.

    az group create --name myResourceGroup --location eastus
    

    The following example output shows successful creation of the resource group:

    {
      "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup",
      "location": "eastus",
      "managedBy": null,
      "name": "myResourceGroup",
      "properties": {
        "provisioningState": "Succeeded"
      },
      "tags": null
    }
    
  • To deploy a Bicep file or ARM template, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a cluster, you need Microsoft.NetworkCloud/kubernetesclusters/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see Azure built-in roles.
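
    To check the role assignments you hold on the resource group, you can use the az role assignment list command. For example (adjust --assignee to your own user or service principal):

    az role assignment list --assignee <user-or-principal-id> --resource-group myResourceGroup --output table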

  • You need the custom location resource ID of your Azure Operator Nexus cluster.
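
    If you can access the resource group that contains the custom location, one way to look up its resource ID is the customlocation CLI extension (a sketch, assuming the extension is installed and the resource group name is known):

    az customlocation list --resource-group <resource_group> --query "[].id" --output tsv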

  • You need to create various networks according to your specific workload requirements, and it's essential to have the appropriate IP addresses available for your workloads. To ensure a smooth implementation, it's advisable to consult the relevant support teams for assistance.

  • This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see Kubernetes core concepts for Azure Kubernetes Service (AKS).

Review the template

Before deploying the Kubernetes template, let's review the content to understand its structure.

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "kubernetesClusterName": {
        "type": "string",
        "metadata": {
          "description": "The name of Nexus Kubernetes cluster"
        }
      },
      "location": {
        "type": "string",
        "metadata": {
          "description": "The Azure region where the cluster is to be deployed"
        },
        "defaultValue": "[resourceGroup().location]"
      },
      "extendedLocation": {
        "type": "string",
        "metadata": {
          "description": "The custom location of the Nexus instance"
        },
        "defaultValue": ""
      },
      "tags": {
        "type": "object",
        "metadata": {
          "description": "The metadata tags to be associated with the cluster resource"
        },
        "defaultValue": {}
      },
      "adminUsername": {
        "type": "string",
        "metadata": {
          "description": "The username for the administrative account on the cluster"
        },
        "defaultValue": "azureuser"
      },
      "adminGroupObjectIds": {
        "type": "array",
        "metadata": {
          "description": "The object IDs of Azure Active Directory (AAD) groups that will have administrative access to the cluster"
        },
        "defaultValue": []
      },
      "cniNetworkId": {
        "type": "string",
        "metadata": {
          "description": "The Azure Resource Manager (ARM) id of the network to be used as the Container Networking Interface (CNI) network"
        }
      },
      "cloudServicesNetworkId": {
        "type": "string",
        "metadata": {
          "description": "The ARM id of the network to be used for cloud services network"
        }
      },
      "podCidrs": {
        "type": "array",
        "metadata": {
          "description": "The CIDR blocks used for Nexus Kubernetes PODs in the cluster"
        },
        "defaultValue": ["10.244.0.0/16"]
      },
      "serviceCidrs": {
        "type": "array",
        "metadata": {
          "description": "The CIDR blocks used for k8s service in the cluster"
        },
        "defaultValue": ["10.96.0.0/16"]
      },
      "dnsServiceIp": {
        "type": "string",
        "metadata": {
          "description": "The IP address of the DNS service in the cluster"
        },
        "defaultValue": "10.96.0.10"
      },
      "agentPoolL2Networks": {
        "type": "array",
        "metadata": {
          "description": "The Layer 2 networks associated with the initial agent pool"
        },
        "defaultValue": []
        /*
          {
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
          }
        */
      },
      "agentPoolL3Networks": {
        "type": "array",
        "metadata": {
          "description": "The Layer 3 networks associated with the initial agent pool"
        },
        "defaultValue": []
        /*
          {
            "ipamEnabled": "True/False",
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
          }
        */
      },
      "agentPoolTrunkedNetworks": {
        "type": "array",
        "metadata": {
          "description": "The trunked networks associated with the initial agent pool"
        },
        "defaultValue": []
        /*
          {
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
          }
        */
      },
      "l2Networks": {
        "type": "array",
        "metadata": {
          "description": "The Layer 2 networks associated with the cluster"
        },
        "defaultValue": []
        /*
          {
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
          }
        */
      },
      "l3Networks": {
        "type": "array",
        "metadata": {
          "description": "The Layer 3 networks associated with the cluster"
        },
        "defaultValue": []
        /*
          {
            "ipamEnabled": "True/False",
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
          }
        */
      },
      "trunkedNetworks": {
        "type": "array",
        "metadata": {
          "description": "The trunked networks associated with the cluster"
        },
        "defaultValue": []
        /*
          {
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
          }
        */
      },
      "ipAddressPools": {
        "type": "array",
        "metadata": {
          "description": "The LoadBalancer IP address pools associated with the cluster"
        },
        "defaultValue": []
        /*
          {
            "addresses": [
              "string"
            ],
            "autoAssign": "True/False",
            "name": "sting",
            "onlyUseHostIps": "True/False"
          }
        */
      },
      "fabricPeeringEnabled": {
        "type": "string",
        "metadata": {
          "description": "The indicator to specify if the load balancer peers with the network fabric."
        },
        "defaultValue": "True"
      },
      "bgpAdvertisements": {
        "type": "array",
        "metadata": {
          "description": "The association of IP address pools to the communities and peers, allowing for announcement of IPs."
        },
        "defaultValue": []
        /*
          {
            "advertiseToFabric": "True/False",
            "communities": [
              "string"
            ],
            "ipAddressPools": [
              "string"
            ],
            "pools": [
              "string"
            ]
          }
        */
      },
      "bgpPeers": {
        "type": "array",
        "metadata": {
          "description": "The list of additional BgpPeer entities that the Kubernetes cluster will peer with. All peering must be explicitly defined."
        },
        "defaultValue": []
        /*
          {
            "bfdEnabled": "True/False",
            "bgpMultiHop": "True/False",
            "myAsn": 0-4294967295,
            "name": "string",
            "password": "string",
            "peerAddress": "string",
            "peerPort": 179
          }
        */
      },
      "kubernetesVersion": {
        "type": "string",
        "metadata": {
          "description": "The version of Kubernetes to be used in the Nexus Kubernetes cluster"
        },
        "defaultValue": "v1.27.1"
      },
      "controlPlaneCount": {
        "type": "int",
        "metadata": {
          "description": "The number of control plane nodes to be deployed in the cluster"
        },
        "defaultValue": 1
      },
      "controlPlaneZones": {
        "type": "array",
        "metadata": {
          "description": "The zones/racks used for placement of the control plane nodes"
        },
        "defaultValue": []
        /* array of strings Example: ["1", "2", "3"] */
      },
      "agentPoolZones": {
        "type": "array",
        "metadata": {
          "description": "The zones/racks used for placement of the agent pool nodes"
        },
        "defaultValue": []
        /* array of strings Example: ["1", "2", "3"] */
      },
      "controlPlaneVmSkuName": {
        "type": "string",
        "metadata": {
          "description": "The size of the control plane nodes"
        },
        "defaultValue": "NC_G6_28_v1"
      },
      "systemPoolNodeCount": {
        "type": "int",
        "metadata": {
          "description": "The number of worker nodes to be deployed in the initial agent pool"
        },
        "defaultValue": 1
      },
      "workerVmSkuName": {
        "type": "string",
        "metadata": {
          "description": "The size of the worker nodes"
        },
        "defaultValue": "NC_P10_56_v1"
      },
      "initialPoolAgentOptions": {
        "type": "object",
        "metadata": {
          "description": "The configurations for the initial agent pool"
        },
        "defaultValue": {}
        /*
          "hugepagesCount": int,
          "hugepagesSize": "2M/1G"
        */
      },
      "sshPublicKeys": {
        "type": "array",
        "metadata": {
          "description": "The cluster wide SSH public key that will be associated with the given user for secure remote login"
        },
        "defaultValue": []
        /*
          {
            "keyData": "ssh-rsa AAAAA...."
          },
          {
            "keyData": "ssh-rsa BBBBB...."
          }
        */
      },
      "controlPlaneSshKeys": {
        "type": "array",
        "metadata": {
          "description": "The control plane SSH public key that will be associated with the given user for secure remote login"
        },
        "defaultValue": []
        /*
          {
            "keyData": "ssh-rsa AAAAA...."
          },
          {
            "keyData": "ssh-rsa BBBBB...."
          }
        */
      },
      "agentPoolSshKeys": {
        "type": "array",
        "metadata": {
          "description": "The agent pool SSH public key that will be associated with the given user for secure remote login"
        },
        "defaultValue": []
        /*
          {
            "keyData": "ssh-rsa AAAAA...."
          },
          {
            "keyData": "ssh-rsa BBBBB...."
          }
        */
      },
      "labels": {
        "type": "array",
        "metadata": {
          "description": "The labels to assign to the nodes in the cluster for identification and organization"
        },
        "defaultValue": []
        /*
          {
            "key": "string",
            "value": "string"
          }
        */
      },
      "taints": {
        "type": "array",
        "metadata": {
          "description": "The taints to apply to the nodes in the cluster to restrict which pods can be scheduled on them"
        },
        "defaultValue": []
        /*
          {
            "key": "string",
            "value": "string:NoSchedule|PreferNoSchedule|NoExecute"
          }
        */
      }
    },
    "resources": [
      {
        "type": "Microsoft.NetworkCloud/kubernetesClusters",
        "apiVersion": "2024-07-01",
        "name": "[parameters('kubernetesClusterName')]",
        "location": "[parameters('location')]",
        "tags": "[parameters('tags')]",
        "extendedLocation": {
          "name": "[parameters('extendedLocation')]",
          "type": "CustomLocation"
        },
        "properties": {
          "kubernetesVersion": "[parameters('kubernetesVersion')]",
          "managedResourceGroupConfiguration": {
            "name": "[concat(uniqueString(resourceGroup().name), '-', parameters('kubernetesClusterName'))]",
            "location": "[parameters('location')]"
          },
          "aadConfiguration": {
            "adminGroupObjectIds": "[parameters('adminGroupObjectIds')]"
          },
          "administratorConfiguration": {
            "adminUsername": "[parameters('adminUsername')]",
            "sshPublicKeys": "[if(empty(parameters('sshPublicKeys')), createArray(), parameters('sshPublicKeys'))]"
          },
          "initialAgentPoolConfigurations": [
            {
              "name": "[concat(parameters('kubernetesClusterName'), '-nodepool-1')]",
              "administratorConfiguration": {
                "adminUsername": "[parameters('adminUsername')]",
                "sshPublicKeys": "[if(empty(parameters('agentPoolSshKeys')), createArray(), parameters('agentPoolSshKeys'))]"
              },
              "count": "[parameters('systemPoolNodeCount')]",
              "vmSkuName": "[parameters('workerVmSkuName')]",
              "mode": "System",
              "labels": "[if(empty(parameters('labels')), json('null'), parameters('labels'))]",
              "taints": "[if(empty(parameters('taints')), json('null'), parameters('taints'))]",
              "agentOptions": "[if(empty(parameters('initialPoolAgentOptions')), json('null'), parameters('initialPoolAgentOptions'))]",
              "attachedNetworkConfiguration": {
                "l2Networks": "[if(empty(parameters('agentPoolL2Networks')), json('null'), parameters('agentPoolL2Networks'))]",
                "l3Networks": "[if(empty(parameters('agentPoolL3Networks')), json('null'), parameters('agentPoolL3Networks'))]",
                "trunkedNetworks": "[if(empty(parameters('agentPoolTrunkedNetworks')), json('null'), parameters('agentPoolTrunkedNetworks'))]"
              },
              "availabilityZones": "[if(empty(parameters('agentPoolZones')), json('null'), parameters('agentPoolZones'))]",
              "upgradeSettings": {
                "maxSurge": "1"
              }
            }
          ],
          "controlPlaneNodeConfiguration": {
            "administratorConfiguration": {
              "adminUsername": "[parameters('adminUsername')]",
              "sshPublicKeys": "[if(empty(parameters('controlPlaneSshKeys')), createArray(), parameters('controlPlaneSshKeys'))]"
            },
            "count": "[parameters('controlPlaneCount')]",
            "vmSkuName": "[parameters('controlPlaneVmSkuName')]",
            "availabilityZones": "[if(empty(parameters('controlPlaneZones')), json('null'), parameters('controlPlaneZones'))]"
          },
          "networkConfiguration": {
            "cniNetworkId": "[parameters('cniNetworkId')]",
            "cloudServicesNetworkId": "[parameters('cloudServicesNetworkId')]",
            "dnsServiceIp": "[parameters('dnsServiceIp')]",
            "podCidrs": "[parameters('podCidrs')]",
            "serviceCidrs": "[parameters('serviceCidrs')]",
            "attachedNetworkConfiguration": {
              "l2Networks": "[if(empty(parameters('l2Networks')), json('null'), parameters('l2Networks'))]",
              "l3Networks": "[if(empty(parameters('l3Networks')), json('null'), parameters('l3Networks'))]",
              "trunkedNetworks": "[if(empty(parameters('trunkedNetworks')), json('null'), parameters('trunkedNetworks'))]"
            },
            "bgpServiceLoadBalancerConfiguration": {
              "ipAddressPools": "[if(empty(parameters('ipAddressPools')), json('null'), parameters('ipAddressPools'))]",
              "fabricPeeringEnabled": "[if(empty(parameters('fabricPeeringEnabled')), json('null'), parameters('fabricPeeringEnabled'))]",
              "bgpAdvertisements": "[if(empty(parameters('bgpAdvertisements')), json('null'), parameters('bgpAdvertisements'))]",
              "bgpPeers": "[if(empty(parameters('bgpPeers')), json('null'), parameters('bgpPeers'))]"
            }
          }
        }
      }
    ]
  }

Once you have reviewed and saved the template file named kubernetes-deploy.json, proceed to the next section to deploy the template.

Deploy the template

  1. Create a file named kubernetes-deploy-parameters.json and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "kubernetesClusterName":{
      "value": "myNexusK8sCluster"
    },
    "adminGroupObjectIds": {
      "value": [
        "00000000-0000-0000-0000-000000000000"
      ]
    },
    "cniNetworkId": {
      "value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
    },
    "cloudServicesNetworkId": {
      "value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
    },
    "extendedLocation": {
      "value": "/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
    },
    "location": {
      "value": "eastus"
    },
    "sshPublicKeys": {
      "value": [
        {
          "keyData": "ssh-rsa AAAAA...."
        },
        {
          "keyData": "ssh-rsa BBBBB...."
        }
      ]
    }
  }
}
  2. Deploy the template.
    az deployment group create \
      --resource-group myResourceGroup \
      --template-file kubernetes-deploy.json \
      --parameters @kubernetes-deploy-parameters.json
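
    Optionally, you can preview the changes the deployment would make by using the what-if operation, which accepts the same arguments:

    az deployment group what-if \
      --resource-group myResourceGroup \
      --template-file kubernetes-deploy.json \
      --parameters @kubernetes-deploy-parameters.json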

If there isn't enough capacity to deploy the requested cluster nodes, an error message appears. However, this message doesn't provide details about the available capacity; it only states that the cluster creation can't proceed due to insufficient capacity.

Note

The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but will eventually time out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.

Review deployed resources

After the deployment finishes, you can view the resources using the CLI or the Azure portal.

To view the details of the myNexusK8sCluster cluster in the myResourceGroup resource group, execute the following Azure CLI command:

az networkcloud kubernetescluster show \
  --name myNexusK8sCluster \
  --resource-group myResourceGroup
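
If you only need a specific field, you can append a JMESPath query. For example, the following command prints only the provisioning state (a sketch; the query path assumes the extension's flattened output, in which provisioningState appears at the top level):

az networkcloud kubernetescluster show \
  --name myNexusK8sCluster \
  --resource-group myResourceGroup \
  --query provisioningState \
  --output tsv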

Additionally, to get a list of agent pool names associated with the myNexusK8sCluster cluster in the myResourceGroup resource group, you can use the following Azure CLI command.

az networkcloud kubernetescluster agentpool list \
  --kubernetes-cluster-name myNexusK8sCluster \
  --resource-group myResourceGroup \
  --output table
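
To inspect a single agent pool in detail, you can use the agentpool show subcommand (a sketch; replace the pool name with one returned by the list command):

az networkcloud kubernetescluster agentpool show \
  --kubernetes-cluster-name myNexusK8sCluster \
  --name myNexusK8sCluster-nodepool-1 \
  --resource-group myResourceGroup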

Connect to the cluster

Now that the Nexus Kubernetes cluster has been successfully created and connected to Azure Arc, you can easily connect to it using the cluster connect feature. Cluster connect allows you to securely access and manage your cluster from anywhere, making it convenient for interactive development, debugging, and cluster administration tasks.

For more detailed information about available options, see Connect to an Azure Operator Nexus Kubernetes cluster.

Note

When you create a Nexus Kubernetes cluster, Nexus automatically creates a managed resource group dedicated to storing the cluster resources. Within this group, the Arc-connected cluster resource is established.

To access your cluster, you need to set up the cluster connect kubeconfig. After logging into Azure CLI with the relevant Microsoft Entra entity, you can obtain the kubeconfig necessary to communicate with the cluster from anywhere, even outside the firewall that surrounds it.

  1. Set the CLUSTER_NAME, RESOURCE_GROUP, and SUBSCRIPTION_ID variables.

    CLUSTER_NAME="myNexusK8sCluster"
    RESOURCE_GROUP="myResourceGroup"
    SUBSCRIPTION_ID=<set the correct subscription_id>
    
  2. Query the managed resource group name with the Azure CLI and store it in the MANAGED_RESOURCE_GROUP variable.

     az account set -s $SUBSCRIPTION_ID
     MANAGED_RESOURCE_GROUP=$(az networkcloud kubernetescluster show -n $CLUSTER_NAME -g $RESOURCE_GROUP --output tsv --query managedResourceGroupConfiguration.name)
    
  3. The following command starts a connectedk8s proxy that allows you to connect to the Kubernetes API server for the specified Nexus Kubernetes cluster.

    az connectedk8s proxy -n $CLUSTER_NAME -g $MANAGED_RESOURCE_GROUP &
    
  4. Use kubectl to send requests to the cluster:

    kubectl get pods -A
    

    You should now see a response from the cluster listing all pods across all namespaces.
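
    To list the cluster's nodes instead, you can run:

    kubectl get nodes -o wide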

Note

If you see the error message "Failed to post access token to client proxyFailed to connect to MSI", you may need to perform an az login to re-authenticate with Azure.

Add an agent pool

The cluster created in the previous step has a single node pool. Let's add a second agent pool using the ARM template. The following example creates an agent pool named myNexusK8sCluster-nodepool-2:

  1. Review the template.

Before deploying the agent pool template, let's review its content to understand the structure.

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "kubernetesClusterName": {
        "type": "string",
        "metadata": {
          "description": "The name of Nexus Kubernetes cluster"
        }
      },
      "location": {
        "type": "string",
        "defaultValue": "[resourceGroup().location]",
        "metadata": {
          "description": "The Azure region where the cluster is to be deployed"
        }
      },
      "extendedLocation": {
        "type": "string",
        "metadata": {
          "description": "The custom location of the Nexus instance"
        }
      },
      "adminUsername": {
        "type": "string",
        "defaultValue": "azureuser",
        "metadata": {
          "description": "The username for the administrative account on the cluster"
        }
      },
      "agentPoolSshKeys": {
        "type": "array",
        "metadata": {
          "description": "The agent pool SSH public key that will be associated with the given user for secure remote login"
        },
        "defaultValue": []
        /*
          {
            "keyData": "ssh-rsa AAAAA...."
          },
          {
            "keyData": "ssh-rsa BBBBB...."
          }
        */
      },
      "agentPoolNodeCount": {
        "type": "int",
        "defaultValue": 1,
        "metadata": {
          "description": "Number of nodes in the agent pool"
        }
      },
      "agentPoolName": {
        "type": "string",
        "defaultValue": "nodepool-2",
        "metadata": {
          "description": "Agent pool name"
        }
      },
      "agentVmSku": {
        "type": "string",
        "defaultValue": "NC_P10_56_v1",
        "metadata": {
          "description": "VM size of the agent nodes"
        }
      },
      "agentPoolZones": {
        "type": "array",
        "defaultValue": [],
        "metadata": {
          "description": "The zones/racks used for placement of the agent pool nodes"
        }
        /* array of strings Example: ["1", "2", "3"] */
      },
      "agentPoolMode": {
        "type": "string",
        "defaultValue": "User",
        "metadata": {
          "description": "Agent pool mode"
        }
      },
      "agentOptions": {
        "type": "object",
        "defaultValue": {},
        "metadata": {
          "description": "The configurations for the initial agent pool"
        }
        /*
          "hugepagesCount": int,
          "hugepagesSize": "2M/1G"
        */
      },
      "labels": {
        "type": "array",
        "defaultValue": [],
        "metadata": {
          "description": "The labels to assign to the nodes in the cluster for identification and organization"
        }
        /*
          {
            "key": "string",
            "value": "string"
          }
        */
      },
      "taints": {
        "type": "array",
        "defaultValue": [],
        "metadata": {
          "description": "The taints to apply to the nodes in the cluster to restrict which pods can be scheduled on them"
        }
        /*
          {
            "key": "string",
            "value": "string:NoSchedule|PreferNoSchedule|NoExecute"
          }
        */
      },
      "l2Networks": {
        "type": "array",
        "defaultValue": [],
        "metadata": {
          "description": "The Layer 2 networks to connect to the agent pool"
        }
        /*
          {
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
          }
        */
      },
      "l3Networks": {
        "type": "array",
        "defaultValue": [],
        "metadata": {
          "description": "The Layer 3 networks to connect to the agent pool"
        }
        /*
          {
            "ipamEnabled": "True/False",
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
          }
        */
      },
      "trunkedNetworks": {
        "type": "array",
        "defaultValue": [],
        "metadata": {
          "description": "The trunked networks to connect to the agent pool"
        }
        /*
          {
            "networkId": "string",
            "pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
          }
        */
      }
    },
    "resources": [
      {
        "type": "Microsoft.NetworkCloud/kubernetesClusters/agentpools",
        "apiVersion": "2024-07-01",
        "name": "[concat(parameters('kubernetesClusterName'), '/', parameters('kubernetesClusterName'), '-', parameters('agentPoolName'))]",
        "location": "[parameters('location')]",
        "extendedLocation": {
          "name": "[parameters('extendedLocation')]",
          "type": "CustomLocation"
        },
        "properties": {
          "administratorConfiguration": {
            "adminUsername": "[parameters('adminUsername')]",
            "sshPublicKeys": "[if(empty(parameters('agentPoolSshKeys')), json('null'), parameters('agentPoolSshKeys'))]"
          },
          "count": "[parameters('agentPoolNodeCount')]",
          "mode": "[parameters('agentPoolMode')]",
          "vmSkuName": "[parameters('agentVmSku')]",
          "labels": "[if(empty(parameters('labels')), json('null'), parameters('labels'))]",
          "taints": "[if(empty(parameters('taints')), json('null'), parameters('taints'))]",
          "agentOptions": "[if(empty(parameters('agentOptions')), json('null'), parameters('agentOptions'))]",
          "attachedNetworkConfiguration": {
            "l2Networks": "[if(empty(parameters('l2Networks')), json('null'), parameters('l2Networks'))]",
            "l3Networks": "[if(empty(parameters('l3Networks')), json('null'), parameters('l3Networks'))]",
            "trunkedNetworks": "[if(empty(parameters('trunkedNetworks')), json('null'), parameters('trunkedNetworks'))]"
          },
          "availabilityZones": "[if(empty(parameters('agentPoolZones')), json('null'), parameters('agentPoolZones'))]",
          "upgradeSettings": {
            "maxSurge": "1"
          }
        },
        "dependsOn": []
      }
    ]
}

Once you have reviewed and saved the template file named kubernetes-add-agentpool.json, proceed to the next section to deploy the template.

  2. Create a file named kubernetes-nodepool-parameters.json and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "kubernetesClusterName":{
        "value": "myNexusK8sCluster"
      },
      "extendedLocation": {
        "value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
      }
    }
}
  3. Deploy the template.
    az deployment group create \
      --resource-group myResourceGroup \
      --template-file kubernetes-add-agentpool.json \
      --parameters @kubernetes-nodepool-parameters.json

Note

You can add multiple agent pools during the initial creation of your cluster by using the initial agent pool configurations. However, if you want to add agent pools after the initial creation, you can use the preceding command to create additional agent pools for your Nexus Kubernetes cluster.

The following example output shows successful creation of the agent pool.

$ az networkcloud kubernetescluster agentpool list --kubernetes-cluster-name myNexusK8sCluster --resource-group myResourceGroup --output table
This command is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Count    Location    Mode    Name                          ProvisioningState    ResourceGroup    VmSkuName
-------  ----------  ------  ----------------------------  -------------------  ---------------  -----------
1        eastus      System  myNexusK8sCluster-nodepool-1  Succeeded            myResourceGroup  NC_P10_56_v1
1        eastus      User    myNexusK8sCluster-nodepool-2  Succeeded            myResourceGroup  NC_P10_56_v1

Clean up resources

When no longer needed, use the az group delete command to remove the resource group, the Kubernetes cluster, and all related resources except the Operator Nexus network resources.

az group delete --name myResourceGroup --yes --no-wait
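
Because the command runs with --no-wait, the deletion continues in the background. You can check whether the resource group still exists with:

az group exists --name myResourceGroup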

Next steps

You can now deploy the CNFs either directly via cluster connect or via Azure Operator Service Manager.