Deploy existing pipeline jobs to batch endpoints

APPLIES TO: Azure CLI ml extension v2 (current), Python SDK azure-ai-ml v2 (current)

Batch endpoints allow you to deploy pipeline components, providing a convenient way to operationalize pipelines in Azure Machine Learning. Although batch endpoints accept pipeline components as the unit of deployment, if you already have a pipeline job that runs successfully, Azure Machine Learning can accept that job as input to your batch endpoint and create the pipeline component automatically for you. In this article, you learn how to use an existing pipeline job as input for a batch deployment.

You'll learn to:

  • Create and run the pipeline job that you want to deploy
  • Create a batch deployment from the existing job
  • Test the deployment

About this example

In this example, we're going to deploy a pipeline consisting of a simple command job that prints "hello world!". Instead of registering the pipeline component before deployment, we indicate an existing pipeline job to use for deployment. Azure Machine Learning will then create the pipeline component automatically and deploy it as a batch endpoint pipeline component deployment.

The example in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy or paste YAML and other files, use the following commands to clone the repository and go to the folder for your coding language:

git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli

The files for this example are in:

cd endpoints/batch/deploy-pipelines/hello-batch

Prerequisites

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.

  • An Azure Machine Learning workspace. To create a workspace, see Manage Azure Machine Learning workspaces.

  • The following permissions in the Azure Machine Learning workspace:

    • For creating or managing batch endpoints and deployments: Use an Owner, Contributor, or custom role that has been assigned the Microsoft.MachineLearningServices/workspaces/batchEndpoints/* permissions.
    • For creating Azure Resource Manager deployments in the workspace resource group: Use an Owner, Contributor, or custom role that has been assigned the Microsoft.Resources/deployments/write permission in the resource group where the workspace is deployed.
  • The Azure Machine Learning CLI or the Azure Machine Learning SDK for Python:

    With the Azure CLI installed, run the following command to add the ml extension for Azure Machine Learning:

    az extension add -n ml
    

    Pipeline component deployments for batch endpoints are introduced in version 2.7 of the ml extension for the Azure CLI. Use the az extension update --name ml command to get the latest version.
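
    For example, you can check which version of the ml extension is installed, and update it if it's older than 2.7:

    az extension list --query "[?name=='ml'].version" -o tsv
    az extension update --name ml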


Connect to your workspace

The workspace is the top-level resource for Azure Machine Learning. It provides a centralized place to work with all artifacts you create when you use Azure Machine Learning. In this section, you connect to the workspace where you perform your deployment tasks.

In the following command, enter your subscription ID, workspace name, resource group name, and location:

az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
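
To verify that the defaults were applied, you can list them with the standard Azure CLI configuration command:

az configure --list-defaults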

Run the pipeline job you want to deploy

In this section, we begin by running the pipeline job that we want to deploy.

The following pipeline-job.yml file contains the configuration for the pipeline job:

pipeline-job.yml

$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline

experiment_name: hello-pipeline-batch
display_name: hello-pipeline-batch-job
description: This job demonstrates how to run a pipeline component in a pipeline job. You can use this example to test a component in a standalone job before deploying it in an endpoint.

compute: batch-cluster
component: hello-component/hello.yml
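
The pipeline job (and, later, the deployment) assumes a compute cluster named batch-cluster exists in the workspace. If you don't have one, you can create a small autoscaling cluster; the instance counts here are only an example:

az ml compute create --name batch-cluster --type amlcompute --min-instances 0 --max-instances 5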

Create the pipeline job:

JOB_NAME=$(az ml job create -f pipeline-job.yml --query name -o tsv)
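
Because the deployment is created from a job that ran successfully, it's a good idea to confirm the job finished before you continue. For example, you can stream its logs until it completes and then query its status:

az ml job stream --name $JOB_NAME
az ml job show --name $JOB_NAME --query status -o tsv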

Create a batch endpoint

Before we deploy the pipeline job, we need to create a batch endpoint to host the deployment.

  1. Provide a name for the endpoint. A batch endpoint's name needs to be unique in each region because the name is used to construct the invocation URI. To ensure uniqueness, append a unique suffix to the name specified in the following code.

    ENDPOINT_NAME="hello-batch"
    
  2. Configure the endpoint:

    The endpoint.yml file contains the endpoint's configuration.

    endpoint.yml

    $schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
    name: hello-batch
    description: A hello world endpoint for component deployments.
    auth_mode: aad_token
    
  3. Create the endpoint:

    az ml batch-endpoint create --name $ENDPOINT_NAME  -f endpoint.yml
    
  4. Query the endpoint URI:

    az ml batch-endpoint show --name $ENDPOINT_NAME
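
    The previous command returns the full endpoint definition. If you only want the invocation URI, you can filter the output; this assumes the scoring_uri property returned for batch endpoints in current API versions:

    az ml batch-endpoint show --name $ENDPOINT_NAME --query scoring_uri -o tsv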
    

Deploy the pipeline job

To deploy the pipeline component, we have to create a batch deployment from the existing job.

  1. We need to tell Azure Machine Learning the name of the job that we want to deploy. In our case, that job is indicated in the following variable:

    echo $JOB_NAME
    
  2. Configure the deployment.

    The deployment-from-job.yml file contains the deployment's configuration. Notice how we use the key job_definition instead of component to indicate that this deployment is created from a pipeline job:

    deployment-from-job.yml

    $schema: https://azuremlschemas.azureedge.net/latest/pipelineComponentBatchDeployment.schema.json
    name: hello-batch-from-job
    endpoint_name: hello-pipeline-batch
    type: pipeline
    job_definition: azureml:job_name_placeholder
    settings:
        continue_on_step_failure: false
        default_compute: batch-cluster
    

    Tip

    This configuration assumes you have a compute cluster named batch-cluster. You can replace this value with the name of your cluster.

  3. Create the deployment:

    Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.

    az ml batch-deployment create --endpoint $ENDPOINT_NAME --set job_definition=azureml:$JOB_NAME -f deployment-from-job.yml
    

    Tip

    Notice the use of --set job_definition=azureml:$JOB_NAME. Since job names are unique, --set is used here to replace the placeholder job name in the YAML file with the name of the job that you ran in your workspace.

  4. Your deployment is ready for use.
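
To confirm the deployment, and to explicitly set it as the endpoint's default in case your version of the CLI doesn't do so when the deployment is created, you can run the following commands. The deployment name hello-batch-from-job comes from deployment-from-job.yml:

az ml batch-deployment show --name hello-batch-from-job --endpoint-name $ENDPOINT_NAME
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=hello-batch-from-job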

Test the deployment

Once the deployment is created, it's ready to receive jobs. You can invoke the default deployment as follows:

JOB_NAME=$(az ml batch-endpoint invoke -n $ENDPOINT_NAME --query name -o tsv)
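
To invoke a specific deployment rather than the endpoint's default, pass its name explicitly. The following example targets the hello-batch-from-job deployment created earlier and assumes a version of the ml extension that supports the --deployment-name parameter:

JOB_NAME=$(az ml batch-endpoint invoke -n $ENDPOINT_NAME --deployment-name hello-batch-from-job --query name -o tsv)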

You can monitor the progress of the job and stream the logs using:

az ml job stream -n $JOB_NAME

Clean up resources

Once you're done, delete the associated resources from the workspace:

Run the following code to delete the batch endpoint and its underlying deployment. --yes is used to confirm the deletion.

az ml batch-endpoint delete -n $ENDPOINT_NAME --yes
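
If you created the batch-cluster compute cluster only for this example, you can optionally delete it as well:

az ml compute delete --name batch-cluster --yes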

Next steps