PipelineJob Class
Pipeline job.
You should not instantiate this class directly. Instead, you should use the @pipeline decorator to create a PipelineJob.
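For reference, a minimal sketch of the decorator route. The component function, its input/output names, and the file paths here are hypothetical placeholders, assuming a component has already been defined in a local YAML file:

```python
from azure.ai.ml import Input, load_component
from azure.ai.ml.dsl import pipeline

# Hypothetical component loaded from a local YAML definition.
component_func = load_component(source="./component.yml")

@pipeline(default_compute="cpu-cluster")
def sample_pipeline(pipeline_input: Input):
    # Each component function call adds a node to the pipeline.
    node = component_func(component_in_path=pipeline_input)
    return {"pipeline_output": node.outputs.component_out_path}

# Calling the decorated function builds and returns a PipelineJob.
pipeline_job = sample_pipeline(
    pipeline_input=Input(type="uri_file", path="./data/sample.csv")
)
```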
Inheritance
- azure.ai.ml.entities._job.job.Job
- azure.ai.ml.entities._mixins.YamlTranslatableMixin
- azure.ai.ml.entities._job.pipeline._io.mixin.PipelineJobIOMixin
- azure.ai.ml.entities._validation.path_aware_schema.PathAwareSchemaValidatableMixin
Constructor
PipelineJob(*, component: str | PipelineComponent | Component | None = None, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict[str, Output] | None = None, name: str | None = None, description: str | None = None, display_name: str | None = None, experiment_name: str | None = None, jobs: Dict[str, BaseNode] | None = None, settings: PipelineJobSettings | None = None, identity: ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, compute: str | None = None, tags: Dict[str, str] | None = None, **kwargs: Any)
Keyword-Only Parameters

Name | Description
---|---
component | Pipeline component version. The field is mutually exclusive with 'jobs'. Defaults to None.
inputs | Inputs to the pipeline job. Defaults to None.
outputs | Outputs of the pipeline job. Defaults to None.
name | Name of the PipelineJob. Defaults to None.
description | Description of the pipeline job. Defaults to None.
display_name | Display name of the pipeline job. Defaults to None.
experiment_name | Name of the experiment the job will be created under. If None is provided, the experiment will be set to the current directory. Defaults to None.
jobs | Mapping of pipeline component node name to component object. Defaults to None.
settings | Settings of the pipeline job. Defaults to None.
identity | Identity that the training job will use while running on compute. Defaults to None.
compute | Compute target name of the built pipeline. Defaults to None.
tags | Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
kwargs | A dictionary of additional configuration parameters. Defaults to None.
Examples
Shows how to create a pipeline using this class.
```python
from azure.ai.ml.entities import PipelineJob, PipelineJobSettings

# `component_func` and `uri_file_input` are assumed to be defined earlier,
# e.g. a component loaded with load_component and an Input pointing at a file.
pipeline_job = PipelineJob(
    description="test pipeline job",
    tags={},
    display_name="test display name",
    experiment_name="pipeline_job_samples",
    properties={},
    settings=PipelineJobSettings(force_rerun=True, default_compute="cpu-cluster"),
    jobs={"component1": component_func(component_in_number=1.0, component_in_path=uri_file_input)},
)
ml_client.jobs.create_or_update(pipeline_job)
```
Methods

Name | Description
---|---
dump | Dumps the job content into a file in YAML format.
dump
Dumps the job content into a file in YAML format.
dump(dest: str | PathLike | IO, **kwargs: Any) -> None
Parameters

Name | Description
---|---
dest | Required. The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
Exceptions

Type | Description
---|---
FileExistsError | Raised if dest is a file path and the file already exists.
IOError | Raised if dest is an open file and the file is not writable.
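A short usage sketch (the destination file name is arbitrary):

```python
# Serialize the pipeline job definition to a local YAML file.
pipeline_job.dump("./pipeline_job.yml")
```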
Attributes
base_path
The base path of the resource.
creation_context
The creation context of the resource.
Returns
Type | Description |
---|---|
The creation metadata for the resource. |
id
The resource ID.
Returns
Type | Description |
---|---|
The global ID of the resource, an Azure Resource Manager (ARM) ID. |
inputs
Inputs of the pipeline job.
Returns
Type | Description |
---|---|
Inputs of the pipeline job. |
jobs
Jobs of the pipeline job, as a mapping of node name to node.
log_files
Job output files.
Returns
Type | Description |
---|---|
The dictionary of log names and URLs. |
outputs
Outputs of the pipeline job.
Returns
Type | Description |
---|---|
Outputs of the pipeline job. |
settings
Settings of the pipeline job.
Returns
Type | Description |
---|---|
Settings of the pipeline job. |
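For illustration, a small sketch of reading and adjusting these attributes on a built job; the settings field names follow the PipelineJobSettings example above:

```python
# Inspect the pipeline-level inputs and outputs of a built job.
print(pipeline_job.inputs)   # mapping of input name -> bound input
print(pipeline_job.outputs)  # mapping of output name -> Output

# PipelineJobSettings fields can be adjusted before submission.
pipeline_job.settings.force_rerun = False
pipeline_job.settings.default_compute = "cpu-cluster"
```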
status
The status of the job.
Common values returned include "Running", "Completed", and "Failed". All possible values are:

- NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
- Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
- Provisioning - On-demand compute is being created for a given job submission.
- Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
- Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
- Running - The job has started to run on the compute target.
- Finalizing - User code execution has completed, and the run is in post-processing stages.
- CancelRequested - Cancellation has been requested for the job.
- Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
- Failed - The run failed. Usually the Error property on a run will provide details as to why.
- Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
- NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
Returns
Type | Description |
---|---|
Status of the job. |
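Status is populated on jobs returned from the service. A minimal sketch, assuming an authenticated `ml_client`:

```python
# Submit the pipeline and read back its status from the service.
returned_job = ml_client.jobs.create_or_update(pipeline_job)
print(returned_job.status)  # e.g. "NotStarted" shortly after submission

# Optionally stream logs until the job reaches a terminal state.
ml_client.jobs.stream(returned_job.name)
```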
studio_url
The Azure ML studio endpoint for the job.

type
The type of the job.