AzureDatabricksLinkedService Class

Azure Databricks linked service.

All required parameters must be populated in order to send to the server.

Inheritance
azure.mgmt.datafactory.models._models_py3.LinkedService
AzureDatabricksLinkedService

Constructor

AzureDatabricksLinkedService(*, domain: MutableMapping[str, Any], additional_properties: Dict[str, MutableMapping[str, Any]] | None = None, version: str | None = None, connect_via: _models.IntegrationRuntimeReference | None = None, description: str | None = None, parameters: Dict[str, _models.ParameterSpecification] | None = None, annotations: List[MutableMapping[str, Any]] | None = None, access_token: _models.SecretBase | None = None, authentication: MutableMapping[str, Any] | None = None, workspace_resource_id: MutableMapping[str, Any] | None = None, existing_cluster_id: MutableMapping[str, Any] | None = None, instance_pool_id: MutableMapping[str, Any] | None = None, new_cluster_version: MutableMapping[str, Any] | None = None, new_cluster_num_of_worker: MutableMapping[str, Any] | None = None, new_cluster_node_type: MutableMapping[str, Any] | None = None, new_cluster_spark_conf: Dict[str, MutableMapping[str, Any]] | None = None, new_cluster_spark_env_vars: Dict[str, MutableMapping[str, Any]] | None = None, new_cluster_custom_tags: Dict[str, MutableMapping[str, Any]] | None = None, new_cluster_log_destination: MutableMapping[str, Any] | None = None, new_cluster_driver_node_type: MutableMapping[str, Any] | None = None, new_cluster_init_scripts: MutableMapping[str, Any] | None = None, new_cluster_enable_elastic_disk: MutableMapping[str, Any] | None = None, encrypted_credential: str | None = None, policy_id: MutableMapping[str, Any] | None = None, credential: _models.CredentialReference | None = None, **kwargs: Any)
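The keyword arguments above serialize to camelCase properties in the linked service's JSON `typeProperties` body. As a rough sketch (the helper name and payload assembly are illustrative, not part of the SDK; `SecureString` is one concrete `SecretBase` form), an access-token linked service pointed at an existing interactive cluster looks like:

```python
def databricks_linked_service(domain, access_token, existing_cluster_id=None):
    """Assemble an approximate AzureDatabricks linked service payload.

    Python keyword names (domain, access_token, existing_cluster_id) map to
    camelCase JSON properties (domain, accessToken, existingClusterId).
    """
    props = {
        "type": "AzureDatabricks",
        "typeProperties": {
            # e.g. "https://adb-1234567890123456.7.azuredatabricks.net"
            "domain": domain,
            # access_token is a SecretBase; SecureString is one concrete form
            "accessToken": {"type": "SecureString", "value": access_token},
        },
    }
    if existing_cluster_id is not None:
        # Reuse an existing interactive cluster for all runs
        props["typeProperties"]["existingClusterId"] = existing_cluster_id
    return props

svc = databricks_linked_service(
    "https://adb-1234567890123456.7.azuredatabricks.net",
    "dapi-example-token",
    existing_cluster_id="0123-456789-abcdef",
)
```

In practice you would pass the same values as keyword arguments to the constructor and let the SDK handle serialization; the dict above only shows the wire shape the parameter names correspond to.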

Keyword-Only Parameters

Name Description
additional_properties
dict[str, <xref:JSON>]

Unmatched properties from the message are deserialized to this collection.

version
str

Version of the linked service.

connect_via

The integration runtime reference.

description
str

Linked service description.

parameters

Parameters for linked service.

annotations
list[<xref:JSON>]

List of tags that can be used for describing the linked service.

domain
<xref:JSON>

<REGION>.azuredatabricks.net, domain name of your Databricks deployment. Type: string (or Expression with resultType string). Required.

access_token

Access token for the Databricks REST API. Refer to https://docs.azuredatabricks.net/api/latest/authentication.html. Type: string (or Expression with resultType string).

authentication
<xref:JSON>

Must be specified as 'MSI' when using the workspace resource id to authenticate to the Databricks REST API. Type: string (or Expression with resultType string).

workspace_resource_id
<xref:JSON>

Workspace resource id for databricks REST API. Type: string (or Expression with resultType string).

existing_cluster_id
<xref:JSON>

The id of an existing interactive cluster that will be used for all runs of this activity. Type: string (or Expression with resultType string).

instance_pool_id
<xref:JSON>

The id of an existing instance pool that will be used for all runs of this activity. Type: string (or Expression with resultType string).

new_cluster_version
<xref:JSON>

If not using an existing interactive cluster, this specifies the Spark version of a new job cluster or instance pool nodes created for each run of this activity. Required if instancePoolId is specified. Type: string (or Expression with resultType string).

new_cluster_num_of_worker
<xref:JSON>

If not using an existing interactive cluster, this specifies the number of worker nodes to use for the new job cluster or instance pool. For new job clusters, this is a string-formatted Int32, such as '1' (a fixed single worker) or '1:10' (auto-scale from 1 minimum to 10 maximum workers). For instance pools, this is a string-formatted Int32 that can only specify a fixed number of worker nodes, such as '2'. Required if newClusterVersion is specified. Type: string (or Expression with resultType string).

new_cluster_node_type
<xref:JSON>

The node type of the new job cluster. This property is required if newClusterVersion is specified and instancePoolId is not specified. If instancePoolId is specified, this property is ignored. Type: string (or Expression with resultType string).

new_cluster_spark_conf
dict[str, <xref:JSON>]

A set of optional, user-specified Spark configuration key-value pairs.

new_cluster_spark_env_vars
dict[str, <xref:JSON>]

A set of optional, user-specified Spark environment variable key-value pairs.

new_cluster_custom_tags
dict[str, <xref:JSON>]

Additional tags for cluster resources. This property is ignored in instance pool configurations.

new_cluster_log_destination
<xref:JSON>

Specify a location to deliver Spark driver, worker, and event logs. Type: string (or Expression with resultType string).

new_cluster_driver_node_type
<xref:JSON>

The driver node type for the new job cluster. This property is ignored in instance pool configurations. Type: string (or Expression with resultType string).

new_cluster_init_scripts
<xref:JSON>

User-defined initialization scripts for the new cluster. Type: array of strings (or Expression with resultType array of strings).

new_cluster_enable_elastic_disk
<xref:JSON>

Enable the elastic disk on the new cluster. This property is now ignored, and takes the default elastic disk behavior in Databricks (elastic disks are always enabled). Type: boolean (or Expression with resultType boolean).

encrypted_credential
str

The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string.

policy_id
<xref:JSON>

The policy id for limiting the ability to configure clusters based on a user-defined set of rules. Type: string (or Expression with resultType string).

credential

The credential reference containing authentication information.
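Several of the parameters above carry cross-parameter requirements: newClusterVersion is required if instancePoolId is set, newClusterNumOfWorker is required if newClusterVersion is set, and newClusterNodeType is required if newClusterVersion is set without instancePoolId. A small sketch of those rules as a validator over the camelCase property dict (the function and its messages are illustrative; the service performs its own validation):

```python
def validate_cluster_config(tp: dict) -> list:
    """Check the conditional cluster requirements described above."""
    errors = []
    has_existing = "existingClusterId" in tp
    has_pool = "instancePoolId" in tp
    has_version = "newClusterVersion" in tp
    if not has_existing:
        if has_pool and not has_version:
            errors.append("newClusterVersion is required when instancePoolId is set")
        if has_version and "newClusterNumOfWorker" not in tp:
            errors.append("newClusterNumOfWorker is required when newClusterVersion is set")
        if has_version and not has_pool and "newClusterNodeType" not in tp:
            errors.append("newClusterNodeType is required when newClusterVersion is set without instancePoolId")
    return errors

# MSI authentication against a workspace, running on a new job cluster
# (all values below are placeholders):
tp = {
    "domain": "https://adb-1234567890123456.7.azuredatabricks.net",
    "authentication": "MSI",
    "workspaceResourceId": (
        "/subscriptions/00000000-0000-0000-0000-000000000000"
        "/resourceGroups/rg/providers/Microsoft.Databricks/workspaces/myws"
    ),
    "newClusterVersion": "13.3.x-scala2.12",
    "newClusterNumOfWorker": "1:10",  # auto-scale from 1 to 10 workers
    "newClusterNodeType": "Standard_DS3_v2",
}
assert validate_cluster_config(tp) == []
```

The same rules apply whether the values are literal strings or Data Factory expressions, since each property is typed as string-or-Expression.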

Variables

Name Description
additional_properties
dict[str, <xref:JSON>]

Unmatched properties from the message are deserialized to this collection.

type
str

Type of linked service. Required.

version
str

Version of the linked service.

connect_via

The integration runtime reference.

description
str

Linked service description.

parameters

Parameters for linked service.

annotations
list[<xref:JSON>]

List of tags that can be used for describing the linked service.

domain
<xref:JSON>

<REGION>.azuredatabricks.net, domain name of your Databricks deployment. Type: string (or Expression with resultType string). Required.

access_token

Access token for the Databricks REST API. Refer to https://docs.azuredatabricks.net/api/latest/authentication.html. Type: string (or Expression with resultType string).

authentication
<xref:JSON>

Must be specified as 'MSI' when using the workspace resource id to authenticate to the Databricks REST API. Type: string (or Expression with resultType string).

workspace_resource_id
<xref:JSON>

Workspace resource id for databricks REST API. Type: string (or Expression with resultType string).

existing_cluster_id
<xref:JSON>

The id of an existing interactive cluster that will be used for all runs of this activity. Type: string (or Expression with resultType string).

instance_pool_id
<xref:JSON>

The id of an existing instance pool that will be used for all runs of this activity. Type: string (or Expression with resultType string).

new_cluster_version
<xref:JSON>

If not using an existing interactive cluster, this specifies the Spark version of a new job cluster or instance pool nodes created for each run of this activity. Required if instancePoolId is specified. Type: string (or Expression with resultType string).

new_cluster_num_of_worker
<xref:JSON>

If not using an existing interactive cluster, this specifies the number of worker nodes to use for the new job cluster or instance pool. For new job clusters, this is a string-formatted Int32, such as '1' (a fixed single worker) or '1:10' (auto-scale from 1 minimum to 10 maximum workers). For instance pools, this is a string-formatted Int32 that can only specify a fixed number of worker nodes, such as '2'. Required if newClusterVersion is specified. Type: string (or Expression with resultType string).

new_cluster_node_type
<xref:JSON>

The node type of the new job cluster. This property is required if newClusterVersion is specified and instancePoolId is not specified. If instancePoolId is specified, this property is ignored. Type: string (or Expression with resultType string).

new_cluster_spark_conf
dict[str, <xref:JSON>]

A set of optional, user-specified Spark configuration key-value pairs.

new_cluster_spark_env_vars
dict[str, <xref:JSON>]

A set of optional, user-specified Spark environment variable key-value pairs.

new_cluster_custom_tags
dict[str, <xref:JSON>]

Additional tags for cluster resources. This property is ignored in instance pool configurations.

new_cluster_log_destination
<xref:JSON>

Specify a location to deliver Spark driver, worker, and event logs. Type: string (or Expression with resultType string).

new_cluster_driver_node_type
<xref:JSON>

The driver node type for the new job cluster. This property is ignored in instance pool configurations. Type: string (or Expression with resultType string).

new_cluster_init_scripts
<xref:JSON>

User-defined initialization scripts for the new cluster. Type: array of strings (or Expression with resultType array of strings).

new_cluster_enable_elastic_disk
<xref:JSON>

Enable the elastic disk on the new cluster. This property is now ignored, and takes the default elastic disk behavior in Databricks (elastic disks are always enabled). Type: boolean (or Expression with resultType boolean).

encrypted_credential
str

The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string.

policy_id
<xref:JSON>

The policy id for limiting the ability to configure clusters based on a user-defined set of rules. Type: string (or Expression with resultType string).

credential

The credential reference containing authentication information.