Clusters CLI (legacy)

Important

This documentation has been retired and might not be updated.

This information applies to legacy Databricks CLI versions 0.18 and below. Databricks recommends that you use the newer Databricks CLI version 0.205 or above instead. See What is the Databricks CLI?. To find your version of the Databricks CLI, run databricks -v.

To migrate from Databricks CLI version 0.18 or below to Databricks CLI version 0.205 or above, see Databricks CLI migration.

You run Databricks clusters CLI subcommands by appending them to databricks clusters. These subcommands call the Clusters API.

databricks clusters -h
Usage: databricks clusters [OPTIONS] COMMAND [ARGS]...

  Utility to interact with Databricks clusters.

Options:
  -v, --version  [VERSION]
  -h, --help     Show this message and exit.

Commands:
  create           Creates a Databricks cluster.
    Options:
      --json-file PATH  File containing JSON request to POST to /api/2.0/clusters/create.
      --json JSON       JSON string to POST to /api/2.0/clusters/create.
  delete           Removes a Databricks cluster.
    Options:
      --cluster-id CLUSTER_ID  Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
  edit             Edits a Databricks cluster.
    Options:
      --json-file PATH  File containing JSON request to POST to /api/2.0/clusters/edit.
      --json JSON       JSON string to POST to /api/2.0/clusters/edit.
  events           Gets events for a Spark cluster.
    Options:
      --cluster-id CLUSTER_ID  Can be found in the URL at https://<databricks-instance>/#/setting/clusters/$CLUSTER_ID/configuration.  [required]
      --start-time TEXT        The start time in epoch milliseconds. If
                               unprovided, returns events starting from the
                               beginning of time.
      --end-time TEXT          The end time in epoch milliseconds. If unprovided,
                               returns events up to the current time
      --order TEXT             The order to list events in; either ASC or DESC.
                               Defaults to DESC (most recent first).
      --event-type TEXT        An event types to filter on (specify multiple event
                               types by passing the --event-type option multiple
                               times). If empty, all event types are returned.
      --offset TEXT            The offset in the result set. Defaults to 0 (no
                               offset). When an offset is specified and the
                               results are requested in descending order, the
                               end_time field is required.
      --limit TEXT             The maximum number of events to include in a page
                               of events. Defaults to 50, and maximum allowed
                               value is 500.
      --output FORMAT          can be "JSON" or "TABLE". Set to TABLE by default.
  get              Retrieves metadata about a cluster.
    Options:
      --cluster-id CLUSTER_ID  Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
  list             Lists active and recently terminated clusters.
    Options:
      --output FORMAT          JSON or TABLE. Set to TABLE by default.
  list-node-types  Lists node types for a cluster.
  list-zones       Lists zones where clusters can be created.
  permanent-delete Permanently deletes a cluster.
    Options:
      --cluster-id CLUSTER_ID  Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
  resize           Resizes a Databricks cluster given its ID.
    Options:
      --cluster-id CLUSTER_ID  Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
      --num-workers INTEGER    Number of workers. [required]
  restart          Restarts a Databricks cluster.
    Options:
      --cluster-id CLUSTER_ID  Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
  spark-versions   Lists possible Databricks Runtime versions.
  start            Starts a terminated Databricks cluster.
    Options:
      --cluster-id CLUSTER_ID  Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.

Create a cluster

To display usage documentation, run databricks clusters create --help.

databricks clusters create --json-file create-cluster.json

create-cluster.json:

{
  "cluster_name": "my-cluster",
  "spark_version": "7.3.x-scala2.12",
  "node_type_id": "Standard_D3_v2",
  "spark_conf": {
    "spark.speculation": true
  },
  "num_workers": 25
}
{
  "cluster_id": "1234-567890-batch123"
}
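
The create command also accepts the request inline through the --json option shown in the help output above. A minimal sketch, reusing the same illustrative settings as create-cluster.json:

databricks clusters create --json '{
  "cluster_name": "my-cluster",
  "spark_version": "7.3.x-scala2.12",
  "node_type_id": "Standard_D3_v2",
  "num_workers": 25
}'

The cluster_id value in the response can be captured for later commands, for example with CLUSTER_ID=$(databricks clusters create --json-file create-cluster.json | jq -r .cluster_id).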

Delete a cluster

To display usage documentation, run databricks clusters delete --help.

databricks clusters delete --cluster-id 1234-567890-batch123

If successful, no output is displayed.

Change a cluster's configuration

To display usage documentation, run databricks clusters edit --help.

databricks clusters edit --json-file edit-cluster.json

edit-cluster.json:

{
  "cluster_id": "1234-567890-batch123",
  "num_workers": 10,
  "spark_version": "7.3.x-scala2.12",
  "node_type_id": "Standard_D3_v2"
}

If successful, no output is displayed.
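
To verify that the new settings took effect, you can retrieve the cluster again and inspect the fields you changed. A sketch using jq (which this article also uses elsewhere) against the illustrative cluster ID above:

databricks clusters get --cluster-id 1234-567890-batch123 | jq '{num_workers, spark_version, node_type_id}'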

List events for a cluster

To display usage documentation, run databricks clusters events --help.

databricks clusters events \
--cluster-id 1234-567890-batch123 \
--start-time 1617238800000 \
--end-time 1619485200000 \
--order DESC \
--limit 5 \
--event-type RUNNING \
--output JSON \
| jq .
{
  "events": [
    {
      "cluster_id": "1234-567890-batch123",
      "timestamp": 1619214150232,
      "type": "RUNNING",
      "details": {
        "current_num_workers": 2,
        "target_num_workers": 2
      }
    },
    ...
    {
      "cluster_id": "1234-567890-batch123",
      "timestamp": 1617895221986,
      "type": "RUNNING",
      "details": {
        "current_num_workers": 2,
        "target_num_workers": 2
      }
    }
  ],
  "next_page": {
    "cluster_id": "1234-567890-batch123",
    "start_time": 1617238800000,
    "end_time": 1619485200000,
    "order": "DESC",
    "event_types": [
      "RUNNING"
    ],
    "offset": 5,
    "limit": 5
  },
  "total_count": 11
}
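
The next_page object in the response repeats the query parameters along with the offset of the next page. A sketch of fetching that next page by passing the offset back through the --offset option (the help text above notes that end_time is required when an offset is combined with descending order):

databricks clusters events \
--cluster-id 1234-567890-batch123 \
--start-time 1617238800000 \
--end-time 1619485200000 \
--order DESC \
--limit 5 \
--offset 5 \
--event-type RUNNING \
--output JSON \
| jq .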

Get information about a cluster

To display usage documentation, run databricks clusters get --help.

databricks clusters get --cluster-id 1234-567890-batch123

Or:

databricks clusters get --cluster-name my-cluster
{
  "cluster_id": "1234-567890-batch123",
  "spark_context_id": 3124308392469747564,
  "cluster_name": "my-cluster",
  "spark_version": "7.5.x-scala2.12",
  "spark_conf": {
    "spark.databricks.delta.preview.enabled": "true"
  },
  "node_type_id": "Standard_DS3_v2",
  "driver_node_type_id": "Standard_DS3_v2",
  "spark_env_vars": {
    "PYSPARK_PYTHON": "/databricks/python3/bin/python3"
  },
  "autotermination_minutes": 0,
  "enable_elastic_disk": true,
  "disk_spec": {},
  "cluster_source": "JOB",
  "enable_local_disk_encryption": false,
  "azure_attributes": {
    "first_on_demand": 1,
    "availability": "ON_DEMAND_AZURE",
    "spot_bid_max_price": -1.0
  },
  "instance_source": {
    "node_type_id": "Standard_DS3_v2"
  },
  "driver_instance_source": {
    "node_type_id": "Standard_DS3_v2"
  },
  "state": "TERMINATED",
  "state_message": "",
  "start_time": 1619563745373,
  "terminated_time": 1619563822867,
  "last_state_loss_time": 0,
  "num_workers": 8,
  "default_tags": {
    "Vendor": "Databricks",
    "Creator": "someone@example.com",
    "ClusterName": "my-cluster",
    "ClusterId": "1234-567890-batch123",
    "JobId": "1268284",
    "RunName": "Normal job"
  },
  "creator_user_name": "someone@example.com",
  "termination_reason": {
    "code": "JOB_FINISHED",
    "type": "SUCCESS"
  },
  "init_scripts_safe_mode": false
}
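
When you only need one field from this response, piping the JSON through jq keeps the output short. A sketch that prints just the cluster state, which for the example above would be TERMINATED:

databricks clusters get --cluster-id 1234-567890-batch123 | jq -r .state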

List information about all available clusters

To display usage documentation, run databricks clusters list --help.

databricks clusters list --output JSON | jq .
{
  "clusters": [
    {
      "cluster_id": "1234-567890-batch123",
      "spark_context_id": 3124308392469747564,
      "cluster_name": "my-cluster",
      "spark_version": "7.5.x-scala2.12",
      "spark_conf": {
        "spark.databricks.delta.preview.enabled": "true"
      },
      "node_type_id": "Standard_DS3_v2",
      "driver_node_type_id": "Standard_DS3_v2",
      "spark_env_vars": {
        "PYSPARK_PYTHON": "/databricks/python3/bin/python3"
      },
      "autotermination_minutes": 0,
      "enable_elastic_disk": true,
      "disk_spec": {},
      "cluster_source": "JOB",
      "enable_local_disk_encryption": false,
      "azure_attributes": {
        "first_on_demand": 1,
        "availability": "ON_DEMAND_AZURE",
        "spot_bid_max_price": -1.0
      },
      "instance_source": {
        "node_type_id": "Standard_DS3_v2"
      },
      "driver_instance_source": {
        "node_type_id": "Standard_DS3_v2"
      },
      "state": "TERMINATED",
      "state_message": "",
      "start_time": 1619563745373,
      "terminated_time": 1619563822867,
      "last_state_loss_time": 0,
      "num_workers": 8,
      "default_tags": {
        "Vendor": "Databricks",
        "Creator": "someone@example.com",
        "ClusterName": "my-cluster",
        "ClusterId": "1234-567890-batch123",
        "JobId": "1268284",
        "RunName": "Normal job"
      },
      "creator_user_name": "someone@example.com",
      "termination_reason": {
        "code": "JOB_FINISHED",
        "type": "SUCCESS"
      },
      "init_scripts_safe_mode": false
    },
    ...
  ]
}
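
Because each cluster entry is long, a jq filter can reduce the listing to a quick overview. A sketch that keeps only the name, ID, and state of every cluster returned:

databricks clusters list --output JSON | jq '[.clusters[] | {name: .cluster_name, id: .cluster_id, state: .state}]'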

List available cluster node types

To display usage documentation, run databricks clusters list-node-types --help.

databricks clusters list-node-types
{
  "node_types": [
    {
      "node_type_id": "Standard_L80s_v2",
      "memory_mb": 655360,
      "num_cores": 80.0,
      "description": "Standard_L80s_v2",
      "instance_type_id": "Standard_L80s_v2",
      "is_deprecated": false,
      "category": "Storage Optimized",
      "support_ebs_volumes": true,
      "support_cluster_tags": true,
      "num_gpus": 0,
      "node_instance_type": {
        "instance_type_id": "Standard_L80s_v2",
        "local_disks": 1,
        "local_disk_size_gb": 800,
        "instance_family": "Standard LSv2 Family vCPUs",
        "local_nvme_disk_size_gb": 1788,
        "local_nvme_disks": 10,
        "swap_size": "10g"
      },
      "is_hidden": false,
      "support_port_forwarding": true,
      "display_order": 0,
      "is_io_cache_enabled": true,
      "node_info": {
        "available_core_quota": 350.0,
        "total_core_quota": 350.0
      }
    },
    ...
  ]
}
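
The full response includes many node types, so filtering can help. A sketch that prints only the IDs of node types that are not marked as deprecated:

databricks clusters list-node-types | jq -r '.node_types[] | select(.is_deprecated == false) | .node_type_id'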

List available zones for creating clusters

Note

This command does not work with Azure Databricks.

To display usage documentation, run databricks clusters list-zones --help.

databricks clusters list-zones

Permanently delete a cluster

To display usage documentation, run databricks clusters permanent-delete --help.

databricks clusters permanent-delete --cluster-id 1234-567890-batch123

If successful, no output is displayed.

Resize a cluster

To display usage documentation, run databricks clusters resize --help.

databricks clusters resize --cluster-id 1234-567890-batch123 --num-workers 10

If successful, no output is displayed.

Restart a cluster

To display usage documentation, run databricks clusters restart --help.

databricks clusters restart --cluster-id 1234-567890-batch123

If successful, no output is displayed.

List available Spark runtime versions

To display usage documentation, run databricks clusters spark-versions --help.

databricks clusters spark-versions
{
  "versions": [
    {
      "key": "8.2.x-scala2.12",
      "name": "8.2 (includes Apache Spark 3.1.1, Scala 2.12)"
    },
    ...
  ]
}
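
The key values are what you pass as spark_version when creating or editing a cluster (see create-cluster.json above). A sketch that prints only the keys, sorted:

databricks clusters spark-versions | jq -r '.versions[].key' | sort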

Start a cluster

To display usage documentation, run databricks clusters start --help.

databricks clusters start --cluster-id 1234-567890-batch123

If successful, no output is displayed.
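
The start call returns before the cluster is necessarily usable, so a small Bash loop can wait for the RUNNING state by polling databricks clusters get. A sketch using the illustrative cluster ID above (the 10-second interval is arbitrary):

CLUSTER_ID=1234-567890-batch123
databricks clusters start --cluster-id "$CLUSTER_ID"
# Poll the cluster state until the API reports RUNNING.
until [ "$(databricks clusters get --cluster-id "$CLUSTER_ID" | jq -r .state)" = "RUNNING" ]; do
  sleep 10
done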