Connect to Preset

Preset provides modern business intelligence for your entire organization. It is a powerful, easy-to-use data exploration and visualization platform, powered by open source Apache Superset.

You can integrate your Databricks SQL warehouses (formerly Databricks SQL endpoints) and Azure Databricks clusters with Preset.

Connect to Preset using Partner Connect

To connect your Azure Databricks workspace to Preset using Partner Connect, see Connect to BI partners using Partner Connect.

Note

Partner Connect only supports Databricks SQL warehouses for Preset. To connect a cluster in your Azure Databricks workspace to Preset, connect to Preset manually.

Connect to Preset manually

In this section, you connect an existing SQL warehouse or cluster in your Azure Databricks workspace to Preset.

Note

For SQL warehouses, you can use Partner Connect to simplify the connection process.

Requirements

Before you integrate with Preset manually, you must have the following:

  • A cluster or SQL warehouse in your Azure Databricks workspace.

  • The connection details for your cluster or SQL warehouse, specifically the Server Hostname, Port, and HTTP Path values. (You can verify these values, together with your token, by using the connectivity-check sketch after this list.)

  • An Azure Databricks personal access token or a Microsoft Entra ID (formerly Azure Active Directory) token. To create a personal access token, do the following:

    1. In your Azure Databricks workspace, click your Azure Databricks username in the top bar, and then select Settings from the drop-down menu.
    2. Click Developer.
    3. Next to Access tokens, click Manage.
    4. Click Generate new token.
    5. (Optional) Enter a comment that helps you to identify this token in the future, and change the token’s default lifetime of 90 days. To create a token with no lifetime (not recommended), leave the Lifetime (days) box empty (blank).
    6. Click Generate.
    7. Copy the displayed token to a secure location, and then click Done.

    Note

    Be sure to save the copied token in a secure location, and do not share it with others. If you lose the copied token, you cannot regenerate that exact same token; you must repeat this procedure to create a new one. If you lose the copied token, or you believe that the token has been compromised, Databricks strongly recommends that you immediately delete it from your workspace by clicking the trash can (Revoke) icon next to the token on the Access tokens page.

    If you are not able to create or use tokens in your workspace, this might be because your workspace administrator has disabled tokens or has not given you permission to create or use them. If so, contact your workspace administrator.

    Note

    As a security best practice, when you authenticate with automated tools, systems, scripts, and apps, Databricks recommends that you use personal access tokens belonging to service principals instead of workspace users. To create tokens for service principals, see Manage tokens for a service principal.
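
Before you configure Preset, you can optionally verify that these connection details and your token work. The following is a minimal sketch, assuming the databricks-sql-connector Python package is installed locally (for example, with pip install databricks-sql-connector); the hostname, HTTP path, and token values are placeholders that you replace with your own.

    from databricks import sql

    # Placeholder values -- replace with the Server Hostname, HTTP Path, and
    # access token values from the requirements above.
    SERVER_HOSTNAME = "adb-1234567890123456.7.azuredatabricks.net"
    HTTP_PATH = "sql/1.0/warehouses/ab12345cd678e901"
    ACCESS_TOKEN = "dapi..."

    # Open a connection and run a trivial query to confirm that the values work.
    with sql.connect(
        server_hostname=SERVER_HOSTNAME,
        http_path=HTTP_PATH,
        access_token=ACCESS_TOKEN,
    ) as connection:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
            print(cursor.fetchall())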

Steps to connect

To connect to Preset manually, do the following:

  1. Create a new Preset account, or sign in to your existing Preset account.

  2. Click + Workspace.

  3. In the Add New Workspace dialog, enter a name for the workspace, select the workspace region that is nearest to you, and then click Save.

  4. Open the workspace by clicking the workspace tile.

  5. On the toolbar, click Catalog > Databases.

  6. Click + Database.

  7. In the Connect a database dialog, in the Supported Databases list, select one of the following:

    • For a SQL warehouse, select Databricks SQL Warehouse.
    • For a cluster, select Databricks Interactive Cluster.
  8. For SQLAlchemy URI, enter the following value:

    For a SQL warehouse:

    databricks+pyodbc://token:{access token}@{server hostname}:{port}/{database name}
    

    For a cluster:

    databricks+pyhive://token:{access token}@{server hostname}:{port}/{database name}
    

    Replace:

    • {access token} with the Azure Databricks personal access token value from the requirements.
    • {server hostname} with the Server Hostname value from the requirements.
    • {port} with the Port value from the requirements.
    • {database name} with the name of the target database in your Azure Databricks workspace.

    For example, for a SQL warehouse:

    databricks+pyodbc://token:dapi...@adb-1234567890123456.7.azuredatabricks.net:443/default
    

    For example, for a cluster:

    databricks+pyhive://token:dapi...@adb-1234567890123456.7.azuredatabricks.net:443/default
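
    If you prefer to assemble the URI programmatically, the following is a minimal Python sketch; the values shown are placeholders, and quote is used only as a precaution in case the token or database name contains characters that require URL encoding.

    from urllib.parse import quote

    # Placeholder values -- replace with your own connection details.
    access_token = "dapi..."
    server_hostname = "adb-1234567890123456.7.azuredatabricks.net"
    port = 443
    database_name = "default"

    # Use the databricks+pyodbc scheme for a SQL warehouse, or
    # databricks+pyhive for a cluster.
    uri = (
        f"databricks+pyodbc://token:{quote(access_token)}"
        f"@{server_hostname}:{port}/{quote(database_name)}"
    )
    print(uri)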
    
  9. Click the Advanced tab, and expand Other.

  10. For Engine Parameters, enter the following value:

    For a SQL warehouse:

    {"connect_args": {"http_path": "sql/1.0/warehouses/****", "driver_path": "/opt/simba/spark/lib/64/libsparkodbc_sb64.so"}}
    

    For a cluster:

    {"connect_args": {"http_path": "sql/protocolv1/o/****"}}
    

    Replace sql/1.0/warehouses/**** (for a SQL warehouse) or sql/protocolv1/o/**** (for a cluster) with the HTTP Path value from the requirements.

    For example, for a SQL warehouse:

    {"connect_args": {"http_path": "sql/1.0/warehouses/ab12345cd678e901", "driver_path": "/opt/simba/spark/lib/64/libsparkodbc_sb64.so"}}
    

    For example, for a cluster:

    {"connect_args": {"http_path": "sql/protocolv1/o/1234567890123456/1234-567890-buyer123"}}
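
    The Engine Parameters value must be valid JSON. In Superset-based tools such as Preset, these parameters are typically passed through to SQLAlchemy when the database engine is created, so the connect_args entries reach the underlying driver. The following is a minimal Python sketch for assembling and validating the JSON locally before you paste it in; the http_path value is a placeholder.

    import json

    # Placeholder HTTP Path -- replace with the value from the requirements.
    http_path = "sql/1.0/warehouses/ab12345cd678e901"

    engine_params = {
        "connect_args": {
            "http_path": http_path,
            # The driver_path entry is needed only for SQL warehouse
            # (databricks+pyodbc) connections; omit it for a cluster.
            "driver_path": "/opt/simba/spark/lib/64/libsparkodbc_sb64.so",
        }
    }

    # json.dumps round-trips the structure and confirms that it is valid JSON.
    print(json.dumps(engine_params))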
    
  11. Click the Basic tab, and then click Test Connection.

    Note

    For connection troubleshooting, see Database Connection Walkthrough for Databricks on the Preset website.

  12. After the connection succeeds, click Connect.

Next steps

Explore one or more of the following resources on the Preset website: