Load data with data pipelines into SQL database in Microsoft Fabric

Applies to: SQL database in Microsoft Fabric

In this tutorial, you create a new pipeline that loads sample data from an Azure SQL Database into a SQL database in Fabric.

A data pipeline is a logical grouping of activities that together perform a data ingestion task. Pipelines allow you to manage extract, transform, and load (ETL) activities instead of managing each one individually.

Prerequisites

  • You need an existing Fabric capacity. If you don't have one, start a Fabric trial.
  • Create a new workspace or use an existing Fabric workspace.
  • Create a new SQL database in Fabric, or use an existing one.
  • Create or use an existing Azure SQL Database, with data. If your source database is empty, the sketch after this list creates a small sample table you can copy.
  • For this walkthrough, Fabric must be able to reach the source server. We rely on the "Allow Azure services and resources to access this server" firewall setting on the source Azure SQL Database.
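
If your source Azure SQL Database has no data yet, the following T-SQL creates a small table to copy. This is a minimal sketch: the dbo.Customers table and its columns are illustrative placeholders, not part of the tutorial's sample data.

```sql
-- Minimal sample data for the source Azure SQL Database.
-- dbo.Customers and its columns are illustrative placeholders.
CREATE TABLE dbo.Customers
(
    CustomerID int           IDENTITY(1, 1) PRIMARY KEY,
    FirstName  nvarchar(50)  NOT NULL,
    LastName   nvarchar(50)  NOT NULL,
    Email      nvarchar(100) NULL
);

INSERT INTO dbo.Customers (FirstName, LastName, Email)
VALUES
    (N'Ava',  N'Stone', N'ava.stone@example.com'),
    (N'Liam', N'Reyes', N'liam.reyes@example.com'),
    (N'Mia',  N'Chen',  NULL);
```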

Create data pipeline

  1. In your workspace, select + New, then More options.
  2. Under Data Factory, select Data pipeline.
  3. Once the data pipeline is created, under Start with guidance, choose Copy data assistant.
  4. In the Choose data source page, select Azure SQL Database.
  5. Provide authentication for the connection to the source Azure SQL Database. (To confirm ahead of time which tables the assistant will offer, see the catalog query after these steps.)
  6. For the destination, choose Fabric SQL database from the list in the OneLake catalog.
  7. Select Next.

Load data

  1. On the Connect to data destination page, select Load to new table for each table, and verify the column mappings. Select Next.
  2. Review Source and Destination details.
  3. Check the box next to Start data transfer immediately.
  4. Select Save + Run.
  5. In the Activity runs pane, you should see a green checkmark for each successful copy activity. If an activity fails, select its row for error details and troubleshooting information. To double-check the results, you can query the destination directly, as shown below.
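
Once the run completes, you can verify the load from the destination side. The query below is a minimal sketch that reuses the placeholder dbo.Customers table from the earlier sketch; substitute the tables your pipeline actually copied, and compare the counts against the source.

```sql
-- Verify the copied data in the destination SQL database in Fabric.
-- dbo.Customers is the illustrative table from the earlier sketch;
-- substitute the tables your pipeline actually copied.
SELECT COUNT(*) AS copied_rows
FROM dbo.Customers;

-- Spot-check a few rows.
SELECT TOP (10) *
FROM dbo.Customers
ORDER BY CustomerID;
```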