The attached pipeline isolates the problem. The leading slash and underscores are not the cause; the cause appears to be a dataflow whose source and sink point at the same folder path. This is confusing because I thought the path was just part of the blob name and had little significance otherwise.
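For concreteness, the source side of the repro is just a DelimitedText dataset pointed at the test_in folder. A minimal sketch of what such a definition typically looks like (the dataset and linked service names here are placeholders, not the attached pipeline's actual JSON):

```json
{
  "name": "TestInDataset",
  "properties": {
    "linkedServiceName": {
      "referenceName": "YourBlobStorageLinkedService",
      "type": "LinkedServiceReference"
    },
    "type": "DelimitedText",
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "<your container>",
        "folderPath": "test_in"
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    }
  }
}
```

The sink's output path is not fixed by a dataset folder; it comes from column data in each row (see the steps below), presumably relative to the container root given the paths in the expected results. That is what makes the same-path case easy to hit.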
To run:
- Upload test.csv to <your storage>/<your container>/test_in/test.csv (a guess at its likely contents is sketched after this list).
- Swap in your own Azure Blob Storage linked service for the datasets and the dataflow.
- Swap in your container name for the datasets and the pipeline.
- Run the pipeline.
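The attached test.csv is not reproduced here, but given the column names used by sink1 below and the expected output paths, a compatible input presumably carries the desired output path as row data, something like this hypothetical two-line file:

```csv
fileOutSamePath,fileOutDiffPath
test_in/out.csv,test_out/out.csv
```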
Expected result: the pipeline reports success and claims it wrote a record, but <your storage>/<your container>/test_in/out.csv is never created.
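One way to confirm the file really is missing, without clicking through the portal (assumes the Azure CLI is installed and you are logged in with access to the account):

```sh
# List blobs under test_in/; out.csv should show up here after a
# successful run of the failing case, but it does not.
az storage blob list \
  --account-name <your storage> \
  --container-name <your container> \
  --prefix test_in/ \
  --auth-mode login \
  --output table
```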
To show that the problem comes from the sink path matching the source path:
- Go into sink1 in the dataflow and change the Column data setting from fileOutSamePath to fileOutDiffPath (the corresponding script change is sketched after these steps).
- Rerun.
New expected result: <your storage>/<your container>/test_out/out.csv is written.
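For anyone triaging this: assuming sink1 uses the "As data in column" file name option (which is what the Column data setting suggests), the entire difference between the failing and working runs should be one property in the generated dataflow script. A rough sketch, not the attached dataflow's actual script:

```
source1 sink(allowSchemaDrift: true,
    validateSchema: false,
    rowUrlColumn: 'fileOutSamePath') ~> sink1
```

Changing rowUrlColumn to 'fileOutDiffPath', so the per-row output path lands under test_out instead of test_in, is the only edit.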