Copy data from multiple csv files into one csv file

Published 2019-09-20 23:29

Question:

I have multiple csv files in my Azure Blob Storage which I wish to append into one csv file, also stored in Azure Blob Storage, using an Azure Data Factory pipeline. The problem is that not all the columns of the source files are present in the sink file (and vice versa), and the source files themselves are not identical. I just want to map the columns I need from the source files to the columns in the sink file, but the Copy activity in Data Factory is not letting me do so.

Answer 1:

In my experience, if your csv files don't have the same format or the same columns as the sink file, you cannot merge or append them into one csv file. During the Mapping step, you cannot import the schema and map only the columns you need.

Azure Data Factory doesn't support this.



Answer 2:

As @LeonYue said, Azure Data Factory doesn't support this at the moment. However, in my experience, as a workaround you could write a Python script that uses pandas to do it, and run it as a WebJob of Azure App Service or on an Azure VM, to benefit from fast transfers between Azure Storage and other Azure services.

The steps of the workaround are as follows.

  1. Assuming these csv files are all in one container of Azure Blob Storage, you need to list them in the container via list_blob_names and generate their URLs with a SAS token so that pandas' read_csv function can read them, as in the code below.

    from azure.storage.blob.baseblobservice import BaseBlobService
    from azure.storage.blob import ContainerPermissions
    from datetime import datetime, timedelta
    
    account_name = '<your account name>'
    account_key = '<your account key>'
    container_name = '<your container name>'
    
    # Generate a read-only SAS token for the container, valid for one hour
    service = BaseBlobService(account_name=account_name, account_key=account_key)
    token = service.generate_container_shared_access_signature(
        container_name,
        permission=ContainerPermissions.READ,
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    
    # List all blobs in the container and build a SAS URL for each of them
    blob_names = service.list_blob_names(container_name)
    blob_urls_with_token = (f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{token}"
                            for blob_name in blob_names)
    
    # print(list(blob_urls_with_token))
    
  2. Read each csv file directly with the read_csv function to get a pandas dataframe.

    import pandas as pd
    
    # Read each csv blob via its SAS URL and collect the dataframes in a list
    dfs = []
    for blob_url_with_token in blob_urls_with_token:
        dfs.append(pd.read_csv(blob_url_with_token))
    
  3. Operate on these dataframes with pandas however you want (for example, keep only the columns you need and concatenate them), and then write the result back to Azure Blob Storage as a single csv file using the Azure Storage SDK for Python, as in the sketch below.
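
A minimal sketch of this step, assuming the sink file expects a fixed set of columns; the names sink_columns, col_a, col_b and the output blob name merged.csv are just placeholders for illustration, and it uses the same azure-storage-blob SDK as above:

    from azure.storage.blob import BlockBlobService
    import pandas as pd
    
    # Columns expected by the sink csv file -- replace with your real schema
    sink_columns = ['col_a', 'col_b']
    
    # Keep only the sink columns from each source dataframe;
    # columns missing in a source file are filled with NaN
    merged_df = pd.concat([df.reindex(columns=sink_columns) for df in dfs], ignore_index=True)
    
    # Write the merged result back to Blob Storage as one csv file
    block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
    block_blob_service.create_blob_from_text(container_name, 'merged.csv', merged_df.to_csv(index=False))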

Hope it helps.