Transform a folder of CSV files the same way, then output one DataFrame per file


Question:

I've got a folder of CSV files that I need to transform and clean up, ending with one uniquely named DataFrame per CSV file that I can continue to work with. I wrote code that manipulates a single CSV the way I want and produces a clean DataFrame at the end, but I'm getting tripped up trying to iterate through the folder and apply the same transformation to every CSV, finishing with a DataFrame per file.

Here's the code I've been working with:

import pandas as pd
import numpy as np
import os
from os import listdir
import glob
import win32com.client

filepath = 'C:/Users/me/BMI'

# use Excel via win32com to convert every .xls workbook in the folder to a CSV
xl = win32com.client.gencache.EnsureDispatch('Excel.Application')

for f in glob.glob(filepath + '/*.xls'):
    fullname = os.path.abspath(f)
    xl.Workbooks.Open(fullname)
    xl.ActiveWorkbook.SaveAs(Filename=fullname.replace('.xls', '.csv'),
                             FileFormat=win32com.client.constants.xlCSVMSDOS,
                             CreateBackup=False)
    xl.ActiveWorkbook.Close(SaveChanges=False)

os.listdir('C:/Users/me/BMI')

def find_csv_filenames( path_to_dir, suffix=".csv" ):
    filenames = listdir(path_to_dir)
    return [ filename for filename in filenames if filename.endswith( suffix ) ]

filenames = list(find_csv_filenames(filepath))

for i in filenames:
    df = pd.read_csv(filepath + '/' + i)
    del df['Unnamed: 0']

# Extract by rows - create list of rows that are blank
nul_rows = list(df[df.isnull().all(axis=1)].index)

list_of_dataframes = []
list_of_dataframes.append(df.iloc[:nul_rows[0] - 1,:])
for i in range(len(nul_rows) - 1):
    list_of_dataframes.append(df.iloc[nul_rows[i] + 1:nul_rows[i + 1], :])

list_of_dataframes.append(df.iloc[nul_rows[4] - 1::])

# Remove null columns
cleaned_tables = []
for _df in list_of_dataframes:
    cleaned_tables.append(_df.dropna(axis=1, how='all'))

# cleaned_tables is a list of the dataframes
print(cleaned_tables[0])

# drop second row of data frame (subtitle)
df = df.drop(df.index[0])

# set up headers in row 1 (DataFrame.set_value is removed in newer pandas; .at is the replacement)
df.at[1, df.columns[0]] = 'brands'

# change column names to proper headers (brand as first column, years of data following)
for i in list(range(len(df.columns))):
    df = df.rename(columns={df.columns[i]: df.iloc[0][df.columns[i]]})

#get rid of double headers (row 1)
df = df.drop(df.index[0])

Answer 1:

If you have code that transforms a single .csv, and you want to perform the same actions on all the .csvs in a directory and end up with a unique DataFrame for each one, could you do something like this?

import pandas as pd
import numpy as np
import os

filepath = r'C:\Users\me\BMI'

df_list = []
for file in os.listdir(filepath):
    if file.endswith('.csv'):
        temp_df = pd.read_csv(os.path.join(filepath, file), encoding='utf-8')
        # transformation and clean-up steps here...

        # ...
        df_list.append(temp_df)

Each temp_df would be stored in the df_list.
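
Since you want one uniquely named DataFrame per CSV file, a dict keyed by filename may be a better fit than a list. Here's a minimal sketch of that idea (clean_up is a hypothetical stand-in for your own transformation steps, not a real function from your code):

import os
import pandas as pd

filepath = r'C:\Users\me\BMI'

def clean_up(df):
    # placeholder for the transformation and clean-up steps from the question
    return df.dropna(axis=1, how='all')

# build one cleaned DataFrame per CSV, keyed by file name (without extension)
dfs = {}
for file in os.listdir(filepath):
    if file.endswith('.csv'):
        name = os.path.splitext(file)[0]
        dfs[name] = clean_up(pd.read_csv(os.path.join(filepath, file)))

You could then pull out any single table by name, e.g. dfs['brands_2018'] if the folder contained a brands_2018.csv (the file name here is just an example).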

You could concatenate them into one final DataFrame at the end like this:

one_df = pd.concat(df_list, ignore_index=True)

If you run into issues reading a file into a DataFrame, try specifying the separator sep (if your data is not comma-delimited) and the encoding if your CSV files use a non-default encoding. For example:

pd.read_csv(..., sep='\t', encoding='cp1252')