This might be considered a duplicate of a thorough explanation of various approaches; however, I can't find a solution to my problem there because of the larger number of DataFrames involved. I have multiple DataFrames (more than 10), each differing in one column, VARX. Here is a quick and oversimplified example:
import pandas as pd

df1 = pd.DataFrame({'depth': [0.500000, 0.600000, 1.300000],
                    'VAR1': [38.196202, 38.198002, 38.200001],
                    'profile': ['profile_1', 'profile_1', 'profile_1']})
df2 = pd.DataFrame({'depth': [0.600000, 1.100000, 1.200000],
                    'VAR2': [0.20440, 0.20442, 0.20446],
                    'profile': ['profile_1', 'profile_1', 'profile_1']})
df3 = pd.DataFrame({'depth': [1.200000, 1.300000, 1.400000],
                    'VAR3': [15.1880, 15.1820, 15.1820],
                    'profile': ['profile_1', 'profile_1', 'profile_1']})
Each df has the same or different depths for the same profiles, so I need to create a new DataFrame that merges all the separate ones, with depth and profile as the key columns for the operation and with every depth value that appears for each profile kept. The VARX value should therefore be NaN wherever there is no depth measurement of that variable for that profile.
The result should thus be a new, compressed DataFrame with all the VARX columns added alongside the depth and profile ones, something like this:
name_profile   depth      VAR1       VAR2     VAR3
profile_1      0.500000   38.196202  NaN      NaN
profile_1      0.600000   38.198002  0.20440  NaN
profile_1      1.100000   NaN        0.20442  NaN
profile_1      1.200000   NaN        0.20446  15.1880
profile_1      1.300000   38.200001  NaN      15.1820
profile_1      1.400000   NaN        NaN      15.1820
Note that the actual number of profiles is much, much bigger.
Any ideas?
Consider setting the index on each DataFrame and then running the horizontal merge with pd.concat:
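The code block of that answer is not shown above; a minimal sketch of the idea (the list name dfs is an assumption, not part of the original) could look like:

# Align every frame on the key columns, then join them side by side.
dfs = [df1, df2, df3]
merged = (pd.concat([df.set_index(['profile', 'depth']) for df in dfs],
                    axis=1)        # outer join on the shared (profile, depth) index
          .sort_index()
          .reset_index())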
I would use append.
Obviously, if you have a lot of DataFrames, just make a list and loop through them.
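The answer's code was also stripped; one way to read "use append" (a sketch under that assumption; note that DataFrame.append was removed in pandas 2.0, where pd.concat is the replacement) is:

# Stack the frames vertically, then collapse rows that share the same keys.
dfs = [df1, df2, df3]                          # with many frames, build this list in a loop
stacked = dfs[0].append(dfs[1:], sort=False)   # modern pandas: pd.concat(dfs, sort=False)
merged = (stacked
          .groupby(['profile', 'depth'], as_index=False)
          .first())                            # first non-null VARX value per key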
A simple way is with a combination of functools.partial and reduce. First, partial allows you to "freeze" some portion of a function's arguments and/or keywords, resulting in a new object with a simplified signature. Then, with reduce, we can apply the new partial object cumulatively to the items of an iterable (the list of DataFrames here):
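The snippet itself did not survive the quoting; a partial/reduce merge of this kind is typically written like the sketch below (assuming the example frames and the dfs list name):

from functools import partial, reduce

dfs = [df1, df2, df3]

# Freeze the join keys and the outer-join behaviour into a new callable...
outer_merge = partial(pd.merge, on=['profile', 'depth'], how='outer')

# ...and fold it cumulatively over the list of DataFrames.
merged = reduce(outer_merge, dfs).sort_values(['profile', 'depth'])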
Why not concatenate all the DataFrames, melt them, and then reform them using your ids? There might be a more efficient way to do this, but it works.
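The corresponding code is missing above; a sketch of that concat/melt/pivot route (the intermediate names stacked and melted are assumptions) might be:

# Concatenate into one frame (missing VARX columns become NaN)...
stacked = pd.concat([df1, df2, df3], sort=False)

# ...melt to long format: one row per (profile, depth, variable, value)...
melted = stacked.melt(id_vars=['profile', 'depth'])

# ...and reform using the ids: one column per VARX again.
df_pivot = (melted
            .pivot_table(index=['profile', 'depth'],
                         columns='variable',
                         values='value')
            .reset_index())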
where df_pivot will be the merged table shown in the question: one row per (profile, depth) pair, with the VAR1, VAR2 and VAR3 values as columns and NaN where a variable was not measured at that depth.

You can also use:
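The snippet that followed is not preserved; chained outer merges are one common variant that gives the same result (a guess at the intent, not the original code):

merged = (df1
          .merge(df2, on=['profile', 'depth'], how='outer')
          .merge(df3, on=['profile', 'depth'], how='outer')
          .sort_values(['profile', 'depth'])
          .reset_index(drop=True))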