I have a big dataset with the following columns:
cols=['plant', 'time','date','hour','NDVI','Treatment','Line','397.01', '398.32', '399.63', '400.93', '402.24', '403.55'...,'1005']
I want to create a new dataframe that will contain the first 7 columns, then skip the next 10 columns, and then keep all the others.
I have done something like this:
df2=df_plants.iloc[:,10:]
df2.head()
but this cuts off the first columns, and I need them as well.
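For reference, this is the kind of selection I think I am after; a minimal sketch, assuming the 10 columns to skip sit right after the first 7 (positions 7-16), which may not match my real layout:

import numpy as np

# keep the first 7 columns, skip the next 10, keep everything after that
keep = np.r_[0:7, 17:df_plants.shape[1]]
df2 = df_plants.iloc[:, keep]
df2.head()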
A friend recommended that I do something like this:
# convert the "numeric" columns into float
float_cols = [float(i) for i in df_plants.columns.tolist()[4:] if type(i) == str]
df_plants.columns.values[4:] = float_cols
# detector edge removal
idx1 = np.abs(df_plants.loc[:, float_cols].columns.values - 420)
# np.argmin(idx1)
idx2 = np.argmin(np.abs(df_plants.loc[:, float_cols].columns.values - 1005.0))
but when I apply it nothing happens, and I am also not sure I understand the idea behind the detector-edge part.
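My best guess at what the suggestion was aiming for is the sketch below: rebuild the column index so the wavelength names become floats (reassigning df_plants.columns rather than writing into .values, which pandas does not reliably pick up), and then locate the columns closest to the detector edges at 420 and 1005. The offset of 7 metadata columns and the edge values are my assumptions:

import numpy as np

# rebuild the column index: keep the 7 metadata names, convert the rest to float
new_cols = list(df_plants.columns[:7]) + [float(c) for c in df_plants.columns[7:]]
df_plants.columns = new_cols

# detector edge removal: positions of the wavelengths closest to 420 and 1005
wavelengths = np.array(new_cols[7:], dtype=float)
idx1 = np.argmin(np.abs(wavelengths - 420.0))
idx2 = np.argmin(np.abs(wavelengths - 1005.0))

# keep the metadata plus the wavelength range between the two edges
df_trimmed = df_plants.iloc[:, list(range(7)) + list(range(7 + idx1, 7 + idx2 + 1))]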
My end goal is to create a new dataframe that will contain the following columns: plant, Line, Treatment, time, and then all the numeric columns whose names are greater than 410.
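In other words, something along these lines; a sketch assuming the wavelength names have already been converted to floats as above (if they are still strings, each name would need a float(c) conversion first):

meta = ['plant', 'Line', 'Treatment', 'time']
wave_cols = [c for c in df_plants.columns if isinstance(c, float) and c > 410]
df_new = df_plants[meta + wave_cols]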
Edit: ideally, I would also like to tell Python that if a numeric column contains any negative values, that column should be removed.
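For the edit, I imagine something like this, again assuming the wavelength columns are the float-named ones:

# drop any wavelength column that contains at least one negative value
wave_cols = [c for c in df_new.columns if isinstance(c, float)]
has_negative = (df_new[wave_cols] < 0).any()
df_new = df_new.drop(columns=has_negative[has_negative].index)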