Accelerate a slow loop in Abaqus-python code for extracting strain data from an .odb file

Posted 2020-06-25 04:38

I have an .odb file, named plate2.odb, from which I want to extract the strain data. To do this I wrote the simple code below, which loops through the field output E (strain) for each element and appends the values to a list.

from odbAccess import openOdb
import pickle

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a list
E = []
for i in range(1000):
    E.append(odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[i].data)   

# save the data
with open("mises.pickle", "wb") as input_file:
    pickle.dump(E, input_file)

odb.close()

The issue is that the for loop that loads the strain values into the list takes a long time: 35 seconds for 1000 elements. At that rate (0.035 seconds per query), extracting the data for my full model of 200,000 elements would take about 2 hours. Why is this so slow, and how can I speed it up?

A single strain query outside any loop also takes about 0.04 seconds, so the cost is in each query itself, not in the Python loop.
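A minimal way to reproduce that single-query timing (a sketch using only the standard time module and the same path and step/frame names as above):

import time
from odbAccess import openOdb

odb = openOdb('./plate2.odb')
t0 = time.time()
# one query, traversing the full repository chain exactly as the loop body does
x = odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[0].data
print('one query took %.3f s' % (time.time() - t0))
odb.close()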

Tags: python abaqus
3 Answers
混吃等死 · 2020-06-25 05:07

It turned out I was re-traversing the odb's nested repositories (steps -> frames -> fieldOutputs) every time I read a single strain value. To fix this, I looked the field output up once and saved it in a local variable. My updated code, which runs in a fraction of a second, is below.

from odbAccess import openOdb
import pickle

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a list
E = []
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
for i in range(1000):
    E.append(EE.values[i].data)  

# save the data
with open("mises.pickle", "wb") as input_file:
    pickle.dump(E, input_file)

odb.close()
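The same idea can be taken one step further by also binding the values sequence once, so nothing inside the loop touches the odb repositories at all (a sketch using the same names as above):

vals = EE.values  # bind the sequence once, outside the loop
E = [v.data for v in vals]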
乱世女痞 · 2020-06-25 05:13

I would use bulkDataBlocks here. It is much faster than going through the values sequence, and pickle is usually slow and unnecessary anyway. Take a look at the FieldBulkData object in the C++ manual, http://abaqus.software.polimi.it/v6.14/books/ker/default.htm. The Python method is the same, but at least in Abaqus 6.14 it is not documented in the Python Scripting Reference (it has been available since 6.13).

For example:

from odbAccess import openOdb
import numpy as np

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a numpy array
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']

# get a numpy array with your data; copying it detaches the array from the odb
# (skipping np.copy may also work, but I have hit some weird bugs without it)
Strains = np.copy(EE.bulkDataBlocks[0].data)

# save the data as a NumPy .npy file
np.save('OutputPath', Strains)

odb.close()

Keep in mind that if your model contains more than one element type, the field may be split across several bulkDataBlocks; a sketch of stacking them follows.
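This sketch stacks every block into one array (it assumes the blocks have the same number of strain components per row, and that the odb from the code above is still open):

import numpy as np

EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
# copy each block out of the odb, then stack them row-wise
Strains = np.concatenate([np.copy(b.data) for b in EE.bulkDataBlocks])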

欢心 · 2020-06-25 05:18

A little late to the party, but I find using operator.attrgetter to be much faster than an explicit for loop in this case.

So instead of @AustinDowney's version:

E = []
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
for i in range(1000):
    E.append(EE.values[i].data) 

do this:

from operator import attrgetter
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
E = map(attrgetter('data'), EE.values)

This is about the same speed as a list comprehension (under the Python 2 interpreter that Abaqus ships, map returns a list directly), but it is much better if you want to extract multiple attributes at once, say coordinates or the element label; see the sketch below.
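For example, a sketch of the multi-attribute case (elementLabel is the FieldValue attribute holding the element number; EE is the field output bound above):

from operator import attrgetter

# one pass over EE.values yields (elementLabel, data) tuples
pairs = map(attrgetter('elementLabel', 'data'), EE.values)
labels, strains = zip(*pairs)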
