Cobb-Douglas function slows execution tremendously

Posted 2019-09-06 04:56

I have a working microeconomic model with 10 modules and 2,000 agents, running for up to 10 years. The program was fast, producing results, output, and graphics in a matter of seconds.

However, when I implemented a non-linear Cobb-Douglas production function to update the quantity to be produced by the firms, the program slowed down to the point of taking about 3 minutes to produce results, depending on the parameters.

Does anybody know how I could expedite the calculation and get back to fast results?

Here is the code of the new function:

ALPHA = 0.5

def update_product_quantity(self):
    if len(self.employees) > 0 and self.total_balance > 0:
        dummy_quantity = self.total_balance ** parameters.ALPHA * \
                         self.get_sum_qualification() ** (1 - parameters.ALPHA)
        for key in self.inventory.keys():
            while dummy_quantity > 0:
                self.inventory[key].quantity += 1
                dummy_quantity -= 1

The previous linear function, which ran fast, was:

def update_product_quantity(self):
    if len(self.employees) > 0:
        dummy_quantity = self.get_sum_qualification()
        for key in self.inventory.keys():   
            while dummy_quantity > 0:
                self.inventory[key].quantity += 1
                dummy_quantity -= 1

1 Answer

姐就是有狂的资本 · 2019-09-06 05:32

It's hard to say how to fix it without seeing the rest of your code, but one thing that might speed it up is pre-computing the dummy quantities with numpy. For example, you could build a numpy array of each agent's total_balance and another of their summed qualifications, compute the corresponding array of dummy quantities in one vectorized operation, and then assign the results back to the agents.
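As a rough sketch of that idea, assuming the firms are collected in a list (called firms here) and expose the same attributes and methods as in your question (employees, total_balance, get_sum_qualification); the add_to_inventory call in the usage comment is hypothetical and would need to be adapted to however your inventory update actually works:

import numpy as np

ALPHA = 0.5

def compute_dummy_quantities(firms):
    # Gather the inputs for every firm into flat numpy arrays.
    balances = np.array([f.total_balance for f in firms], dtype=float)
    qualifications = np.array([f.get_sum_qualification() for f in firms], dtype=float)

    # Clip negative balances to zero so the fractional power never produces NaN.
    safe_balances = np.clip(balances, 0.0, None)

    # One vectorized Cobb-Douglas pass replaces thousands of per-agent computations.
    quantities = safe_balances ** ALPHA * qualifications ** (1 - ALPHA)

    # Firms with no employees or a non-positive balance produce nothing.
    active = np.array([len(f.employees) > 0 for f in firms]) & (balances > 0)
    return np.where(active, quantities, 0.0)

# Usage sketch: compute once per period, then hand the values back to the agents.
# for firm, quantity in zip(firms, compute_dummy_quantities(firms)):
#     firm.add_to_inventory(quantity)  # hypothetical method; adapt to your model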

Here's a highly simplified demonstration of the speedup:

%%timeit
vals = range(100000)
new_vals = [v**0.5 for v in vals]
> 100 loops, best of 3: 15 ms per loop

Now, with numpy (assuming it has been imported as np):

%%timeit
vals = np.array(range(100000))
new_vals = np.sqrt(vals)
> 100 loops, best of 3: 6.3 ms per loop

However, a slow-down from a few seconds to 3 minutes seems extreme for the difference in calculation. Is the model behaving the same way with the C-D function, or is that driving changes in the model dynamics which are the real reason for the slowdown? If the latter, then you might need to look elsewhere for the bottleneck to optimize.
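One concrete thing to check along those lines: with the Cobb-Douglas form, dummy_quantity can get very large, and the while loop in update_product_quantity adds it to the inventory one unit at a time, so its cost grows with the quantity itself. A minimal sketch of adding the amount in a single step instead, keeping the structure of your original function (it truncates to whole units and, like the original loop, only ever credits the first inventory key; adjust if that is not what you intend):

def update_product_quantity(self):
    if len(self.employees) > 0 and self.total_balance > 0:
        dummy_quantity = self.total_balance ** parameters.ALPHA * \
                         self.get_sum_qualification() ** (1 - parameters.ALPHA)
        for key in self.inventory:
            # Add the computed amount in one step instead of one unit per iteration.
            self.inventory[key].quantity += int(dummy_quantity)
            dummy_quantity = 0  # mirrors the original, which drained it on the first key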
