I am trying to solve the following equation in Python using SciPy's odeint function.
Currently I am able to implement this form of the equation in Python with the following script:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dY(y1, x):
    # right-hand side for a single compartment
    a = 0.001
    yin = 1
    C = 0.01
    N = 1
    dC = C / N
    b1 = 0
    return (a / dC) * (yin - y1) + b1 * dC

x = np.linspace(0, 20, 1000)
y0 = 0
res = odeint(dY, y0, x)

plt.plot(x, res, '-')
plt.show()
My problem with the first equation is the index 'i'. I don't know how to integrate the equation and still be able to provide the current and previous 'y' values (y_i and y_i-1). 'i' is simply a sequence number in the range 0..100, so the equation really describes a chain of coupled ODEs, one per i.
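For reference, the approach I ended up with (see Edit 2 below) treats all the y_i as one state vector and builds y_i-1 from a shifted slice of that vector, so odeint can integrate the whole chain at once. A minimal sketch of that idea with the same constants as above (the name dY_chain and the choice N = 100 are only for illustration):

import numpy as np
from scipy.integrate import odeint

def dY_chain(y, x, a=0.001, yin=1.0, C=0.01, b1=0.0):
    # y holds all compartments y_0 .. y_{N-1}; N is taken from its length
    N = len(y)
    dC = C / N
    # previous value for each component: the inlet yin feeds y_0,
    # every other y_i sees y_{i-1}
    y_prev = np.empty_like(y)
    y_prev[0] = yin
    y_prev[1:] = y[:-1]
    return (a / dC) * (y_prev - y) + b1 * dC

x = np.linspace(0, 20, 1000)
y0 = np.zeros(100)             # one initial value per compartment
res = odeint(dY_chain, y0, x)  # res has shape (len(x), 100)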
Edit 1:
The original equation is:
which I rewrote in terms of y, x, a, b and C.
Edit 2: I edited Pierre de Buyl's code and changed the N value. Luckily I have a validation table to check the outcome against. Unfortunately, the results do not match.
Here is my validation table:
And here is the NumPy output:
The code I used:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dY(y, x):
    a = 0.001
    yin = 1
    C = 0.01
    N = 3
    dC = C / N
    b1 = 0.01
    # y_diff[i] = y[i-1] - y[i], with the inlet value yin acting as y[-1]
    y_diff = -np.copy(y)
    y_diff[0] += yin
    y_diff[1:] += y[:-1]
    return (a / dC) * y_diff + b1 * dC

x = np.linspace(0, 20, 11)
y0 = np.zeros(3)
res = odeint(dY, y0, x)

plt.plot(x, res, '-')
plt.show()
As you can see, the values differ by an offset of about 0.02.
Am I missing something that results in this offset?
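In case it is a resolution effect rather than a coding error, one check I can run is the sketch below: the same right-hand side with more compartments and a finer output grid, to see whether the offset shrinks as N grows (the N values and the 201-point grid are arbitrary choices):

import numpy as np
from scipy.integrate import odeint

def dY(y, x, a=0.001, yin=1, C=0.01, b1=0.01):
    N = len(y)
    dC = C / N
    y_diff = -np.copy(y)
    y_diff[0] += yin
    y_diff[1:] += y[:-1]
    return (a / dC) * y_diff + b1 * dC

x = np.linspace(0, 20, 201)
for N in (3, 10, 30):
    res = odeint(dY, np.zeros(N), x)
    # value of the last compartment at x = 20, for comparison across resolutions
    print(N, res[-1, -1])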