I'm getting some strange behavior in a simple co-simulation that I'm trying to configure. I set up a building energy model in EnergyPlus to test an FMU generated from JModelica. However, the building energy model would hang at the co-simulation step. I then ran the FMU by itself in JModelica and got some very strange results.
The Modelica code is:
model CallAdd
  input Real FirstInput(start=0);
  input Real SecondInput(start=0);
  output Real FMUOutput(start=0);

  function CAdd
    input Real x(start=0);
    input Real y(start=0);
    output Real z(start=0);
    external "C" annotation(Library = "CAdd", LibraryDirectory = "modelica://CallAdd");
  end CAdd;
equation
  FMUOutput = CAdd(FirstInput, SecondInput);
  annotation(uses(Modelica(version = "3.2.1")));
end CallAdd;
The above code references "CAdd", which is a library built from the C code "CAdd.c":
double CAdd(double x, double y) {
    double answer;
    answer = x + y;
    return answer;
}
The C file is compiled into a static library with the following two commands in CMD:
gcc -c CAdd.c -o CAdd.o
ar rcs libCAdd.a CAdd.o
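For what it's worth, a quick way to sanity-check the C function outside of Modelica (a rough sketch; ctypes cannot load the static libCAdd.a, so building a shared library and its file name here are assumptions) is to call it directly from Python:

# Sanity-check sketch: build a shared library from the same source first, e.g.
#   gcc -shared -o CAdd.dll CAdd.c          (Windows)
#   gcc -shared -fPIC -o libCAdd.so CAdd.c  (Linux)
import ctypes
cadd = ctypes.CDLL('./CAdd.dll')  # hypothetical file name
cadd.CAdd.restype = ctypes.c_double
cadd.CAdd.argtypes = [ctypes.c_double, ctypes.c_double]
print(cadd.CAdd(1.5, 2.5))  # should print 4.0 if the C side is fine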
I can run the above example in OpenModelica with a wrapper and it works great.
I then used JModelica to compile the above as an FMU for co-simulation. The JModelica compile code is:
# Import the compiler function
from pymodelica import compile_fmu
# Specify Modelica model and model file (.mo or .mop)
model_name = "CallAdd"
mo_file = "CallAdd.mo"
# Compile the model and save the return argument, for use later if wanted
my_fmu = compile_fmu(model_name, mo_file, target="cs")
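After compiling, a simple check I can run to confirm the inputs and output actually made it into the FMU (sketch only, using the PyFMI calls as I understand them) is to load it and list the model variables:

from pyfmi import load_fmu
# Load the freshly compiled FMU and print its declared variables;
# FirstInput, SecondInput and FMUOutput should all appear in the list.
check_model = load_fmu(my_fmu)
for name in check_model.get_model_variables().keys():
    print(name)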
I then simulated the FMU with the following JModelica Python code and got the strange results:
from pyfmi import load_fmu
import numpy as np
import matplotlib.pyplot as plt
modelName = 'CallAdd'
numSteps = 100
timeStop = 20
# Load FMU created with the last script
myModel = load_fmu(modelName+'.fmu')
# Load options
opts = myModel.simulate_options()
# Set number of timesteps
opts['ncp'] = numSteps
# Set up input, needs more than one value to interpolate the input over time.
t = np.linspace(0.0,timeStop,numSteps)
u1 = np.sin(t)
u2 = np.empty(len(t)); u2.fill(5.0)
u_traj = np.transpose(np.vstack((t,u1,u2)))
input_object = (['FirstInput','SecondInput'],u_traj)
# Internalize results
res = myModel.simulate(final_time=timeStop, input = input_object, options=opts)
# print 'res: ', res
# Internalize individual results
FMUTime = res['time']
FMUIn1 = res['FirstInput']
FMUIn2 = res['SecondInput']
FMUOut = res['FMUOutput']
plt.figure(2)
FMUIn1Plot = plt.plot(t,FMUTime[1:],label='FMUTime')
# FMUIn1Plot = plt.plot(t,FMUIn1[1:],label='FMUIn1')
# FMUIn2Plot = plt.plot(t,FMUIn2[1:],label='FMUIn2')
# FMUOutPlot = plt.plot(t,FMUOut[1:],label='FMUOut')
plt.grid(True)
plt.legend()
plt.ylabel('FMU time [s]')
plt.xlabel('time [s]')
plt.show()
which resulted in the following plot of the result "FMUTime" vs. the Python "t":
In addition to this strange behavior, the inputs "FirstInput" and "SecondInput" in the FMU results do not match the u1 and u2 trajectories specified in the Python code. I'm hoping someone can help me better understand what is going on.
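To put a number on the input mismatch, the comparison I have in mind (a rough sketch reusing the variables from the script above; interpolating the supplied table onto the FMU's result time grid is my assumption of what the recorded inputs should look like) is:

# Interpolate the supplied input table onto the FMU's own result time grid
# and measure how far the recorded inputs deviate from it.
u1_expected = np.interp(FMUTime, t, u1)
u2_expected = np.interp(FMUTime, t, u2)
print('max |FirstInput - sin(t)|: %g' % np.max(np.abs(FMUIn1 - u1_expected)))
print('max |SecondInput - 5.0|: %g' % np.max(np.abs(FMUIn2 - u2_expected)))

Both of those should be close to zero if the inputs were applied as given.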
Best,
Justin