I am running a Python 2.7 script on a p2.xlarge AWS server through Jupyter (Ubuntu 14.04). I would like to be able to render my simulations.
Minimal working example:
import gym
env = gym.make('CartPole-v0')
env.reset()
env.render()
produces (among other things) the following error:
...
HINT: make sure you have OpenGL install. On Ubuntu, you can run
'apt-get install python-opengl'. If you're running on a server,
you may need a virtual frame buffer; something like this should work:
'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'")
...
NoSuchDisplayException: Cannot connect to "None"
I would like to somehow be able to see the simulations. It would be ideal if I could get it inline, but any display method would be nice.
Edit: This is only an issue with some environments, like classic control.
Update I
Inspired by this, instead of xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py> (which I couldn't get to work), I tried
xvfb-run -a jupyter notebook
Running the original script, I now get instead:
GLXInfoException: pyglet requires an X server with GLX
Update II
Issue #154 seems relevant. I tried disabling the pop-up, and directly creating the RGB colors
import gym
env = gym.make('CartPole-v0')
env.reset()
img = env.render(mode='rgb_array', close=True)
print(type(img)) # <--- <type 'NoneType'>
img = env.render(mode='rgb_array', close=False) # <--- ERROR
print(type(img))
I get ImportError: cannot import name gl_info.
Update III
With inspiration from @Torxed I tried creating a video file, and then rendering it (a fully satisfying solution).
Using the code from 'Recording and uploading results'
import gym
env = gym.make('CartPole-v0')
env.monitor.start('/tmp/cartpole-experiment-1', force=True)
observation = env.reset()
for t in range(100):
    # env.render()
    print(observation)
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t+1))
        break
env.monitor.close()
I tried following your suggestions, but got ImportError: cannot import name gl_info when running env.monitor.start(...
From my understanding, the problem is that OpenAI Gym uses pyglet, and pyglet 'needs' a screen in order to compute the RGB colors of the image that is to be rendered. It is therefore necessary to trick Python into thinking that there is a monitor connected.
Update IV
FYI, there are solutions online using Bumblebee that seem to work. This should work if you have control over the server, but since AWS runs in a VM I don't think you can use this.
Update V
In case you have this problem and don't know what to do (like me): the state of most environments is simple enough that you can create your own rendering mechanism. Not very satisfying, but... you know.
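For example, CartPole's observation is just (cart position, cart velocity, pole angle, pole velocity), so a hand-rolled matplotlib renderer takes only a few lines. A sketch (the function name and geometry constants here are my own guesses, not Gym's internals):

```python
import math

import matplotlib
matplotlib.use("Agg")  # headless-safe backend; in a notebook use %matplotlib inline
import matplotlib.pyplot as plt

def draw_cartpole(obs, pole_len=1.0, cart_w=0.4):
    """Draw a CartPole state from its 4-dim observation.

    obs = (cart position, cart velocity, pole angle, pole velocity);
    pole_len and cart_w are arbitrary drawing sizes, not Gym's values.
    """
    x, _, theta, _ = obs
    fig, ax = plt.subplots()
    ax.plot([x - cart_w / 2, x + cart_w / 2], [0, 0], linewidth=6)  # cart
    ax.plot([x, x + pole_len * math.sin(theta)],                    # pole
            [0, pole_len * math.cos(theta)], linewidth=3)
    ax.set_xlim(-2.4, 2.4)  # CartPole's position limits
    ax.set_ylim(-0.5, 1.5)
    return fig
```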
Referencing my other answer here: Display OpenAI gym in Jupyter notebook only
I made a quick working example here which you could fork: https://kyso.io/eoin/openai-gym-jupyter with two examples of rendering in Jupyter - one as an mp4, and another as a realtime gif.
The .mp4 example is quite simple.
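In outline it looks something like this (a sketch, not the exact code from the linked notebook; wrappers.Monitor is the later name for the env.monitor API used above, and it needs ffmpeg available on the server):

```python
def run_random_episode(env, max_steps=100):
    """Step an env with random actions until done; returns steps taken."""
    env.reset()
    for t in range(max_steps):
        _, _, done, _ = env.step(env.action_space.sample())
        if done:
            return t + 1
    return max_steps

if __name__ == "__main__":
    try:
        import gym
        from gym import wrappers
    except ImportError:
        gym = None  # gym isn't installed here; the recording part needs it
    if gym is not None:
        env = gym.make('CartPole-v0')
        # Monitor writes .mp4 files (plus stats) into ./gym-results/
        env = wrappers.Monitor(env, './gym-results/', force=True)
        run_random_episode(env)
        env.close()
```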
Then play it back in a new Jupyter cell, or download it from the server to some place where you can view the video.
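For playback, a helper along these lines would do it (the ./gym-results folder and the video_tag name are my assumptions):

```python
import base64
import glob

def video_tag(path):
    """Return an HTML <video> tag with the .mp4 at `path` inlined as base64."""
    with open(path, 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('ascii')
    return ('<video controls autoplay '
            'src="data:video/mp4;base64,%s"></video>' % encoded)

if __name__ == "__main__":
    try:
        from IPython.display import HTML, display
    except ImportError:
        HTML = None  # only useful inside Jupyter anyway
    if HTML is not None:
        for path in glob.glob('./gym-results/*.mp4'):
            display(HTML(video_tag(path)))
```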
If you're on a server with public access, you could run

python -m http.server

in the gym-results folder and just watch the videos there.

This GitHub issue gave an answer that worked great for me. It's nice because it doesn't require any additional dependencies (I assume you already have matplotlib) or configuration of the server. Just run, e.g.:
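Presumably something like this (the function wrapper is mine; in a notebook you would use %matplotlib inline rather than the Agg backend):

```python
import matplotlib
matplotlib.use("Agg")  # in a notebook, use %matplotlib inline instead
import matplotlib.pyplot as plt

def plot_frame(frame):
    """Show a single rgb_array frame (an HxWx3 numpy array)."""
    plt.axis('off')
    return plt.imshow(frame)

if __name__ == "__main__":
    try:
        import gym
    except ImportError:
        gym = None  # gym isn't installed here
    if gym is not None:
        env = gym.make('CartPole-v0')
        env.reset()
        plot_frame(env.render(mode='rgb_array'))
```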
Using mode='rgb_array' gives you back a numpy.ndarray with the RGB values for each position, and matplotlib's imshow (or other methods) displays these nicely. Note that if you're rendering multiple times in the same cell, this solution will plot a separate image each time. This is probably not what you want. I'll try to update this if I figure out a good workaround for that.
Update to render multiple times in one cell
Based on this StackOverflow answer, here's a working snippet (note that there may be more efficient ways to do this with an interactive plot; this way seems a little laggy on my machine):
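A sketch of what that snippet could look like (the function and parameter names are mine; in a notebook the redraw callback would be IPython's display/clear_output pair):

```python
import matplotlib
matplotlib.use("Agg")  # in a notebook, use %matplotlib inline instead
import matplotlib.pyplot as plt

def animate_episode(env, steps=50, redraw=None):
    """Take random steps and re-plot every frame from scratch (the slow way).

    `redraw` is called with the current figure after each frame; in a
    notebook it would call display.display(fig) and
    display.clear_output(wait=True).
    """
    env.reset()
    for _ in range(steps):
        plt.imshow(env.render(mode='rgb_array'))  # a brand-new image per step
        if redraw:
            redraw(plt.gcf())
        env.step(env.action_space.sample())

if __name__ == "__main__":
    try:
        import gym
        from IPython import display
    except ImportError:
        gym = None  # gym / IPython aren't installed here
    if gym is not None:
        def notebook_redraw(fig):
            display.display(fig)
            display.clear_output(wait=True)  # overwrite the previous frame
        animate_episode(gym.make('CartPole-v0'), redraw=notebook_redraw)
```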
Update to increase efficiency
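A sketch of the faster version (again with my own names; the key change is calling imshow once and then only swapping the image's pixel data with set_data):

```python
import matplotlib
matplotlib.use("Agg")  # in a notebook, use %matplotlib inline instead
import matplotlib.pyplot as plt

def animate_fast(env, steps=50, redraw=None):
    """Take random steps, reusing one AxesImage and only swapping its pixels."""
    env.reset()
    img = plt.imshow(env.render(mode='rgb_array'))  # call imshow exactly once
    for _ in range(steps):
        img.set_data(env.render(mode='rgb_array'))  # just update the RGB data
        if redraw:
            redraw(plt.gcf())
        env.step(env.action_space.sample())
    return img

if __name__ == "__main__":
    try:
        import gym
        from IPython import display
    except ImportError:
        gym = None  # gym / IPython aren't installed here
    if gym is not None:
        def notebook_redraw(fig):
            display.display(fig)
            display.clear_output(wait=True)
        animate_fast(gym.make('CartPole-v0'), redraw=notebook_redraw)
```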
On my machine, this was about 3x faster. The difference is that instead of calling imshow each time we render, we just change the RGB data on the original plot.

I managed to run and render openai/gym (even with mujoco) remotely on a headless server.
Usage: start Jupyter under a virtual framebuffer (as in Update I above), then render with mode='rgb_array' from inside the notebook. Example:
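For instance (a sketch; it assumes the notebook was started under a virtual framebuffer such as xvfb-run -a jupyter notebook, and the function name is mine):

```python
def collect_frames(env, steps=20):
    """Run random actions and collect the rendered rgb_array frames."""
    env.reset()
    frames = []
    for _ in range(steps):
        frames.append(env.render(mode='rgb_array'))
        env.step(env.action_space.sample())
    return frames

if __name__ == "__main__":
    try:
        import gym
    except ImportError:
        gym = None  # gym isn't installed here
    if gym is not None:
        frames = collect_frames(gym.make('CartPole-v0'))
        print(len(frames), frames[0].shape)
```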
In my IPython environment, Andrew Schreiber's solution can't plot images smoothly. The following is my solution:
If on a Linux server, open Jupyter with xvfb-run -s \"-screen 0 1400x900x24\" jupyter notebook
In Jupyter (with %matplotlib inline and IPython.display set up), the display iteration is:
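A sketch of what that iteration could look like (the redraw hook stands in for IPython's display.clear_output/display.display calls, which only make sense inside a notebook):

```python
import matplotlib
matplotlib.use("Agg")  # in a notebook, use %matplotlib inline instead
import matplotlib.pyplot as plt

def show_state(env, step=0, info="", redraw=None):
    """Re-plot the current frame into one reused figure with a step title.

    In a notebook, `redraw` would call display.clear_output(wait=True)
    and display.display(fig) from IPython.
    """
    plt.figure(3)  # reuse a single figure across calls
    plt.clf()
    plt.imshow(env.render(mode='rgb_array'))
    plt.title("Step: %d %s" % (step, info))
    plt.axis('off')
    if redraw:
        redraw(plt.gcf())
```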
I think we should just capture renders as video by using OpenAI Gym's wrappers.Monitor, and then display them within the notebook.

Example:
Dependencies
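On Ubuntu/Debian (or Colab) that would be something along these lines (the exact package list is an assumption):

```shell
# System side: a virtual framebuffer plus a video encoder for Monitor
apt-get install -y xvfb ffmpeg
# Python side: pyvirtualdisplay wraps Xvfb so gym thinks a screen exists
pip install pyvirtualdisplay
```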
Capture as video
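A sketch (pyvirtualdisplay starts an Xvfb screen so pyglet has something to render to; the names and the ./video folder are my assumptions):

```python
def record_episode(env):
    """Run one random episode to completion; Monitor writes the video."""
    env.reset()
    done, steps = False, 0
    while not done:
        _, _, done, _ = env.step(env.action_space.sample())
        steps += 1
    return steps

if __name__ == "__main__":
    try:
        import gym
        from gym import wrappers
        from pyvirtualdisplay import Display
    except ImportError:
        gym = None  # gym / pyvirtualdisplay aren't installed here
    if gym is not None:
        virtual_display = Display(visible=0, size=(1400, 900))  # wraps Xvfb
        virtual_display.start()
        env = wrappers.Monitor(gym.make('CartPole-v0'), './video', force=True)
        record_episode(env)
        env.close()
```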
Display within Notebook
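For example (a sketch; the helper name and the ./video folder are assumptions, chosen to match the capture step's output directory convention):

```python
import base64
import glob
import os

def latest_video_tag(folder='./video'):
    """Build an inline <video> tag for the newest .mp4 in `folder`."""
    path = max(glob.glob(os.path.join(folder, '*.mp4')), key=os.path.getmtime)
    with open(path, 'rb') as f:
        b64 = base64.b64encode(f.read()).decode('ascii')
    return ('<video width="400" controls '
            'src="data:video/mp4;base64,%s"></video>' % b64)

if __name__ == "__main__":
    try:
        from IPython.display import HTML, display
    except ImportError:
        HTML = None  # only useful inside Jupyter anyway
    if HTML is not None:
        display(HTML(latest_video_tag()))
```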
I hope it helps. ;)