An OpenAI Gym third-party environment for the CARLA simulator.
- Ubuntu 16.04
- Set up a conda environment:
$ conda create -n env_name python=3.6
$ conda activate env_name
- Clone this git repo into an appropriate folder:
$ git clone https://github.com/cjy1992/gym-carla.git
- Enter the repo root folder and install the packages:
$ pip install -r requirements.txt
$ pip install -e .
- Download CARLA_0.9.6, extract it to some folder, and add the CARLA Python API egg to the PYTHONPATH environment variable:
$ export PYTHONPATH=$PYTHONPATH:$YourFolder$/CARLA_0.9.6/PythonAPI/carla/dist/carla-0.9.6-py3.5-linux-x86_64.egg
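A quick way to check that the egg is visible to Python (a minimal sketch; it assumes the egg is compatible with your Python version):
```python
# If PYTHONPATH is set correctly, this import succeeds and prints the
# location of the carla package inside the egg.
import carla
print(carla.__file__)
```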
- Enter the CARLA root folder and launch the CARLA server by:
$ ./CarlaUE4.sh -windowed -carla-port=2000
You can use Alt+F1 to get back your mouse control.
Alternatively, you can run the server in non-display mode:
$ DISPLAY= ./CarlaUE4.sh -opengl -carla-port=2000
- Run the test file:
$ python test.py
See test.py for details on how to use the CARLA gym wrapper.
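As a minimal sketch of the wrapper's interface (the parameter names follow test.py, but the values here are illustrative assumptions; check test.py for the full, authoritative set):
```python
import gym
import gym_carla  # registers the 'carla-v0' environment

# Illustrative subset of the wrapper's parameters; see test.py for all of them.
params = {
  'number_of_vehicles': 100,   # surrounding vehicles to spawn
  'dt': 0.1,                   # simulation time step
  'port': 2000,                # port of the running CARLA server
  'town': 'Town03',            # CARLA map to load
  'max_time_episode': 1000,    # maximum timesteps per episode
  'desired_speed': 8,          # target longitudinal speed (m/s)
}

env = gym.make('carla-v0', params=params)
obs = env.reset()
while True:
  action = [2.0, 0.0]  # [acceleration, steering]; illustrative values
  obs, reward, done, info = env.step(action)
  if done:
    obs = env.reset()
```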
- We provide a dictionary observation including the front-view camera image (obs['camera']), a bird's-eye-view lidar point cloud rendering (obs['lidar']), and a bird's-eye-view semantic representation (obs['birdeye']).
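Continuing the sketch above, the entries can be inspected like this (the exact array shapes depend on the display and lidar settings, so treat them as assumptions):
```python
# Each entry is an image-like numpy array rendered by the wrapper.
camera = obs['camera']    # front-view camera image
lidar = obs['lidar']      # bird's-eye-view lidar rendering
birdeye = obs['birdeye']  # bird's-eye-view semantic map
print(camera.shape, lidar.shape, birdeye.shape)
```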
- The episode terminates when the ego vehicle collides, runs out of lane, reaches the destination, or hits the maximum number of episode timesteps. Users may modify the function _terminal in carla_env.py to customize the termination condition, as sketched below.
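For example, a customized _terminal that ends the episode only on collision or timeout might look like this (the attribute names collision_hist, time_step, and max_time_episode are assumptions; verify them against carla_env.py):
```python
# Hypothetical override inside carla_env.py; attribute names are
# assumptions and should be checked against the actual class.
def _terminal(self):
  # Terminate on any recorded collision.
  if len(self.collision_hist) > 0:
    return True
  # Terminate when the episode step budget is exhausted.
  if self.time_step > self.max_time_episode:
    return True
  return False
```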
- The reward is a weighted combination of the longitudinal speed and penalties for collision, exceeding the maximum speed, driving out of lane, large steering, and large lateral acceleration. Users may modify the function _get_reward in carla_env.py to customize the reward function, as sketched below.
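A hedged sketch of such a weighted reward follows (the weights and helper expressions are illustrative assumptions, not the repository's actual coefficients; the out-of-lane term is omitted for brevity):
```python
import numpy as np

# Hypothetical reward inside carla_env.py mirroring the weighted structure
# described above; all weights are illustrative assumptions.
def _get_reward(self):
  v = self.ego.get_velocity()            # CARLA vehicle velocity (Vector3D)
  speed = np.sqrt(v.x**2 + v.y**2)       # longitudinal speed proxy
  steer = self.ego.get_control().steer   # current steering command

  r_collision = -1.0 if len(self.collision_hist) > 0 else 0.0
  r_fast = -1.0 if speed > self.desired_speed else 0.0
  r_steer = -steer**2                    # penalize large steering
  r_lat = -abs(steer) * speed**2         # lateral acceleration proxy

  return 200*r_collision + speed + 10*r_fast + 5*r_steer + 0.2*r_lat
```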
- See https://github.com/cjy1992/interp-e2e-driving, which provides implementations of the papers "Interpretable End-to-end Urban Autonomous Driving with Latent Deep Reinforcement Learning" and "Model-free Deep Reinforcement Learning for Urban Autonomous Driving", as well as several deep RL baselines for autonomous driving on CARLA.
- See the paper "Deep Imitation Learning for Autonomous Driving in Generic Urban Scenarios with Enhanced Safety".
- See https://github.com/cjy1992/detect-loc-map, which provides an implementation of the paper "End-to-end Autonomous Driving Perception with Sequential Latent Representation Learning".