Getting Started
To get started, please follow the installation instructions to set up SOFA and SofaPython3.
After that, either look through the implemented Scenes, or follow the workflow below to start implementing your own reinforcement learning environment.
Environment and Scene Development Workflow
1. Create a new directory in `sofa_env/sofa_env/scenes` for your scene.
2. Add a `scene_description.py` file that contains a `createScene` function. Look into `sofa_env/sofa_env/scenes/controllable_object_example/scene_description.py` for an example.
3. Add your SOFA components to the scene. `sofa_env.sofa_templates` contains a few standard components.
4. Iteratively test your `createScene` function by passing it to the SOFA binary: `$SOFA_ROOT/bin/runSofa <path_to_your_scene_description.py>`
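The shape of such a `scene_description.py` can be sketched as follows. This is a hedged, self-contained sketch, not the actual example file: the SOFA-specific component calls are only hinted at in comments, and everything beyond the `createScene` signature and the returned `dict` is an assumption.

```python
# Hypothetical minimal scene_description.py sketch. A real scene builds the
# graph through the SOFA Python bindings and the helpers in
# sofa_env.sofa_templates; those calls are replaced by comments here.

def createScene(root_node):
    # In a real scene: add required plugins, solvers, meshes, lights, and a
    # camera to root_node here (e.g. via root_node.addObject(...)).
    camera = None  # placeholder; set a real camera if you want rendered images

    # Return the root node, the camera, and any interactive components as a
    # dict so the environment class can access them later.
    return {
        "root_node": root_node,
        "camera": camera,
        # "interactive_objects": {...},  # hypothetical additional entries
    }
```

Returning a plain `dict` keeps the handles to interactive components accessible to the environment class without any global state.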
5. Let `createScene` return the `root_node`, your camera (if you want to render images), and additional interactive components from your scene as a `dict`.
6. Create your `SofaEnv` class that describes interactions with the scene. Have a look at Interacting with the Simulation and `sofa_env/sofa_env/scenes/controllable_object_example/controllable_env.py` for an example.
7. Implement your environment's `_do_action`, `step`, and `reset` functions.
8. Iteratively test your environment directly through Python: `python3 <path_to_your_env.py>`
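To illustrate how `_do_action`, `step`, and `reset` relate to each other, here is a hedged skeleton. The real class would subclass `SofaEnv` and drive the SOFA simulation; in this sketch the simulation is replaced by a plain position list, and the class name, reward, and observation are made-up placeholders.

```python
# Hypothetical environment skeleton. A real environment subclasses SofaEnv
# and steps the SOFA simulation; the position list below is a stand-in state
# so the sketch is runnable on its own.

class DummyControllableObjectEnv:
    def __init__(self):
        self._position = [0.0, 0.0, 0.0]  # stand-in for the simulation state

    def reset(self):
        # Reset the (stand-in) simulation and return the first observation.
        self._position = [0.0, 0.0, 0.0]
        return self._get_observation()

    def _do_action(self, action):
        # Apply the action to the scene, e.g. move the controlled object.
        self._position = [p + a for p, a in zip(self._position, action)]

    def step(self, action):
        # Apply the action, advance the simulation, and assemble the results.
        self._do_action(action)
        observation = self._get_observation()
        reward = -sum(p * p for p in self._position) ** 0.5  # toy reward
        terminated = False
        info = {}
        return observation, reward, terminated, info

    def _get_observation(self):
        return list(self._position)
```

Running the module directly (step 8) would then just instantiate the class and call `reset` and `step` in a loop.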