Documentation


If you want to know how BlenSor is used and don't want to read the documentation, head right on to the Tutorials. This software is currently under development and the feature set is not yet complete. The simulation of the sensors will become more accurate as development continues, so if data from an earlier version is used in research, your algorithms may produce different results depending on the development stage. In order to identify (and name) the type of simulation used in a research article, we name the particular development stage each sensor currently implements. The name is of the form BSS-<stage>, where BSS is an abbreviation for BlenSor Simulation Stage. We identify five main stages, which are described next:
  • BSS-0 The sensor is only a stub. The output (if any) should not be used.
  • BSS-1 The sensor is implemented. The basic working principle is captured; however, there may still be serious simplifications, and many side effects are not implemented. No evaluation of the data has been done so far.
  • BSS-2 Visual checking of the noise model and the physical characteristics has been done. The sensor produces output very similar to the real sensor in many scenarios. Special surfaces (reflections, glass, ...) are not simulated.
  • BSS-3 The accuracy of the noise models has been statistically verified. Special surfaces (reflections, glass, ...) are not simulated.
  • BSS-4 Several side effects of the sensor have been implemented (e.g. special surfaces, environmental conditions).

Current Feature Set:

  • BSS-2 implementation of a Velodyne HDL-64E S2 sensor
  • BSS-1 implementation of an Ibeo LUX line scanner
  • BSS-1 implementation of a SwissRanger Time-of-Flight camera
  • BSS-1 implementation of a Kinect Sensor
  • BSS-1 implementation of a generic LIDAR scanner
  • Support to obtain accurate depth map information
  • Support to export motion data per object
  • Direct visualization of a single scan within Blender
  • Direct visualization of a range of scans within Blender
  • Unique object identifiers (IDs) per ray
  • Physical simulation through the Blender game engine
  • Support for reflecting surfaces
  • Support for transparent surfaces (glass)
  • Back-folding for the time-of-flight camera
  • Support for output in the PCD format of the PCL (Point Cloud Library)
  • Starting with version 1.0.10, the RGB values of the material are stored in the PCD and EVD files
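Since scans can be exported to the PCD format (including, from version 1.0.10 on, the material's RGB values), the output can be post-processed outside of Blender. The following is a minimal sketch of reading such a file back in. It assumes a plain ASCII PCD file with an `x y z rgb` field layout as defined by the PCD 0.7 specification; the exact fields a given BlenSor version writes may differ, so treat the field names as an assumption.

```python
import struct

def read_ascii_pcd(path):
    """Parse an ASCII PCD file into a list of {field: value} dicts.

    Only the FIELDS and DATA header lines are interpreted; the other
    header entries (SIZE, TYPE, WIDTH, ...) are skipped.
    """
    fields = []
    points = []
    in_data = False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            if in_data:
                points.append(dict(zip(fields, map(float, line.split()))))
            elif line.startswith("FIELDS"):
                fields = line.split()[1:]
            elif line.startswith("DATA"):
                if line.split()[1] != "ascii":
                    raise ValueError("only DATA ascii is handled here")
                in_data = True  # everything from here on is point data
    return points

def unpack_rgb(rgb_float):
    """PCD packs RGB into the bit pattern of a float; recover the bytes."""
    packed = struct.unpack("I", struct.pack("f", rgb_float))[0]
    return (packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF
```

Note that the `rgb` field is not three separate numbers: following the PCL convention, the three color bytes are packed into the bit pattern of a single float, which is why `unpack_rgb` reinterprets the float's bits rather than converting its numeric value.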

Upcoming Features:

  • Improve the sensor output to produce more physically accurate data.
    1. Support to add material parameters that influence the probability of a ray being reflected or even absorbed
    2. Lens distortion for the time-of-flight camera
    3. Support for refraction
  • Importing a range of scans
  • Replaying EVD files via UDP to simulate a real Velodyne
  • Full RGBD support. The current time-of-flight implementation does not provide the value of the actual pixel, only its position