Visualize the nuScenes Dataset

In this tutorial, we illustrate how to visualize the nuScenes raw data combined with the ground truth annotations. The dataset contains different types of raw data extracted from various sensors. Consequently, a subset of the package is dedicated to the visualization process.

The tutorial goes through the following:

  • Create a YonoArc pipeline that aims to,
    • Draw the 3D Ground Truth Boxes on the raw images captured by the Camera sensors.
    • Visualize the Radar/Lidar point cloud in Rviz with different numbers of sweeps.
    • Convert the 3D Ground Truth Boxes to Markers to visualize them, in Rviz, on top of the Lidar point cloud.
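
Throughout the pipeline, nuScenes poses and box orientations are stored as a translation plus a quaternion in (w, x, y, z) order. As background for the frame transformations in this tutorial, here is a minimal NumPy sketch of turning such a quaternion into a rotation matrix (the helper name is ours):

```python
import numpy as np

def quat_to_rot(q):
    """Convert a (w, x, y, z) quaternion (nuScenes' storage order) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# Example: a 90-degree yaw about the z-axis, i.e. the quaternion (cos 45deg, 0, 0, sin 45deg).
R = quat_to_rot(np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]))
# R maps the x-axis onto the y-axis (approximately [0, 1, 0]).
```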

Constructing the Visualization Pipeline

Draw the 3D Ground Truth Boxes on Raw Images

  • Click on the YonoArc icon from Yonohub’s Main View. You can follow this tutorial to get familiar with the YonoArc interface.
  • Click the button in the upper-left corner, then click the Input section and choose Dataset Player – nuScenes.
  • You can use the search engine to find YonoArc blocks: click on the search field, type the following block names, and place the blocks on the canvas,
    • Sample Annotations to Eval Boxes – nuScenes
    • Boxes Frame Transformer – nuScenes
    • Draw 3D Boxes – nuScenes
    • Video Viewer
  • Configure the following YonoArc blocks by clicking the settings icon in the upper-left corner of each block. You can learn more about a block's settings, functionality, and input/output types from its Help tab.
    • Dataset Player – nuScenes
      • First, you need to insert the path of the dataset. Under the Properties tab, browse to the path of the nuScenes dataset in the Dataset Directory property: click Browse -> YonoStoreDatasets -> nuScenesDataset-v1.0-Full. You find three different dataset folders; we deal with the v1.0-trainval dataset version in this tutorial, so select v1.0-trainval.
      • Second, select the dataset version from the Dataset Version property. Click on the drop list, and choose the train split version.
      • The nuScenes dataset contains different raw data collected from different types of sensors. The Dataset Player block gives you the freedom to stream the sensory data of specific type(s). It is recommended to choose only the sensor(s) you work with, to increase the maximum publishing rate you can achieve. For the sake of this tutorial, check the Front Camera Output box.
      • NOTE: a sensor output contains the raw data (images, point clouds), transforms, and intrinsic matrices (for camera sensors).
      • Then, choose the rate at which you want to stream the data. Insert a value of 5 in the Publishing Rate property.
      • In this Dataset Player, you have two publishing modes. Continuous Mode streams the data continuously, with only the ability to pause/reset the streaming through the corresponding buttons or through the control signal port. On the other hand, Step Mode gives you full control over the streaming process; we will discuss this mode further in the upcoming tutorials. Select the Continuous option from the Publishing Mode drop list property.
    • Sample Annotations to Eval Boxes – nuScenes
      • The block is used to convert the sample annotations format to eval boxes format. You can learn more about the different formats from the nuscenes messages repository.
      • Under the Properties tab, select the Detection type from the Evaluation Type droplist.
    • Boxes Frame Transformer – nuScenes
      • The block is used to transform the current reference frame of the input bounding boxes to the desired reference frame.
      • Under the Properties tab, select the Front Camera frame from the Desired Frame droplist. You can learn more about the settings from the Help tab.
    • Draw 3D Boxes – nuScenes
      • The block is used to draw the input bounding boxes over the input raw image.
    • Video Viewer
      • In the Title text field, write “Front Camera Annotated Image”.
      • You can change the quality of the video as well from the Quality droplist.
  • You can enlarge the Dataset Player block by clicking the enlarging icon in the upper right corner of the block.
  • Connect all the blocks as shown below. You can connect several blocks by selecting them and pressing “Ctrl + E”, or by selecting the source port and connecting it to the destination port.
  • Launch the pipeline and wait a while until all the blocks are running.
  • Wait until there is an INFO alert produced in the Alerts tab of the Dataset Player block which says “Dataset has been loaded”.
  • Click the Play button to start the streaming process.
  • Open the dashboard by clicking on the Dashboard button in the bottom left corner.
  • Then, you can see the annotated frame in the Dashboard window tab as shown below.
  • Check the running scene from the Alerts tab of the Dataset Player block.
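
Under the hood, the last part of this pipeline amounts to two geometric steps: express each box's corners in the camera frame (the Boxes Frame Transformer's job), then project them through the camera's intrinsic matrix before drawing (the Draw 3D Boxes' job). A minimal NumPy sketch with made-up numbers; the helper names and the intrinsics are ours, not the blocks' API:

```python
import numpy as np

def box_corners(center, size, rot):
    """8 corners of a 3D box; size is (width, length, height) as in nuScenes."""
    w, l, h = size
    x = l / 2 * np.array([1, 1, 1, 1, -1, -1, -1, -1])
    y = w / 2 * np.array([1, -1, -1, 1, 1, -1, -1, 1])
    z = h / 2 * np.array([1, 1, -1, -1, 1, 1, -1, -1])
    return rot @ np.vstack((x, y, z)) + np.asarray(center).reshape(3, 1)

def project_to_image(points_cam, K):
    """Project 3D points already expressed in the camera frame (z forward) to pixels."""
    depths = points_cam[2]
    uvw = K @ points_cam
    return uvw[:2] / depths, depths

# Hypothetical pinhole camera and a box 10 m in front of it (identity rotation for simplicity).
K = np.array([[1266.0, 0.0, 800.0],
              [0.0, 1266.0, 450.0],
              [0.0, 0.0, 1.0]])
corners = box_corners(center=[0.0, 0.0, 10.0], size=(2.0, 4.0, 1.5), rot=np.eye(3))
pixels, depths = project_to_image(corners, K)  # pixels: (2, 8) image coordinates
```

In a real drawing step, corners with non-positive depth (behind the camera) would have to be culled before connecting the projected corners with lines.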

Visualize the Radar/Lidar Point Cloud in Rviz

  • In the live mode, click the button in the upper-left corner, then click the Output section and choose Rviz.
  • Click the launch button in the Rviz block settings and wait until the block's color turns green.
  • Return to the Dataset Player block settings and check the Front Radar Output box.
  • Click the Open Rviz button in the Rviz block settings.
  • Now, you have a running Rviz program in a different window tab.
  • Change the Fixed Frame from map to front_radar frame. The names of the different frames are: global, front_camera, front_left_camera, front_right_camera, back_camera, back_left_camera, back_right_camera, lidar, front_radar, front_left_radar, front_right_radar, back_left_radar, back_right_radar.
  • Click the Add button at the bottom left, then select the PointCloud2 topic type.
  • Click on the created PointCloud2 topic, change the style from the droplist to Points, and select the RADAR_FRONT topic from the Topic droplist.
  • Now, you find a visualized Radar point cloud in the Rviz space.
  • Return to the pipeline window tab, check the Lidar Output box in the Dataset Player block settings.
  • Back to the Rviz window tab, change the Fixed Frame from front_radar to lidar frame.
  • Click on the previously created PointCloud2 topic, select the LIDAR_TOP topic from the Topic droplist. 
  • Now, you find a visualized Lidar point cloud in the Rviz space. You can decrease the size of the points from the Size (Pixels) field by typing 1.5 instead.
  • EXTRA: Return to the pipeline window tab and change the value of the Lidar/Radar Number of Sweeps property to 10.
  • Now, notice how the Lidar point cloud becomes denser.
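
Conceptually, raising the number of sweeps means taking the points of several past sweeps, transforming each into the frame of the latest sweep (compensating ego motion), and concatenating them, which is why the cloud gets denser. A rough NumPy sketch; the function and the toy transforms are ours:

```python
import numpy as np

def aggregate_sweeps(sweeps, ref_from_sweep):
    """Merge several Lidar sweeps into the reference (latest) frame.

    sweeps: list of (3, N) point arrays, one per sweep.
    ref_from_sweep: list of (4, 4) homogeneous transforms taking each
    sweep's frame into the reference frame.
    """
    merged = []
    for pts, T in zip(sweeps, ref_from_sweep):
        homog = np.vstack((pts, np.ones((1, pts.shape[1]))))  # to homogeneous coordinates
        merged.append((T @ homog)[:3])
    return np.hstack(merged)

# Hypothetical example: two sweeps, the older one captured 1 m behind the reference frame.
sweep_now = np.array([[1.0, 2.0], [0.0, 0.0], [0.0, 0.0]])
sweep_old = np.array([[5.0], [0.0], [0.0]])
T_now = np.eye(4)
T_old = np.eye(4)
T_old[0, 3] = -1.0  # shift the old sweep's points back by 1 m along x
cloud = aggregate_sweeps([sweep_now, sweep_old], [T_now, T_old])  # (3, 3) merged cloud
```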


Convert the 3D Ground Truth Boxes to Markers

  • Back in the YonoArc pipeline, click the button in the upper-left corner and search for Boxes to Markers – nuScenes. Then, place it on the canvas along with another Boxes Frame Transformer – nuScenes block.
  • Change the desired frame, in the Boxes Frame Transformer block settings, to the Lidar frame from the Desired Frame droplist property.
  • Connect the recently inserted blocks to the rest of the pipeline as shown below, then click the launch button for both blocks. To organize your pipeline connections, you can convert any connection into a tunnel: select the desired connection, then press “Ctrl + Y”.
  • Back in Rviz, click the Add button and select the MarkerArray topic type.
  • Click on the created topic and select the markers topic from the Marker Topic droplist.
  • Now, you can see the visualization of the ground truth bounding boxes in the 3D Lidar point cloud map.
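
A common way to render a box in Rviz is a Marker of type LINE_LIST: the box's 8 corners connected by 12 line segments. A minimal sketch of building that edge list, assuming the corner ordering shown below (not necessarily the one the Boxes to Markers block uses internally):

```python
import numpy as np

def box_edges(corners):
    """Return the 12 edges of a box as pairs of 3D points, the flat
    point-pair layout an Rviz LINE_LIST Marker expects."""
    ring_front = [(0, 1), (1, 2), (2, 3), (3, 0)]  # face at x = +l/2
    ring_back = [(4, 5), (5, 6), (6, 7), (7, 4)]   # face at x = -l/2
    struts = [(i, i + 4) for i in range(4)]        # lines joining the two faces
    return [(corners[:, a], corners[:, b]) for a, b in ring_front + ring_back + struts]

# Hypothetical unit cube centred at the origin; the first four corners
# sit at x = +0.5, the last four at x = -0.5.
x = 0.5 * np.array([1, 1, 1, 1, -1, -1, -1, -1])
y = 0.5 * np.array([1, -1, -1, 1, 1, -1, -1, 1])
z = 0.5 * np.array([1, 1, -1, -1, 1, 1, -1, -1])
corners = np.vstack((x, y, z))
edges = box_edges(corners)  # 12 pairs of endpoints, each edge of length 1
```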


You can follow the visual tutorial below to see the corresponding output of the above sequence of steps.