
Benchmark Your Tracking Algorithm

In this tutorial, we demonstrate the evaluation/benchmarking loop for your tracking results against the nuScenes Dataset. This tutorial is useful if you plan to participate in the nuScenes Tracking Challenge.

The tutorial goes through the following:

  • Load your own tracking results from the previous tutorial.
  • Benchmark your algorithm against the nuScenes Dataset.

Create the Benchmarking YonoArc Pipeline

  • Click on the YonoArc icon from Yonohub’s Main View. You can follow this tutorial to get familiar with the YonoArc interface.
  • Click the button in the upper-left corner, then click the Input section and choose Dataset Player – nuScenes.
  • You can use the search engine to find the YonoArc blocks: click the search field, type the following block names, and place each block on the canvas:
    • Predictions Loader – nuScenes
    • Sample Annotations to Eval Boxes – nuScenes
    • Eval Boxes Preprocessing – nuScenes
    • Tracking Benchmark – nuScenes
  • Configure the following YonoArc blocks by clicking the settings icon in the upper-left corner of each block. You can learn more about each block's settings, functionality, and input/output types from its Help tab.
    • Dataset Player – nuScenes
      • First, you need to insert the path of the dataset. Under the Properties tab, browse to the path of the nuScenes Dataset in the Dataset Directory property: click Browse -> YonoStoreDatasets -> nuScenesDataset-v1.0-Full. You will find three dataset folders; this tutorial uses the v1.0-trainval dataset version, so select v1.0-trainval.
      • Second, select the dataset split from the Dataset Version property. Click the drop-down list and choose the val split.
      • The nuScenes dataset contains raw data collected from several types of sensors. The Dataset Player block gives you the freedom to stream the sensory data of specific types only. It is recommended to choose just the sensor(s) you work with, to increase the maximum publishing rate you can achieve. For the sake of this tutorial, check the Lidar Output, as we will use its transforms in the Eval Boxes Preprocessing block.
      • NOTE: each sensor output contains the raw data (images or point clouds), transforms, and, for camera sensors, intrinsic matrices.
      • The Dataset Player has two publishing modes. Continuous Mode streams the data continuously; you can only pause or reset the streaming through the corresponding buttons or through the control signal port. Step Mode, on the other hand, gives you full control of the streaming process. Select the Step option from the Publishing Mode drop-down list. This hands control to the Predictions Loader block, which advances the stream by sending a True value for each step.
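To make Step Mode concrete, here is a toy Python model of the handshake: one sample is published per True control signal. This is an illustrative sketch only; the real blocks are configured in the YonoArc UI, and the class and method names here are hypothetical.

```python
class StepPlayer:
    """Toy model of the Dataset Player's Step publishing mode: instead of
    streaming continuously, it publishes exactly one sample per True value
    received on its control port. Illustrative only; the real block is
    configured through the YonoArc UI, not instantiated in code."""

    def __init__(self, samples):
        self._samples = list(samples)
        self._index = 0

    def on_control(self, signal):
        """Publish the next sample if the signal is True, else nothing."""
        if signal is not True or self._index >= len(self._samples):
            return None
        sample = self._samples[self._index]
        self._index += 1
        return sample

player = StepPlayer(["sample_0", "sample_1"])
first = player.on_control(True)    # the loader advances the stream
paused = player.on_control(False)  # anything but True publishes nothing
```

Because the loader only sends the next True after it has consumed the previous sample, the player can never outrun the consumer.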
    • Sample Annotations to Eval Boxes – nuScenes
      • The block is used to convert the sample annotations format to eval boxes format. You can learn more about the different formats from the nuscenes messages repository.
      • Under the Properties tab, select Tracking from the Evaluation Type drop-down list.
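For intuition, here is a minimal Python sketch of the conversion this block performs, going by the documented nuScenes formats: ground-truth sample annotations become tracking eval boxes keyed by the persistent instance token. The function and the category-map argument are hypothetical illustrations, not the block's actual code.

```python
def annotation_to_tracking_box(ann, category_to_tracking_name):
    """Convert one nuScenes sample-annotation dict into the tracking
    eval-box format. Field names follow the nuScenes tracking submission
    format; category_to_tracking_name is a hypothetical stand-in for the
    official category-to-tracking-class mapping."""
    tracking_name = category_to_tracking_name.get(ann["category_name"])
    if tracking_name is None:
        return None  # category is not part of the tracking challenge
    return {
        "sample_token": ann["sample_token"],
        "translation": ann["translation"],     # box center (x, y, z) in meters
        "size": ann["size"],                   # width, length, height in meters
        "rotation": ann["rotation"],           # orientation quaternion (w, x, y, z)
        "tracking_id": ann["instance_token"],  # instances persist across samples
        "tracking_name": tracking_name,
        "tracking_score": -1.0,                # ground truth has no confidence score
    }

# Toy annotation record standing in for a real sample annotation.
ann = {
    "sample_token": "sample_token_a",
    "translation": [10.0, 4.0, 0.8],
    "size": [1.9, 4.6, 1.7],
    "rotation": [1.0, 0.0, 0.0, 0.0],
    "instance_token": "instance_42",
    "category_name": "vehicle.car",
}
box = annotation_to_tracking_box(ann, {"vehicle.car": "car"})
```

Note how the instance token becomes the tracking ID: that is what lets the benchmark match the same physical object across consecutive samples.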
    • Eval Boxes Preprocessing
      • In this tutorial, the block is used to perform the preprocessing stage of the evaluation by filtering the ground truth bounding boxes according to:
        • The distance from each box to the ego vehicle.
        • The number of lidar/radar points inside each box.
        • Whether the box is a bicycle placed inside a bike rack.
      • Under the Properties tab, select the Tracking evaluation type from the Evaluation Type drop-down list.
      • Leave the Configuration File Path property empty to have the official configuration file by default. You can change the path to your custom configuration file as well.
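The filtering can be sketched as follows. This toy Python version mirrors the first two filters (distance to the ego vehicle and point count); the bike-rack filter needs map data and is omitted. All names are illustrative, not the block's implementation.

```python
import math

def filter_gt_boxes(boxes, ego_xy, max_dist_per_class):
    """Mimic the two main ground-truth filters of the preprocessing stage:
    drop boxes farther from the ego vehicle than the per-class range, and
    drop boxes containing no lidar or radar points. (The third filter,
    bicycles stored inside bike racks, requires map data and is omitted.)
    Illustrative sketch only, not the block's actual implementation."""
    kept = []
    for box in boxes:
        dx = box["translation"][0] - ego_xy[0]
        dy = box["translation"][1] - ego_xy[1]
        if math.hypot(dx, dy) > max_dist_per_class[box["tracking_name"]]:
            continue  # too far from the ego vehicle to evaluate reliably
        if box["num_lidar_pts"] + box["num_radar_pts"] == 0:
            continue  # no sensor returns inside the box
        kept.append(box)
    return kept

# Toy ground-truth boxes; ranges are illustrative, not the official config.
boxes = [
    {"tracking_name": "car", "translation": [5.0, 0.0, 0.0],
     "num_lidar_pts": 12, "num_radar_pts": 0},   # kept
    {"tracking_name": "car", "translation": [120.0, 0.0, 0.0],
     "num_lidar_pts": 12, "num_radar_pts": 0},   # dropped: out of range
    {"tracking_name": "car", "translation": [5.0, 3.0, 0.0],
     "num_lidar_pts": 0, "num_radar_pts": 0},    # dropped: empty box
]
kept = filter_gt_boxes(boxes, (0.0, 0.0), {"car": 50.0})
```

This is also why the block consumes the lidar transforms from the Dataset Player: the distance filter is computed relative to the ego vehicle's pose at each sample.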
    • Predictions Loader – nuScenes 
      • In this tutorial, the block is used to extract the tracking results of the previous tutorial.
      • Under the Properties tab, select the Tracking predictions type from the Predictions Type drop-down list.
      • Browse to the file path of the previously saved tracking results, from the previous tutorial, using the Results File Path property.
      • Change the Publishing Rate value to 2. While the Dataset Player is in the Step publishing mode, the loader block drives the cycle, so the Publishing Rate value in the Dataset Player block's own settings has no effect.
      • Leave the Configuration File Path property empty to have the official configuration file by default. You can change the path to your custom configuration file as well.
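For reference, the saved results follow the nuScenes tracking submission format: one JSON object with a meta block stating which modalities were used and a results map from sample token to tracked boxes. Here is a minimal loader sketch; the toy file stands in for your actual results file.

```python
import json
import tempfile

def load_tracking_results(path):
    """Load a nuScenes tracking submission file.

    The official submission is one JSON object: a "meta" block stating
    which modalities were used, and a "results" map from sample_token
    to the list of tracked boxes for that sample."""
    with open(path) as f:
        submission = json.load(f)
    return submission["meta"], submission["results"]

# Toy submission standing in for the results saved in the previous tutorial.
toy = {
    "meta": {"use_camera": False, "use_lidar": True, "use_radar": False,
             "use_map": False, "use_external": False},
    "results": {"sample_token_a": [
        {"tracking_id": "t1", "tracking_name": "car", "tracking_score": 0.9},
    ]},
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(toy, f)
    results_path = f.name

meta, results = load_tracking_results(results_path)
```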
    • Tracking Benchmark – nuScenes
      • The block is used to evaluate your algorithm against the nuScenes Dataset by inputting both the ground truth as well as the algorithm predictions.
      • Under the Properties tab, browse to your desired output directory using the Output Directory property. The output of the evaluation process will be saved as two JSON files: metrics_details.json, metrics_summary.json.
      • Leave the Configuration File Path property empty to have the official configuration file by default. You can change the path to your custom configuration file as well.
      • The block needs the sample information as well as the Meta information as inputs from the Dataset Player for the evaluation process.
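Once the evaluation finishes, you can also inspect the summary programmatically. The sketch below reads metrics_summary.json and pulls out the headline AMOTA/AMOTP scores; the key names are assumptions based on the nuScenes tracking metrics, so verify them against your own output file.

```python
import json
import tempfile

def headline_metrics(summary_path):
    """Pull the headline tracking metrics out of metrics_summary.json.

    The key names (amota, amotp) are assumptions based on the nuScenes
    tracking metrics; verify them against your actual output file."""
    with open(summary_path) as f:
        summary = json.load(f)
    return {k: summary[k] for k in ("amota", "amotp") if k in summary}

# Toy summary standing in for the file the Tracking Benchmark block writes.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"amota": 0.551, "amotp": 0.800, "recall": 0.6}, f)
    summary_path = f.name

headline = headline_metrics(summary_path)
```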
  • Connect all the blocks as shown below. You can connect several blocks by selecting them and pressing Ctrl+E, or by selecting a source port and connecting it to the destination port. You can convert any connection to a tunnel by selecting the connection and pressing Ctrl+Y.
  • Launch the pipeline and wait until all the blocks are running.
  • Wait for an INFO alert produced in the Alerts tab of the predictions loader block which says “The results have been loaded!“.
  • Wait until there is an INFO alert produced in the Alerts tab of the Dataset Player block which says “Dataset has been loaded“.
  • Click the Play button to start the streaming process.
  • Check the running scene from the Alerts tab of the Dataset Player block.
  • At any point, you can evaluate the bounding boxes batched so far in the Tracking Benchmark block by clicking the Evaluate button in the block settings. An INFO alert saying “Evaluation process is started” will be produced in the block's Alerts tab. Wait for two further INFO alerts: “Evaluation process is finished” and “The evaluation results have been saved!”.
  • Now, check your output directory to review your evaluation results.
  • You can add the visualization blocks from the second tutorial to visualize the predicted vs. the ground truth bounding boxes.

You can follow the below visual tutorial to see the corresponding output of the above sequence of steps.