
Predict and Save Your Results

In this tutorial, we discuss how to extract your algorithm’s predictions on the nuScenes dataset. Then, we demonstrate how to save these predictions as a JSON file ready for submission.

The tutorial goes through the following:

  • Load the detection results of one of the algorithms supported by the nuScenes Challenge, for example, the MEGVII algorithm.
  • Use a pre-implemented tracking algorithm such as AB3DMOT.
  • Append your results, after the preprocessing stage, to a JSON file in the same format as the nuScenes Tracking Challenge.
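For reference, a nuScenes Tracking Challenge submission is a single JSON file with `meta` and `results` keys. The sketch below builds a minimal file of that shape; the sample token and the numeric values are placeholders, not real dataset values.

```python
import json

# Illustrative sketch of the nuScenes Tracking Challenge submission format.
# The token and numbers below are placeholders, not real dataset values.
submission = {
    "meta": {
        "use_camera": False,   # which sensor inputs the algorithm consumed
        "use_lidar": True,
        "use_radar": False,
        "use_map": False,
        "use_external": False,
    },
    "results": {
        # One entry per sample token; each maps to a list of tracked boxes.
        "sample_token_placeholder": [
            {
                "sample_token": "sample_token_placeholder",
                "translation": [601.0, 1647.5, 1.0],  # box center (x, y, z) in meters
                "size": [2.0, 4.6, 1.7],              # width, length, height in meters
                "rotation": [0.97, 0.0, 0.0, 0.24],   # quaternion (w, x, y, z)
                "velocity": [1.1, 0.0],               # (vx, vy) in m/s
                "tracking_id": "1",                   # ID kept stable across frames
                "tracking_name": "car",               # one of the 7 tracking classes
                "tracking_score": 0.85,
            }
        ],
    },
}

with open("results.json", "w") as f:
    json.dump(submission, f, indent=2)
```

The blocks used in this tutorial produce and consume this format for you; the sketch is only meant to make the structure of the final file concrete.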

Create the Appending YonoArc Pipeline

  • Click on the YonoArc icon from Yonohub’s Main View. You can follow this tutorial to get familiar with the YonoArc interface.
  • Click the button in the upper left corner, then click the Input section and choose Dataset Player – nuScenes.
  • You can use the search engine to find YonoArc blocks. Click on the search field, type each of the following block names, and place the blocks on the canvas:
    • Predictions Loader – nuScenes
    • Eval Boxes Preprocessing – nuScenes
    • Predictions Appender
  • In this tutorial, we use a pre-implemented 3D tracking algorithm which is called AB3DMOT. You can implement your algorithm as a YonoArc block by following this tutorial.
  • Search for AB3DMOT – 3D Object Tracking in the search field. Drag and drop the block on the canvas.
  • Configure the following YonoArc blocks by clicking the settings icon in the upper left corner of each block. You can learn more about each block’s settings, functionality, and input/output types from its Help tab.
    • Dataset Player – nuScenes
      • First, you need to insert the path of the dataset. Under the Properties tab, browse to the path of the nuScenes dataset in the Dataset Directory property: click Browse -> YonoStoreDatasets -> nuScenesDataset-v1.0-Full. You will find three dataset folders; in this tutorial we work with the v1.0-trainval version, so select v1.0-trainval.
      • Second, select the dataset version from the Dataset Version property. Click on the drop list, and choose the val split version.
      • The nuScenes dataset contains raw data collected from different types of sensors. The Dataset Player block gives you the freedom to stream the sensory data of specific sensor type(s). It is recommended to choose only the sensor(s) you work with, to increase the maximum publishing rate you can achieve. For the sake of this tutorial, we check the Lidar Output, as we will use its transforms in the Eval Boxes Preprocessing block.
      • NOTE: a sensor output contains the raw data (images, point clouds), transforms, and intrinsic matrices (for camera sensors).
      • The Dataset Player has two publishing modes. Continuous Mode streams the data continuously, with only the ability to pause/reset the streaming through the corresponding buttons or through the control signal port. Step Mode, on the other hand, gives you full control of the streaming process. Select the Step option from the Publishing Mode drop list. This hands control to the Predictions Loader block, which sends a True value each time it wants the Dataset Player to publish the next sample.
    • Predictions Loader – nuScenes 
      • In this tutorial, the block is used to extract the detection results of any 3D object detection algorithm that follows the format of the previous nuScenes Detection Challenge. You can load the detection results of any of the algorithms supported by the nuScenes Challenge.
      • Under the Properties tab, select, from the Predictions Type droplist, Detection predictions type.
      • Browse to the downloaded JSON file of the chosen algorithm’s results, using the Results File Path property. We choose the MEGVII algorithm and load its val results (the detections corresponding to the validation split).
      • Change the Publishing Rate value to 2. In the Dataset Player’s Step Publishing Mode, the loader block drives the publishing cycle, so the Publishing Rate value in the Dataset Player block settings has no effect.
      • Leave the Configuration File Path property empty to have the official configuration file by default. You can change the path to your custom configuration file as well.
    • AB3DMOT – 3D Object Tracking
      • The nuScenes Tracking Challenge is limited to a number of classes. Therefore, you need to track only the required classes using the algorithm block.
      • Under the Properties tab, uncheck the All checkbox and check all the following checkboxes: Bicycle, Bus, Car, Motorcycle, Pedestrian, Trailer, and Truck.
      • Leave the parameters of the algorithm at their defaults. You can change these values if you understand their effect and want to experiment with different performance.
    • Eval Boxes Preprocessing
      • The block is used to perform the preprocessing stage of the evaluation by filtering the bounding boxes according to:
        • The distance from the boxes to the ego vehicle.
        • The number of lidar/radar points inside the boxes.
        • Whether the object is a bicycle inside a bike rack.
      • Under the Properties tab, select, from the Evaluation Type droplist, Tracking evaluation type.
      • Leave the Configuration File Path property empty to have the official configuration file by default. You can change the path to your custom configuration file as well.
    • Predictions Appender – nuScenes 
      • The block is used to save your algorithm results as a JSON file with the same format as the nuScenes challenge.
      • Under the Properties tab, select, from the Predictions Type droplist, Evaluation predictions type.
      • Browse to the desired saving path of your results and add the results’ filename to the path, if you want. The default filename is “results.json“.
      • For the Meta information about the submission, check the Use Lidar checkbox for AB3DMOT. You can choose your Meta information according to your algorithm; it indicates which types of inputs your algorithm uses.
  • Connect all the blocks as shown below. You can connect several blocks by selecting them and pressing Ctrl + E, or by selecting the source port and connecting it to the destination port. You can convert any connection to a tunnel by selecting the connection and pressing Ctrl + Y.
  • Launch the pipeline and wait a while until all the blocks are running.
  • Wait for an INFO alert produced in the Alerts tab of the predictions loader block which says “The results have been loaded!“.
  • Wait until there is an INFO alert produced in the Alerts tab of the Dataset Player block which says “Dataset has been loaded“.
  • Click the Play button to start the streaming process.
  • Check the running scene from the Alerts tab of the Dataset Player block.
  • You can save the intermediate results of your tracking algorithm by clicking the Save Results button in the Predictions Appender settings.
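Before submitting, it can be useful to sanity-check the saved file. The sketch below is a hypothetical check, assuming the appender wrote a file in the nuScenes tracking format described above; the function name and path are illustrative, not part of the Yonohub tooling.

```python
import json

# The seven classes evaluated by the nuScenes Tracking Challenge.
TRACKING_CLASSES = {"bicycle", "bus", "car", "motorcycle",
                    "pedestrian", "trailer", "truck"}

def check_submission(path):
    """Hypothetical sanity check of a tracking results file; returns the box count."""
    with open(path) as f:
        data = json.load(f)
    assert "meta" in data and "results" in data, "missing top-level keys"
    n_boxes = 0
    for sample_token, boxes in data["results"].items():
        for box in boxes:
            # Every box must carry the fields the tracking evaluation expects.
            for key in ("translation", "size", "rotation",
                        "tracking_id", "tracking_name", "tracking_score"):
                assert key in box, f"box in {sample_token} is missing {key}"
            assert box["tracking_name"] in TRACKING_CLASSES, \
                f"unexpected class {box['tracking_name']!r}"
            n_boxes += 1
    return n_boxes
```

Running this on the saved results file before uploading catches malformed boxes or classes outside the challenge early, instead of at submission time.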

You can follow the visual tutorial below to see the corresponding output of the above sequence of steps.