
Custom Python 3 Block

In this tutorial, you will create a custom YonoArc block. You will replace the Edge Detection block from the Hello YonoArc tutorial with your own custom edge detection block, and you will use Jupyter Notebook to develop it.

  1. Launch Jupyter Notebook.
    • Click the Jupyter Notebook icon on Yonohub’s main view.
    • Select the YonoCommons – CPU environment since it has OpenCV, the only package we need for the new block.
    • Under the General tab, select the C1 resource model since we only need Jupyter Notebook to develop the block.
    • Click Launch or Express Launch. Wait until Jupyter Notebook finishes loading.
    • Once Jupyter Notebook is running, click its icon then click the Web UI URL.
  2. Create a folder for the new block.
    • Navigate into the MyDrive directory by clicking on MyDrive.
    • Click New >> Folder in the upper-right corner. A folder named Untitled Folder is created.
    • Rename the folder to EdgeDetectionCustomBlock by selecting the folder and clicking Rename.
    • Navigate into the folder by clicking its name.
  3. Develop the Edge Detection block.
    • Click New >> Python 3 to create a new ipynb file for Python 3 code. The file opens in a new tab.
    • Rename the file from Untitled to edge_detection by clicking its name at the top.
    • Paste the following source code into the first cell in the notebook. Note that this code can only be executed as a YonoArc block, not within Jupyter Notebook. That’s because only YonoArc can set up your dependencies such as the yonoarc_utils package, provide your input data coming from other blocks to on_new_messages, and deliver your output data sent using self.publish to other blocks. An optional sketch for previewing the Canny call locally follows the code.
      import cv2
      from yonoarc_utils.image import to_ndarray, from_ndarray
      
      
      class EdgeDetection:
      
          def on_new_messages(self, messages):
      
              # Get the incoming image message and convert it to an ndarray.
              image = to_ndarray(messages['image'])
      
              # Read the threshold properties; get_property returns the latest values even after a live update.
              min_threshold = self.get_property('min_threshold')
              max_threshold = self.get_property('max_threshold')
      
              # Detect edges in the image using Canny Edge Detection.
              edges = cv2.Canny(image, min_threshold, max_threshold)
      
              # Create the output message from the edges image and the incoming message header.
              edges = from_ndarray(edges, messages['image'].header)
      
              # Publish the output image message.
              self.publish('edges', edges)
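    • Optionally, preview the core Canny call locally. The block as a whole only runs inside YonoArc, but the cv2.Canny call itself can be tried in a separate notebook cell. The following is a minimal sketch, assuming a placeholder image named sample.png exists next to the notebook (replace the path with any image in MyDrive); it is not part of the block.
      import cv2

      # Load a sample image in grayscale. 'sample.png' is a placeholder path.
      image = cv2.imread('sample.png', cv2.IMREAD_GRAYSCALE)

      # Apply Canny edge detection with the same default thresholds the block will use.
      edges = cv2.Canny(image, 100, 200)

      # Save the result next to the notebook for inspection.
      cv2.imwrite('sample_edges.png', edges)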
  4. Create the custom block.
    • Start YonoArc by clicking its icon on Yonohub’s main view.
    • Open the YonoArc pipeline you previously created in the Hello YonoArc tutorial by clicking File >> Open and selecting its .arc file.
    • Remove the Edge Detection block that you are going to replace.
    • Click the + button in the upper-left corner and drag the Custom Block from the Others toolbox onto the canvas.
    • Click the settings icon on the custom block. Set the attributes of the block as follows:
      • Name: Canny Edge Detection
      • Description: This block detects edges in the input image using cv2.Canny.
      • Language: Python 3
      • Folder Path: Click Browse then select the EdgeDetectionCustomBlock folder and click Open.
      • File Path: Click Browse, then select the EdgeDetectionCustomBlock/edge_detection.ipynb file and click Open. YonoArc will automatically convert this file to a .py file before executing it.
      • Class Name: EdgeDetection. This is the name of the class representing the block.
      • Input Ports: A single port with the following attributes:
        • Name: Input Image
        • Key: image. This key is used to get the image message in on_new_messages: messages['image'].
        • Message: sensor_msgs/Image.
      • Output Ports: A single port with the following attributes:
        • Name: Edges Image
        • Key: edges. This key is used to publish the edges image: self.publish('edges', edges).
        • Message: sensor_msgs/Image.
      • Properties: Add the first property with the following attributes:
        • Type: Number
        • Name: Min Threshold
        • Key: min_threshold. This key is used to get the property value: self.get_property('min_threshold').
        • Description: Any edges with an intensity gradient above Max Threshold are sure to be edges, and those below Min Threshold are sure to be non-edges and are discarded. Pixels that lie between these two thresholds are classified as edges or non-edges based on their connectivity: if they are connected to “sure-edge” pixels, they are considered part of an edge; otherwise, they are also discarded.
        • Click Create property. The property will be added below the block description.
        • Set its value to 100.
      • Properties: Add the second property with the following attributes:
        • Type: Number
        • Name: Max Threshold
        • Key: max_threshold. This key is used to get the property value: self.get_property('max_threshold').
        • Description: Any edges with an intensity gradient above Max Threshold are sure to be edges, and those below Min Threshold are sure to be non-edges and are discarded. Pixels that lie between these two thresholds are classified as edges or non-edges based on their connectivity: if they are connected to “sure-edge” pixels, they are considered part of an edge; otherwise, they are also discarded.
        • Click Create property. The property will be added below the block description.
        • Set its value to 200. To preview how different threshold pairs affect the result, see the optional sketch at the end of this step.
      • Git Repositories of Messages: You can keep the default message repositories because the ros-common-msgs repository has the sensor_msgs package, which contains the Image message, and the ros-std-msgs repository has the std_msgs package, which contains the Header message used in sensor_msgs/Image.
        • URL: https://gitlab.yonohub.com/YonoTeam/ros-common-msgs.git – Branch/Tag: 1.12.6
        • URL: https://gitlab.yonohub.com/YonoTeam/ros-std-msgs.git – Branch/Tag: 0.5.11
      • Execution Mode: Async
      • Environment: YonoCommons – CPU. It has OpenCV, the only package we need in the code.
      • Minimum Resources: C0.1 x 3.
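    • Optionally, get a feel for how the two thresholds interact before tuning them in the block. The following is a minimal local sketch for the notebook, assuming the same placeholder sample.png image as in step 3; it simply counts how many edge pixels survive a few threshold pairs and is not part of the block.
      import cv2

      # Load the placeholder sample image in grayscale.
      image = cv2.imread('sample.png', cv2.IMREAD_GRAYSCALE)

      # Compare a few threshold pairs; higher thresholds keep fewer, stronger edges.
      for min_t, max_t in [(50, 150), (100, 200), (150, 300)]:
          edges = cv2.Canny(image, min_t, max_t)
          print(f'min={min_t}, max={max_t}: {int((edges > 0).sum())} edge pixels')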
  5. Connect the custom block with the other blocks in the same way the old Edge Detection block was connected.
  6. Click Launch or Express Launch and wait until the blocks are running. Once it becomes enabled, click the dashboard button in the lower-left corner. The dashboard opens in a new tab, where you can see the original video along with the edge detection results.
  7. Use live updates to tune the results while the pipeline is running.
    • Click the settings icon on the custom block.
    • Adjust the values of Min Threshold and Max Threshold as needed. Note that after changing a property’s value, you must move the focus away from the field for the live update to take effect.
  8. Click Terminate to terminate the pipeline and release the resources.