Project author: knorth55

Project description: ROS package for Coral Edge TPU USB Accelerator
Language: Python
Repository: git://github.com/knorth55/coral_usb_ros.git
Created: 2019-09-14T08:05:34Z
Project community: https://github.com/knorth55/coral_usb_ros

License: Other



coral_usb_ros


ROS package for Coral Edge TPU USB Accelerator

Environment

  • Ubuntu 16.04 + Kinetic
  • Ubuntu 18.04 + Melodic
  • Ubuntu 20.04 + Noetic

If you want to run this on Ubuntu 14.04 + Indigo, please see the indigo branch.

If you want to run this on PR2, please see the pr2 branch.

Notice

Python 3.5 or later is required to run this package.

ROS node list

  • Object detector: edgetpu_object_detector.py
  • Face detector: edgetpu_face_detector.py
  • Human pose estimator: edgetpu_human_pose_estimator.py
  • Semantic segmenter: edgetpu_semantic_segmenter.py
  • Panorama object detector: edgetpu_panorama_object_detector.py
  • Panorama face detector: edgetpu_panorama_face_detector.py
  • Panorama human pose estimator: edgetpu_panorama_human_pose_estimator.py
  • Panorama semantic segmenter: edgetpu_panorama_semantic_segmenter.py
  • Node manager: edgetpu_node_manager.py

For more information on each node, see the ROS node information section below.

Setup

Edge TPU dependencies installation

Install the Edge TPU runtime

  1. echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
  2. curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  3. sudo apt-get update
  4. # If you do not have USB3, install libedgetpu1-legacy-std
  5. sudo apt-get install libedgetpu1-legacy-max # Choose <YES> when asked
  6. sudo apt-get install python3-edgetpu
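
To verify the installation, you can query the runtime version from Python. This is a minimal sketch; GetRuntimeVersion comes from the edgetpu Python bindings installed above:

  #!/usr/bin/env python3
  # Sanity check: print the installed Edge TPU runtime version.
  from edgetpu.basic.edgetpu_utils import GetRuntimeVersion

  print(GetRuntimeVersion())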

Install just the TensorFlow Lite interpreter (Kinetic)

  1. sudo apt-get install python3-pip
  2. wget https://dl.google.com/coral/python/tflite_runtime-1.14.0-cp35-cp35m-linux_x86_64.whl
  3. pip3 install tflite_runtime-1.14.0-cp35-cp35m-linux_x86_64.whl

Install just the TensorFlow Lite interpreter (Melodic)

  1. sudo apt-get install python3-pip
  2. wget https://dl.google.com/coral/python/tflite_runtime-1.14.0-cp36-cp36m-linux_x86_64.whl
  3. pip3 install tflite_runtime-1.14.0-cp36-cp36m-linux_x86_64.whl

Install just the TensorFlow Lite interpreter (Noetic)

  1. sudo apt-get install python3-tflite-runtime

For more information, see the official Coral documentation.
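
To confirm that the TensorFlow Lite interpreter can actually reach the Edge TPU, one quick check is to load the Edge TPU delegate. This is a minimal sketch, assuming a tflite_runtime version that provides load_delegate; libedgetpu.so.1 is installed by the libedgetpu1 package above:

  #!/usr/bin/env python3
  # Sanity check: load the Edge TPU delegate for TensorFlow Lite.
  from tflite_runtime.interpreter import load_delegate

  # Raises an error if the Edge TPU library or device is unavailable.
  delegate = load_delegate('libedgetpu.so.1')
  print('Edge TPU delegate loaded')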

Workspace build

Workspace build (Kinetic)

  1. sudo apt-get install python3-catkin-pkg-modules python3-rospkg-modules python3-venv python3-empy
  2. sudo apt-get install ros-kinetic-opencv3
  3. sudo apt-get install ros-kinetic-catkin
  4. pip3 install --user opencv-python==4.2.0.32 numpy\<1.19.0
  5. source /opt/ros/kinetic/setup.bash
  6. mkdir -p ~/coral_ws/src
  7. cd ~/coral_ws/src
  8. git clone https://github.com/knorth55/coral_usb_ros.git
  9. wstool init
  10. wstool merge coral_usb_ros/fc.rosinstall
  11. wstool merge coral_usb_ros/fc.rosinstall.kinetic
  12. wstool update
  13. rosdep install --from-paths . --ignore-src -y -r
  14. cd ~/coral_ws
  15. catkin init
  16. catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.5m -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.5m.so
  17. catkin build

Workspace build (Melodic)

  1. sudo apt-get install python3-catkin-pkg-modules python3-rospkg-modules python3-venv python3-empy
  2. sudo apt-get install python3-opencv
  3. sudo apt-get install ros-melodic-catkin
  4. source /opt/ros/melodic/setup.bash
  5. mkdir -p ~/coral_ws/src
  6. cd ~/coral_ws/src
  7. git clone https://github.com/knorth55/coral_usb_ros.git
  8. wstool init
  9. wstool merge coral_usb_ros/fc.rosinstall
  10. wstool merge coral_usb_ros/fc.rosinstall.melodic
  11. wstool update
  12. rosdep install --from-paths . --ignore-src -y -r
  13. cd ~/coral_ws
  14. catkin init
  15. catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so
  16. catkin build

Workspace build (Noetic)

  1. sudo apt-get install ros-noetic-catkin
  2. source /opt/ros/noetic/setup.bash
  3. mkdir -p ~/coral_ws/src
  4. cd ~/coral_ws/src
  5. git clone https://github.com/knorth55/coral_usb_ros.git
  6. wstool init
  7. wstool merge coral_usb_ros/fc.rosinstall
  8. wstool update
  9. rosdep install --from-paths . --ignore-src -y -r
  10. cd ~/coral_ws
  11. catkin init
  12. catkin build

Model download

  1. source ~/coral_ws/devel/setup.bash
  2. roscd coral_usb/scripts
  3. rosrun coral_usb download_models.py

Model training with your dataset

Please see the training documentation in this repository for more detailed information.

Add Device Access Permission

You need to add your account to the plugdev group. To apply this change, you need to re-login or run exec su -l $(whoami).

  1. sudo adduser $(whoami) plugdev
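
After re-logging in, you can check that the accelerator is visible without root privileges. This is a minimal sketch using the edgetpu Python API installed in the setup section:

  #!/usr/bin/env python3
  # List Edge TPU devices visible to the current user.
  from edgetpu.basic import edgetpu_utils

  paths = edgetpu_utils.ListEdgeTpuPaths(
      edgetpu_utils.EDGE_TPU_STATE_UNASSIGNED)
  if paths:
      print('Found Edge TPU device(s): {}'.format(list(paths)))
  else:
      print('No Edge TPU found: check the USB connection and permissions')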

Demo

Please choose one of these demos.

USB or Laptop camera demo

Run all of the following commands in separate terminals, in parallel.

Run roscore

  1. roscore

Run usb_cam for normal image

  1. # source normal workspace, not edge tpu workspace
  2. source /opt/ros/${ROS_DISTRO}/setup.bash
  3. rosrun usb_cam usb_cam_node

Run Edge TPU launch

  1. # source edge tpu workspace
  2. # THIS IS VERY IMPORTANT FOR MELODIC: this sets /opt/ros/${ROS_DISTRO}/lib/python2.7/dist-packages in $PYTHONPATH
  3. source /opt/ros/${ROS_DISTRO}/setup.bash
  4. # THIS PUTS devel/lib/python3/dist-packages in front of /opt/ros/${ROS_DISTRO}/lib/python2.7/dist-packages
  5. source ~/coral_ws/devel/setup.bash
  6. # object detector
  7. roslaunch coral_usb edgetpu_object_detector.launch INPUT_IMAGE:=/usb_cam/image_raw
  8. # face detector
  9. roslaunch coral_usb edgetpu_face_detector.launch INPUT_IMAGE:=/usb_cam/image_raw
  10. # human pose estimator
  11. roslaunch coral_usb edgetpu_human_pose_estimator.launch INPUT_IMAGE:=/usb_cam/image_raw
  12. # semantic segmenter
  13. roslaunch coral_usb edgetpu_semantic_segmenter.launch INPUT_IMAGE:=/usb_cam/image_raw

To subscribe to a compressed input image, use IMAGE_TRANSPORT:=compressed

  1. roslaunch coral_usb edgetpu_object_detector.launch INPUT_IMAGE:=/image_publisher/output IMAGE_TRANSPORT:=compressed

Run image_view

  1. # source normal workspace, not edge tpu workspace
  2. source /opt/ros/${ROS_DISTRO}/setup.bash
  3. # object detector
  4. rosrun image_view image_view image:=/edgetpu_object_detector/output/image
  5. # face detector
  6. rosrun image_view image_view image:=/edgetpu_face_detector/output/image
  7. # human pose estimator
  8. rosrun image_view image_view image:=/edgetpu_human_pose_estimator/output/image
  9. # semantic segmenter
  10. rosrun image_view image_view image:=/edgetpu_semantic_segmenter/output/image
  11. # panorama object detector
  12. rosrun image_view image_view image:=/edgetpu_panorama_object_detector/output/image
  13. # panorama face detector
  14. rosrun image_view image_view image:=/edgetpu_panorama_face_detector/output/image
  15. # panorama human pose estimator
  16. rosrun image_view image_view image:=/edgetpu_panorama_human_pose_estimator/output/image
  17. # panorama semantic segmenter
  18. rosrun image_view image_view image:=/edgetpu_panorama_semantic_segmenter/output/image

To subscribe to a compressed output image, set the ~image_transport parameter to compressed

  1. rosrun image_view image_view image:=/edgetpu_object_detector/output/image _image_transport:=compressed

No camera demo

Run all of the following commands in separate terminals, in parallel.

Run roscore

  1. roscore

Run image_publisher for virtual camera

  1. # source normal workspace, not edge tpu workspace
  2. source /opt/ros/${ROS_DISTRO}/setup.bash
  3. rosrun jsk_perception image_publisher.py _file_name:=$(rospack find jsk_perception)/sample/object_detection_example_1.jpg

Run Edge TPU launch

  1. # source edge tpu workspace
  2. # THIS IS VERY IMPORTANT FOR MELODIC: this sets /opt/ros/${ROS_DISTRO}/lib/python2.7/dist-packages in $PYTHONPATH
  3. source /opt/ros/${ROS_DISTRO}/setup.bash
  4. # THIS PUTS devel/lib/python3/dist-packages in front of /opt/ros/${ROS_DISTRO}/lib/python2.7/dist-packages
  5. source ~/coral_ws/devel/setup.bash
  6. # object detector
  7. roslaunch coral_usb edgetpu_object_detector.launch INPUT_IMAGE:=/image_publisher/output
  8. # face detector
  9. roslaunch coral_usb edgetpu_face_detector.launch INPUT_IMAGE:=/image_publisher/output
  10. # human pose estimator
  11. roslaunch coral_usb edgetpu_human_pose_estimator.launch INPUT_IMAGE:=/image_publisher/output
  12. # semantic segmenter
  13. roslaunch coral_usb edgetpu_semantic_segmenter.launch INPUT_IMAGE:=/image_publisher/output

Run image_view

  1. # source normal workspace, not edge tpu workspace
  2. source /opt/ros/${ROS_DISTRO}/setup.bash
  3. # object detector
  4. rosrun image_view image_view image:=/edgetpu_object_detector/output/image
  5. # face detector
  6. rosrun image_view image_view image:=/edgetpu_face_detector/output/image
  7. # human pose estimator
  8. rosrun image_view image_view image:=/edgetpu_human_pose_estimator/output/image
  9. # semantic segmenter
  10. rosrun image_view image_view image:=/edgetpu_semantic_segmenter/output/image
  11. # panorama object detector
  12. rosrun image_view image_view image:=/edgetpu_panorama_object_detector/output/image
  13. # panorama face detector
  14. rosrun image_view image_view image:=/edgetpu_panorama_face_detector/output/image
  15. # panorama human pose estimator
  16. rosrun image_view image_view image:=/edgetpu_panorama_human_pose_estimator/output/image
  17. # panorama semantic segmenter
  18. rosrun image_view image_view image:=/edgetpu_panorama_semantic_segmenter/output/image

Insta360 Air camera demo

Run all of the following commands in separate terminals, in parallel.

Run roscore

  1. roscore

Run Insta360 Air for panorama images

  1. # source normal workspace, not edge tpu workspace
  2. source /opt/ros/${ROS_DISTRO}/setup.bash
  3. roslaunch jsk_perception sample_insta360_air.launch gui:=false

Run Edge TPU launch

For the Insta360 Air panorama image

  1. # source edge tpu workspace
  2. # THIS IS VERY IMPORTANT FOR MELODIC: this sets /opt/ros/${ROS_DISTRO}/lib/python2.7/dist-packages in $PYTHONPATH
  3. source /opt/ros/${ROS_DISTRO}/setup.bash
  4. # THIS PUTS devel/lib/python3/dist-packages in front of /opt/ros/${ROS_DISTRO}/lib/python2.7/dist-packages
  5. source ~/coral_ws/devel/setup.bash
  6. # panorama object detector
  7. roslaunch coral_usb edgetpu_panorama_object_detector.launch INPUT_IMAGE:=/dual_fisheye_to_panorama/output
  8. # panorama face detector
  9. roslaunch coral_usb edgetpu_panorama_face_detector.launch INPUT_IMAGE:=/dual_fisheye_to_panorama/output
  10. # panorama human pose estimator
  11. roslaunch coral_usb edgetpu_panorama_human_pose_estimator.launch INPUT_IMAGE:=/dual_fisheye_to_panorama/output
  12. # panorama semantic segmenter
  13. roslaunch coral_usb edgetpu_panorama_semantic_segmenter.launch INPUT_IMAGE:=/dual_fisheye_to_panorama/output

Run image_view

  1. # source normal workspace, not edge tpu workspace
  2. source /opt/ros/${ROS_DISTRO}/setup.bash
  3. # object detector
  4. rosrun image_view image_view image:=/edgetpu_object_detector/output/image
  5. # face detector
  6. rosrun image_view image_view image:=/edgetpu_face_detector/output/image
  7. # human pose estimator
  8. rosrun image_view image_view image:=/edgetpu_human_pose_estimator/output/image
  9. # semantic segmenter
  10. rosrun image_view image_view image:=/edgetpu_semantic_segmenter/output/image
  11. # panorama object detector
  12. rosrun image_view image_view image:=/edgetpu_panorama_object_detector/output/image
  13. # panorama face detector
  14. rosrun image_view image_view image:=/edgetpu_panorama_face_detector/output/image
  15. # panorama human pose estimator
  16. rosrun image_view image_view image:=/edgetpu_panorama_human_pose_estimator/output/image
  17. # panorama semantic segmenter
  18. rosrun image_view image_view image:=/edgetpu_panorama_semantic_segmenter/output/image

ROS node information

Object detector: edgetpu_object_detector.py


Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/rects (jsk_recognition_msgs/RectArray)

    • Rectangles of detected objects
  • ~output/class (jsk_recognition_msgs/ClassificationResult)

    • Classification results of detected objects
  • ~output/image (sensor_msgs/Image)

    • Visualization of detection results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images

Dynamic parameters

  • ~score_thresh: (Float, default: 0.6)

    • Score threshold for object detection
  • ~top_k: (Int, default: 100)

    • Maximum number of detected objects
  • ~model_file (String, default: package://coral_usb/models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite)

    • Model file path
  • ~label_file (String, default: package://coral_usb/models/coco_labels.txt)

    • Label file path.
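
As a consumer-side example, the sketch below pairs ~output/rects with ~output/class and prints each detection. This is a minimal, hypothetical node, not part of the package; the topic names assume the default edgetpu_object_detector namespace, and both outputs are stamped from the same input image, so an exact-time synchronizer can pair them:

  #!/usr/bin/env python3
  # Minimal consumer for edgetpu_object_detector results.
  import message_filters
  import rospy
  from jsk_recognition_msgs.msg import ClassificationResult, RectArray


  def callback(rects_msg, class_msg):
      # Rectangles and labels are index-aligned.
      for rect, name, proba in zip(
              rects_msg.rects, class_msg.label_names, class_msg.label_proba):
          rospy.loginfo('%s (%.2f): x=%d y=%d w=%d h=%d',
                        name, proba, rect.x, rect.y, rect.width, rect.height)


  rospy.init_node('detection_printer')
  rects_sub = message_filters.Subscriber(
      '/edgetpu_object_detector/output/rects', RectArray)
  class_sub = message_filters.Subscriber(
      '/edgetpu_object_detector/output/class', ClassificationResult)
  sync = message_filters.TimeSynchronizer([rects_sub, class_sub], 10)
  sync.registerCallback(callback)
  rospy.spin()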

Face detector: edgetpu_face_detector.py


Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/rects (jsk_recognition_msgs/RectArray)

    • Rectangles of detected faces
  • ~output/class (jsk_recognition_msgs/ClassificationResult)

    • Classification results of detected faces
  • ~output/image (sensor_msgs/Image)

    • Visualization of detection results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images

Dynamic parameters

  • ~score_thresh: (Float, default: 0.6)

    • Score threshold for face detection
  • ~top_k: (Int, default: 100)

    • Maximum number of detected faces
  • ~model_file (String, default: package://coral_usb/models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite)

    • Model file path

Human pose estimator: edgetpu_human_pose_estimator.py


Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/poses (jsk_recognition_msgs/PeoplePoseArray)

    • Estimated human poses
  • ~output/rects (jsk_recognition_msgs/RectArray)

    • Rectangles of detected humans
  • ~output/class (jsk_recognition_msgs/ClassificationResult)

    • Classification results of detected humans
  • ~output/image (sensor_msgs/Image)

    • Visualization of estimation results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~model_file (String, default: package://coral_usb/python/coral_usb/posenet/models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite)

    • Model file path
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images

Dynamic parameters

  • ~score_thresh: (Float, default: 0.2)

    • Score threshold for human pose estimation
  • ~joint_score_thresh: (Float, default: 0.2)

    • Score threshold of each joint for human pose estimation
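
As with the detectors, the estimator's output can be consumed from a small rospy node. The sketch below is hypothetical and assumes the default edgetpu_human_pose_estimator namespace; in each PeoplePose, limb_names, poses, and scores are index-aligned:

  #!/usr/bin/env python3
  # Minimal consumer for edgetpu_human_pose_estimator results.
  import rospy
  from jsk_recognition_msgs.msg import PeoplePoseArray


  def callback(msg):
      for i, person in enumerate(msg.poses):
          # limb_names, poses and scores are index-aligned per person.
          for name, pose, score in zip(
                  person.limb_names, person.poses, person.scores):
              rospy.loginfo('person %d: %s (%.2f) at (%.1f, %.1f)',
                            i, name, score, pose.position.x, pose.position.y)


  rospy.init_node('pose_printer')
  rospy.Subscriber('/edgetpu_human_pose_estimator/output/poses',
                   PeoplePoseArray, callback)
  rospy.spin()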

Semantic segmenter: edgetpu_semantic_segmenter.py


Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/label (sensor_msgs/Image)

    • Estimated label image
  • ~output/image (sensor_msgs/Image)

    • Visualization of estimation results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~model_file (String, default: package://coral_usb/models/deeplabv3_mnv2_pascal_quant_edgetpu.tflite)

    • Model file path
  • ~label_file (String, default: None)

    • Label file path. The pascal_voc labels are used by default.
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images
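
The ~output/label image encodes one class index per pixel and can be decoded with cv_bridge. This is a minimal, hypothetical consumer; the encoding is passed through as published and the topic name assumes the default namespace:

  #!/usr/bin/env python3
  # Minimal consumer for edgetpu_semantic_segmenter label images.
  import numpy as np
  import rospy
  from cv_bridge import CvBridge
  from sensor_msgs.msg import Image

  bridge = CvBridge()


  def callback(msg):
      # One class index per pixel; np.unique lists the classes in view.
      label = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
      rospy.loginfo('class indices in view: %s', np.unique(label).tolist())


  rospy.init_node('label_printer')
  rospy.Subscriber('/edgetpu_semantic_segmenter/output/label', Image, callback)
  rospy.spin()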

Panorama object detector: edgetpu_panorama_object_detector.py


Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/rects (jsk_recognition_msgs/RectArray)

    • Rectangles of detected objects
  • ~output/class (jsk_recognition_msgs/ClassificationResult)

    • Classification results of detected objects
  • ~output/image (sensor_msgs/Image)

    • Visualization of detection results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images

Dynamic parameters

  • ~score_thresh: (Float, default: 0.6)

    • Score threshold for object detection
  • ~top_k: (Int, default: 100)

    • Maximum number of detected objects
  • ~model_file (String, default: package://coral_usb/models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite)

    • Model file path
  • ~label_file (String, default: package://coral_usb/models/coco_labels.txt)

    • Label file path.
  • ~n_split (Int, default: 3)

    • Number of images into which one large panorama image is split.
  • ~overlap (Bool, default: True)

    • Run recognition on overlapping split images.
  • ~nms (Bool, default: True)

    • Whether to use non-maximum suppression for overlapping detections.
  • ~nms_thresh (Double, default: 0.3)

    • Non-maximum suppression threshold
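
To illustrate how ~n_split, ~overlap and ~nms fit together: the panorama is cut into n_split crops (plus half-shifted crops when overlap is enabled, so objects sitting on a seam are still seen whole), the detector runs per crop, and duplicate detections from overlapping crops are merged by non-maximum suppression. The sketch below uses hypothetical helper names and is not the package's actual implementation:

  import numpy as np


  def split_panorama(width, n_split=3, overlap=True):
      # Return (x_offset, crop_width) windows covering the panorama width.
      w = width // n_split
      offsets = [i * w for i in range(n_split)]
      if overlap:
          # Extra windows shifted by half a crop to cover the seams.
          offsets += [i * w + w // 2 for i in range(n_split - 1)]
      return [(x, w) for x in offsets]


  def nms(boxes, scores, thresh=0.3):
      # Greedy IoU-based non-maximum suppression.
      # boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) array.
      order = np.argsort(scores)[::-1]
      keep = []
      while order.size > 0:
          i = order[0]
          keep.append(i)
          xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
          yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
          xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
          yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
          inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
          area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
          rest = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
          iou = inter / (area_i + rest - inter)
          order = order[1:][iou <= thresh]
      return keep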

Panorama face detector: edgetpu_panorama_face_detector.py


Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/rects (jsk_recognition_msgs/RectArray)

    • Rectangles of detected faces
  • ~output/class (jsk_recognition_msgs/ClassificationResult)

    • Classification results of detected faces
  • ~output/image (sensor_msgs/Image)

    • Visualization of detection results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images

Dynamic parameters

  • ~score_thresh: (Float, default: 0.6)

    • Score threshold for face detection
  • ~top_k: (Int, default: 100)

    • Maximum number of detected faces
  • ~model_file (String, default: package://coral_usb/models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite)

    • Model file path
  • ~n_split (Int, default: 3)

    • Number of images into which one large panorama image is split.
  • ~overlap (Bool, default: True)

    • Run recognition on overlapping split images.
  • ~nms (Bool, default: True)

    • Whether to use non-maximum suppression for overlapping detections.
  • ~nms_thresh (Double, default: 0.3)

    • Non-maximum suppression threshold

Panorama human pose estimator: edgetpu_panorama_human_pose_estimator.py


Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/poses (jsk_recognition_msgs/PeoplePoseArray)

    • Estimated human poses
  • ~output/rects (jsk_recognition_msgs/RectArray)

    • Rectangles of detected humans
  • ~output/class (jsk_recognition_msgs/ClassificationResult)

    • Classification results of detected humans
  • ~output/image (sensor_msgs/Image)

    • Visualization of estimation results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~model_file (String, default: package://coral_usb/python/coral_usb/posenet/models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite)

    • Model file path
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images

Dynamic parameters

  • ~score_thresh: (Float, default: 0.2)

    • Score threshold for human pose estimation
  • ~joint_score_thresh: (Float, default: 0.2)

    • Score threshold of each joint for human pose estimation
  • ~n_split (Int, default: 3)

    • Number of images into which one large panorama image is split.
  • ~overlap (Bool, default: True)

    • Run recognition on overlapping split images.

Panorama semantic segmenter: edgetpu_panorama_semantic_segmenter.py

Subscribing Topic

  • ~input/image (sensor_msgs/Image)

    • Input image

Publishing Topic

  • ~output/label (sensor_msgs/Image)

    • Estimated label image
  • ~output/image (sensor_msgs/Image)

    • Visualization of estimation results

Parameters

  • ~classifier_name (String, default: rospy.get_name())

    • Classifier name
  • ~model_file (String, default: package://coral_usb/models/deeplabv3_mnv2_pascal_quant_edgetpu.tflite)

    • Model file path
  • ~label_file (String, default: None)

    • Label file path. The pascal_voc labels are used by default.
  • ~enable_visualization (Bool, default: True)

    • Whether to enable visualization or not
  • ~visualize_duration (Float, default: 0.1)

    • Time duration for visualization
  • ~image_transport: (String, default: raw)

    • Set to compressed to subscribe to compressed images

Dynamic parameters

  • ~n_split (Int, default: 3)

    • Number of images into which one large panorama image is split.

Node manager: edgetpu_node_manager.py

You can see the sample launch file edgetpu_node_manager.launch.

Parameters

  • ~nodes: (Dict, default: {})

    • Dictionary of node names and types.
    • type can be one of the following:
      • edgetpu_object_detector
      • edgetpu_face_detector
      • edgetpu_human_pose_estimator
      • edgetpu_semantic_segmenter
      • edgetpu_panorama_object_detector
      • edgetpu_panorama_face_detector
      • edgetpu_panorama_human_pose_estimator
      • edgetpu_panorama_semantic_segmenter
    • Parameters for each node can be set under the name namespace.
  • ~default: (String, default: None)

    • Default node name
  • ~prefix: (String, default: '')

    • Prefix for each node

Service

  • ~start: (coral_usb/StartNode)

    • Start node by node name
  • ~stop: (coral_usb/StopNode)

    • Stop the currently running node
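
Nodes can then be switched at runtime through these services. The sketch below is a minimal, hypothetical client: the request field names are assumptions, so check the srv definitions in the coral_usb package, and the manager is assumed to run as /edgetpu_node_manager:

  #!/usr/bin/env python3
  # Switch the active Edge TPU node via the node manager services.
  import rospy
  from coral_usb.srv import StartNode, StopNode

  rospy.init_node('node_manager_client')
  rospy.wait_for_service('/edgetpu_node_manager/start')
  start = rospy.ServiceProxy('/edgetpu_node_manager/start', StartNode)
  stop = rospy.ServiceProxy('/edgetpu_node_manager/stop', StopNode)

  stop()                                 # stop the currently running node
  start(name='edgetpu_object_detector')  # start a node by its configured name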