Author: hav4ik

Description: A Python framework for designing high-performance Computer Vision pipelines at the Edge. Supports Coral Edge TPU, Raspberry Pi Camera, and more.

Language: Python

Repository: git://github.com/hav4ik/eyesight.git

Created: 2020-03-22T11:24:59Z

Project page: https://github.com/hav4ik/eyesight

License: Apache License 2.0


Eyesight


Eyesight is a high-level, minimalistic framework for Computer Vision on the Edge, written in Python. It was developed with a focus on performance. It supports Coral Edge TPU, Pi Camera, and more.

Installation

Currently, this package is not uploaded to PyPI. The easiest way to install it right now is:

```shell
$ git clone https://github.com/hav4ik/eyesight.git
$ cd eyesight/
$ pip install -e .
```

This will install the package in your local environment (the best practice is to use either Conda or virtualenv to create a dedicated Python environment).
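For example, using the standard `venv` module (the environment name `eyesight-env` is just a placeholder, pick any name you like):

```shell
# Create and activate an isolated environment, then install eyesight in editable mode
python3 -m venv eyesight-env
source eyesight-env/bin/activate
git clone https://github.com/hav4ik/eyesight.git
cd eyesight/
pip install -e .
```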

Basics

The EyeSight framework allows developers to define an asynchronous computation graph for Computer Vision pipelines without
having to worry about thread management, timestamp synchronization, locking mechanisms, and other low-level plumbing. The framework
consists of the following elements:

  • Packages are the units of data passed between computation nodes. A package can hold any type of Python
    object (images, numpy arrays, pandas dataframes, etc.) together with the full time-stamped history of its processing.
    For example, if a collection of bounding boxes was produced by an object detection node that took its input
    images from two cameras, then the package holding these boxes will include timestamps from both cameras and
    from the detection node, in chronological order.

  • Services are the minimal computation units in the EyeSight graph, each represented as a node. A service takes the outputs of its
    input nodes (an image, a numpy array, or anything else) and feeds its own output to its child nodes (services).

  • Adapters control the input data stream into each Service. An adapter decides the timestamp synchronization strategy
    for the input data.

  • The Manager manages the services assigned to it. Although services are self-sustained (e.g. if you turn off
    an input service, it will turn off all services that depend on its outputs), having a service manager
    is always handy.
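The package concept above can be illustrated with a small sketch. Note that this is a conceptual illustration only, NOT the actual eyesight API: the `Package` class and its `stamped` method here are hypothetical names chosen to show how a payload can travel with the time-stamped history of every node that processed it.

```python
import time

# Conceptual sketch only -- not the real eyesight API.
class Package:
    def __init__(self, payload, history=None):
        self.payload = payload
        # list of (node_name, timestamp) tuples, in chronological order
        self.history = list(history or [])

    def stamped(self, node_name, payload=None):
        """Return a new package with this node appended to the history."""
        new_payload = self.payload if payload is None else payload
        return Package(new_payload, self.history + [(node_name, time.time())])

# A frame passes through a camera node, then a detector node; the resulting
# package carries both timestamps in chronological order.
frame = Package("raw-frame").stamped("camera")
boxes = frame.stamped("detector", payload=[(10, 20, 50, 60)])
print([name for name, _ in boxes.history])  # ['camera', 'detector']
```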

Both Services and Adapters are implemented in a lock-free, no-copy fashion, so the exposed variables are
thread-safe. However, they sometimes rely on Python's internal locking mechanism, which is not always suitable for a
computer vision pipeline and can significantly slow it down. For this reason, the framework
uses RW (read-write) locks everywhere by default.
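For readers unfamiliar with RW locks, here is a minimal readers-writer lock sketch built from standard `threading` primitives. It illustrates the general idea (many concurrent readers, exclusive writers); eyesight's own implementation may differ.

```python
import threading

# Minimal readers-writer lock sketch: any number of readers may hold the
# lock at once, but a writer needs exclusive access.
class RWLock:
    def __init__(self):
        self._readers = 0
        self._counter_lock = threading.Lock()  # protects the reader count
        self._writer_lock = threading.Lock()   # held while readers or a writer are active

    def acquire_read(self):
        with self._counter_lock:
            self._readers += 1
            if self._readers == 1:
                self._writer_lock.acquire()    # first reader blocks writers

    def release_read(self):
        with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer_lock.release()    # last reader lets writers in

    def acquire_write(self):
        self._writer_lock.acquire()

    def release_write(self):
        self._writer_lock.release()
```

With this scheme, reads never block each other, which suits a pipeline where many services consume the same frame while only one producer updates it.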

Minimal Example

This is a minimal example of constructing an EyeSight computation graph. The full example can be found in eyesight/examples/demo_basic.py.

```python
import eyesight
import eyesight.services as services

# Raspberry Pi camera input node
camera = services.PiCamera()

# MobileNetV2 SSD COCO applied to the camera's outputs
detector = services.ObjectDetector(camera)

# Aside from detection, we also want to track motion between frames
tracker = services.OpticalFlowLucasKanade(camera)

# Output visualization
composer = services.DetectronDraw(
    image_stream=camera, detector=detector, tracker=tracker)

# ServiceManager will automatically detect the service's dependencies
# and include them recursively as well
manager = eyesight.ServiceManager(composer)
```

Troubleshooting

  • I’m using TensorFlow Lite and there’s an FPS drop / memory leak.
    This is probably not a problem with EyeSight, but with TensorFlow. Update TensorFlow to version 2.3.0.
    If you are using it with the EdgeTPU, then any version of TensorFlow or tflite_runtime should work.