Paper Augmented Reality Toolkit - interactive projection for Processing
PapARt is a software development kit (SDK) that enables the creation of interactive projection mapping.
It is a long-running project by Jeremy Laviole, created at Inria and Bordeaux University; the latest updates
come from CATIE and some personal time.
It comes from the augmented physical drawing tools created by Jeremy Laviole, which are documented in his PhD thesis (free to read).
You can switch easily between AR on top of video (SeeThrough) and
projected AR by changing just a few lines of code.
The native tracking in PapARt is based on ARToolkitPlus; nowadays, higher-quality
marker tracking is available through ARUCO.
We have built-in support for color detection in the RGB, HSV, and CIE XYZ color spaces.
The latest examples use custom circular tracking for colored stickers, which provides
both position and orientation.
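To illustrate the kind of color-space work involved, here is a minimal, self-contained sketch of HSV-based color matching, the usual approach for detecting colored stickers. This is an illustration only, not PapARt's actual code; the thresholds and the `matchesHue` helper are assumptions for the example.

```java
// Sketch of HSV-based color detection, as typically used for sticker tracking.
// Thresholds below are illustrative, not PapARt's actual values.
public class HsvDetect {
    // Convert 8-bit RGB to HSV (h in [0, 360), s and v in [0, 1]).
    static float[] rgbToHsv(int r, int g, int b) {
        float rf = r / 255f, gf = g / 255f, bf = b / 255f;
        float max = Math.max(rf, Math.max(gf, bf));
        float min = Math.min(rf, Math.min(gf, bf));
        float delta = max - min;
        float h;
        if (delta == 0)     h = 0;
        else if (max == rf) h = 60 * (((gf - bf) / delta) % 6);
        else if (max == gf) h = 60 * (((bf - rf) / delta) + 2);
        else                h = 60 * (((rf - gf) / delta) + 4);
        if (h < 0) h += 360;
        float s = (max == 0) ? 0 : delta / max;
        return new float[] { h, s, max };
    }

    // A pixel "matches" a target hue if it is close in hue and saturated/bright
    // enough to be a sticker rather than background or shadow.
    static boolean matchesHue(int r, int g, int b, float targetHue, float tolerance) {
        float[] hsv = rgbToHsv(r, g, b);
        float dh = Math.abs(hsv[0] - targetHue);
        dh = Math.min(dh, 360 - dh);  // hue wraps around the color wheel
        return dh < tolerance && hsv[1] > 0.4f && hsv[2] > 0.3f;
    }

    public static void main(String[] args) {
        System.out.println(matchesHue(230, 40, 30, 0, 20));  // reddish pixel vs red target
        System.out.println(matchesHue(30, 40, 230, 0, 20));  // blue pixel vs red target
    }
}
```

Working in HSV rather than RGB separates hue from brightness, which makes detection far more robust to lighting and projector interference.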
Many depth cameras are supported: Kinect, Orbbec Astra, and RealSense depth cameras (older models).
For these depth images, we provide two object-detection modes: a simple one that detects and tracks
objects over a plane, and a hierarchical one that detects and tracks arms, hands, and fingers.
The latter achieves high-quality finger tracking, but it is harder to tweak
and requires more hardware resources.
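The core idea behind the simple mode can be sketched in a few lines: keep the 3D points that stand above the table plane by more than a noise threshold. This is a minimal illustration of the principle, not PapARt's implementation; the plane, point cloud, and threshold values are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of plane-based object segmentation: candidate objects are
// depth points more than a noise threshold above the table plane.
public class PlaneSegment {
    // Plane defined by unit normal n = (nx, ny, nz) and offset d: n·x + d = 0.
    static double signedDistance(double[] p, double[] n, double d) {
        return n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d;
    }

    // Keep points farther than `threshold` above the plane.
    static List<double[]> above(List<double[]> cloud, double[] n, double d, double threshold) {
        List<double[]> out = new ArrayList<>();
        for (double[] p : cloud)
            if (signedDistance(p, n, d) > threshold)
                out.add(p);
        return out;
    }

    public static void main(String[] args) {
        double[] n = {0, 0, 1};  // table plane z = 0, normal pointing up
        List<double[]> cloud = new ArrayList<>();
        cloud.add(new double[]{10, 20, 1.0});   // sensor noise near the table
        cloud.add(new double[]{15, 25, 40.0});  // an object 40 mm above the table
        List<double[]> objects = above(cloud, n, 0, 5.0);  // 5 mm noise threshold
        System.out.println(objects.size());  // 1
    }
}
```

The remaining points are then clustered and tracked over time; the hierarchical mode adds further structure (arm, hand, finger) on top of this segmentation.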
We use a fork of ControlP5 called Skatolo, which is updated to handle multiple “touch” events
instead of a single cursor and click. The detected and tracked elements can be used to activate
widgets: buttons, toggles, and sliders.
We follow the Processing APIs, using millimeters instead of pixels.
An interesting consequence is that rendering can be adjusted to the
hardware capabilities and projector location.
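The millimeter-based coordinate idea can be illustrated as follows (this is not PapARt's internal API, just a sketch of the principle): drawing is specified in millimeters on the paper, and a scale factor derived from the rendering resolution maps it to pixels, so the same sketch adapts to different projectors.

```java
// Illustration of millimeter-based coordinates: a drawing specified in mm
// is mapped to pixels by a scale derived from the render resolution.
public class MmToPixels {
    final double pixelsPerMm;

    // e.g. an A4 sheet (297 mm wide) rendered at 1188 px wide -> 4 px/mm.
    MmToPixels(double surfaceWidthMm, int renderWidthPx) {
        this.pixelsPerMm = renderWidthPx / surfaceWidthMm;
    }

    double toPixels(double mm) {
        return mm * pixelsPerMm;
    }

    public static void main(String[] args) {
        MmToPixels a4 = new MmToPixels(297, 1188);
        System.out.println(a4.toPixels(10));  // a 10 mm line -> 40.0 px
    }
}
```

Because the sketch only ever speaks in millimeters, swapping in a higher-resolution projector or moving it closer simply changes the scale factor, not the application code.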
PapARt is built on top of Processing, OpenCV, JavaCV, and JavaCPP. Our calibration
boards use the SVG format and are created using Inkscape.
The latest updates rely on Redis, ARUCO, and other open source projects.
This library is an outcome enabled by many open source communities.
We include advanced examples of use:
After a two-year break, a new version comes to life. It integrates the Natar developments back
and builds a new structure from the micro-service creation experience.
The main update is support for a modern version of OpenCV (4.5.4), along with modern operating systems and machines.
This new support is made possible by the Processing community.
Support for other architectures and devices should be possible, notably Raspberry Pi or Android.
PapARt hardware from RealityTech will soon be distributed under a free license (Creative Commons).
The 3D models and sample calibrations for known hardware will be released.
The bill of materials will also be included, with projector, camera, screen support, and
recommended configuration.
Although RealityTech is no longer in operation, we can help with the creation of such devices;
for research or industrial projects with PapARt, you can contact us at CATIE: j.laviole@catie.fr .
Processing 4 is currently in beta; it brings support for Java 17.
We again support all major operating systems for this release.
Initially, Natar was the follow-up project of PapARt for larger projects.
Natar is a communication protocol for images based on Redis. It features support for
calibration files stored within Redis.
Full support and updates are in progress, and complete tutorials are yet to be created.
Updated support and the revival of Natar will be the goal of 1.7.
The last bits are now open, as RealityTech stopped its AR activity two years ago.
Most notably, the calibration tools used to create the hardware are now
available and will be documented.
The first public demonstrations were in 2011 at the “Palais de la découverte” in Paris, a few months
before the first paper was published.
The first steps were getting a projection working; then it snowballed:
In 2019, it slowed down to a few customer projects, and stopped dead for two years.
All of the basics are there. I got quite sick of this project after 8 years on it.
Now, new people come to projection-based AR and want to give it a go.
You will suffer from calibration issues until the guides are perfect, or new hardware is created and sold.
However, the tools offered by PapARt are broad enough to create a wide variety of experiences.
A few developer devices are out there (at least four) in universities; if the projectors and cameras were not salvaged,
the new guides could come in handy.
Aside from the research projects, two commercial applications are in use, and a few more should be created soon.
This project comes back to life from demand in research, from students, and from retail use.
I want to give it a push, and maybe also resurrect the devices as a kit to download, or to buy pre-built and assemble.
This new release brings many new features. It is now easier to place a PaperScreen on the table with
the new TableScreen class.
The color tracking, and particularly the circular tracking, is quite robust and enables the creation of
physical interfaces with a high detection rate. There will be a complete tutorial on how to create
a mixed-reality interface with touch and circle tracking.
We are working to improve the current API, as it will be part of the coming Nectar platform. The main
motivation for Nectar is to push the possibilities of SAR further with PapARt. Rendering will no longer
be limited to Processing, thanks to the Unity3D Nectar plugin. The plugin is in an
internal test/development phase and is already quite promising.
More in the example repository, 1.4rc branch.
The 1.4 and development versions are hosted on GitLab. You can request access if you collaborate with RealityTech, or use RealityTech hardware platforms.
The 1.3 version, sister of 1.4, will be free and publicly available on GitHub.
The first 2018 releases are 1.1 and 1.2. There are two major updates:
Other features:
The first big release is ready. If you want to try it out, download our precompiled version from the example repository.
This repository is for the development of the library.
You may want to go to the PapARt-Examples repository to see how to use it or discover the features and demos.
It enables the creation of Augmented Reality (AR) applications in Processing.
Like most AR toolkits, it is vision-based and detects the world using color cameras.
In addition to this, PapARt also uses a depth camera.
We generally use pre-calibrated cameras (intrinsic parameters), and PapARt enables extrinsic calibration: how cameras are located relative to one another. It also provides simple but imprecise tools to create intrinsic calibrations.
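What an extrinsic calibration encodes can be shown with a short sketch: a rigid transform (rotation plus translation, stored as a 4x4 homogeneous matrix) that maps points from one camera's coordinate frame into another's. The matrix values here are made up for illustration; this is the general technique, not PapARt's own code.

```java
// Sketch of applying an extrinsic calibration: a 4x4 rigid transform mapping
// 3D points from camera A's frame into camera B's frame.
public class Extrinsics {
    // Apply a 4x4 homogeneous transform to a 3D point.
    static double[] transform(double[][] m, double[] p) {
        double[] q = new double[3];
        for (int i = 0; i < 3; i++)
            q[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        return q;
    }

    public static void main(String[] args) {
        // Camera B sits 100 mm to the right of camera A, same orientation.
        double[][] aToB = {
            {1, 0, 0, -100},
            {0, 1, 0,    0},
            {0, 0, 1,    0},
            {0, 0, 0,    1},
        };
        double[] pInA = {250, 0, 500};   // point seen by camera A (mm)
        double[] pInB = transform(aToB, pInA);
        System.out.println(pInB[0]);     // 150.0
    }
}
```

The projector is calibrated the same way, as an "inverse camera": once its extrinsics are known, any 3D point seen by a camera can be re-projected onto the physical scene.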
It uses tracking libraries such as ARToolkit and OpenCV, and can be extended.
The strength of this library is the creation of interactive projection (also called spatial augmented reality in research).
In addition to cameras, PapARt calibrates the projector’s extrinsics to create projector/camera systems, also called ProCams.
Interactivity is increased thanks to object and hand tracking enabled by the depth camera.
More information about the research project here:
PapARt is a large library and works with many different systems:
The open source release is new (end of August 2016); feel free to fork, star, and file issues on these sources.
You can contribute your examples to the example repository to
share your creations with the community.
The distribution got better, and the next steps would be to create versions for Android and/or Raspberry Pi.
PapARt is open source software owned by Inria, Bordeaux University, and CNRS, distributed
under the LGPL license.