Pierre Narvor, 2018-03-09 14:00


03/09 : PoM/Prism Discussion

Participants :
Simon, Ellon, Quentin, Pierre

Goal of the meeting : Discuss the PoM/Prism architecture (previously InSitu). Prism is the sub-task of PoM that handles the internal transform tree of a robot (RobotBaseFrame to SensorFrame(s)).

Note : PoM is divided into two parts : Prism, which handles the pose of the sensors relative to the robot, and a second component (name undefined, default = MightyLocalizer) which handles the pose of the robot frame relative to the World Frame. This meeting is only about Prism.

Features of Prism

  • Load and keep up to date the RobotBaseFrame to SensorFrame(s) transform tree of the robot : a URDF configuration file is used for initialization. Prism is a client of moving sensor mounts (arms, platforms...) in order to update the internal transform tree. No memory. Handles transform uncertainty.
  • Sensor pose service : a sensor acquiring data must ask Prism for a time stamp and a sensor pose (= current transform RobotBaseFrame to SensorFrame + uncertainty). The pose and the time are stored by the sensor node alongside the sensor data (this pose will be used only with this data, hence no memory is needed in Prism).
  • Send time stamps to mighty-localizer for saving
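The sensor pose service above could be sketched as follows. This is purely illustrative: all names (PrismService, StampedPose, the dictionary-based transform tree) are assumptions, not actual PoM/Prism APIs, and poses are simplified to a translation, a quaternion, and a 6x6 covariance.

```python
from dataclasses import dataclass

@dataclass
class StampedPose:
    timestamp: float   # acquisition time handed out with the pose
    translation: tuple # RobotBaseFrame -> SensorFrame translation (x, y, z)
    rotation: tuple    # orientation as a quaternion (x, y, z, w)
    covariance: list   # 6x6 pose uncertainty, row-major

class PrismService:
    """Stateless service: answers pose queries, keeps no history."""

    def __init__(self, transform_tree):
        # frame name -> current transform from RobotBaseFrame
        self._tree = transform_tree

    def get_sensor_pose(self, sensor_frame, now):
        t = self._tree[sensor_frame]
        return StampedPose(now, t["translation"], t["rotation"], t["covariance"])

# A sensor node stores the returned pose alongside its data:
tree = {"LidarFrame": {"translation": (0.3, 0.0, 1.2),
                       "rotation": (0.0, 0.0, 0.0, 1.0),
                       "covariance": [[0.0] * 6 for _ in range(6)]}}
prism = PrismService(tree)
pose = prism.get_sensor_pose("LidarFrame", now=1520600400.0)
```

Since the pose is only ever used together with the data it was queried for, the service itself can stay memoryless, exactly as noted above.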

mighty-localizer

First, we have to keep a transform tree up to date. The tree used here is represented below:

The frames are the nodes of the graph, and the DFPCs that change them are shown in red. Some DFPCs give a transformation between the same frame at two successive times (WO for Wheel Odometry, VO for Visual Odometry), while other DFPCs give the transform between two frames at a given time.
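The graph just described could be represented as below: nodes are (frame, time) pairs, and each edge carries the transform estimated by a DFPC. This is a minimal sketch with illustrative names and 1-D transforms, not the actual data structure.

```python
class PoseGraph:
    """Nodes are (frame, time) pairs; edges carry DFPC transform estimates."""

    def __init__(self):
        self.nodes = set()  # (frame_name, time)
        self.edges = []     # (src_node, dst_node, dfpc_name, transform)

    def add_edge(self, src, dst, dfpc, transform):
        self.nodes.update([src, dst])
        self.edges.append((src, dst, dfpc, transform))

g = PoseGraph()
# WO: same frame at two successive times (transform reduced to a 1-D displacement)
g.add_edge(("RobotBaseFrame", 0.0), ("RobotBaseFrame", 0.1), "WO", 0.05)
# PG-SLAM: transform between two different frames at a given time
g.add_edge(("WorldFrame", 0.1), ("RobotBaseFrame", 0.1), "PG-SLAM", 1.23)
```

Both edge kinds mentioned above (same frame at successive times, two frames at one time) fit the same structure; only the node pair differs.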

The way we proposed to do it is to index the poses on the highest-frequency localization DFPC (Wheel Odometry in principle), and to timestamp each observation made by any sensor in order to keep track of the poses we need to memorize. Below is a small commented example over time:

At the beginning, no observations are made; only poses coming from the Wheel Odometry are added to the graph.

Observations are provided by both sensors, and each time an observation is produced, a timestamp is recorded in order to memorize the pose available at that time (coming from WO, since no other source has produced a pose at that moment).

After a while, a pose in the past is given by PG-SLAM. A corresponding edge is added to the graph. The changes can then be propagated to the rest of the poses, i.e. into the future.

The graph can then be pruned in order to remove unneeded poses [CORRECTION MIGHT BE NEEDED THERE]. Every pose corresponding to a timestamp is kept.
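The pruning rule above (drop poses no observation timestamp refers to, keep the rest) could be sketched as follows, with poses reduced to 1-D values; the function name and the rule of always keeping the latest pose are assumptions for illustration.

```python
def prune(poses, observation_stamps):
    """poses: {time: pose}. Keep every pose matching an observation
    timestamp, plus the most recent pose; drop the others."""
    latest = max(poses)
    keep = set(observation_stamps) | {latest}
    return {t: p for t, p in poses.items() if t in keep}

# Four WO poses, only one of which was stamped by an observation:
poses = {0.0: 0.0, 0.1: 0.05, 0.2: 0.11, 0.3: 0.18}
kept = prune(poses, observation_stamps=[0.1])
# The pose stamped by an observation (t=0.1) and the latest (t=0.3) remain.
```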

After a while, PG-SLAM produces a new pose corresponding to another LIDAR observation. How do we propagate then? Into the future AND the past? We might need to update the poses corresponding to stereo observations (e.g. if we want to produce the corresponding DEM). To me, each time a node receives an update, ALL edges leading to this node should be updated.

A bit more pruning. We still need to keep the poses corresponding to a timestamp!

If PG-SLAM updates all poses, we propagate from the most ancient one to the most recent one.
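The oldest-to-newest propagation could be sketched as below: when a past pose is corrected, the same offset is applied to every later pose in the odometry chain, in chronological order. This is a 1-D illustration under assumed names; real poses would compose transforms rather than add scalars.

```python
def propagate(poses, corrected_time, corrected_pose):
    """poses: {time: pose} chained by odometry. Apply the correction
    offset to every pose at or after corrected_time, oldest first."""
    offset = corrected_pose - poses[corrected_time]
    for t in sorted(poses):           # most ancient to most recent
        if t >= corrected_time:
            poses[t] += offset
    return poses

# Three WO-chained poses; PG-SLAM corrects the one at t=0.1:
poses = {0.0: 0.0, 0.1: 0.05, 0.2: 0.11}
propagate(poses, corrected_time=0.1, corrected_pose=0.08)
```

Poses before the corrected time are left untouched; everything after it is shifted by the same correction.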
