h1. 03/09 : PoM/Prism Discussion

*Participants:*
Simon, Ellon, Quentin, Pierre

*Goal of the meeting*: Discuss the PoM/Prism architecture (previously InSitu). Prism is the sub-task of PoM which handles the internal transform tree of a robot (RobotBaseFrame to SensorFrame(s)).

h2. Features

* *Monolithic structure, containing the entirety of the project:* this solution has the "all-in-one" advantage, but we would have to work out a way to compile only parts of the project.

* *Several "big" repos:* 1 for all the DFPCs, 1 for the Simu, 1 for the displays, 1 for the tools, 1 for the 3rd-party libraries, etc. This has the advantage of "easily" picking what you want, but it multiplies the sources and makes it hard to compile on every system if not done properly.

* *One repo = one thing:* 1 for EVERY DFPC, 1 for the Simu, 1 for every type of display, etc. This is too much.

h2. mighty-localizer

First, we have to keep a transform tree up to date. The tree used here is represented below:

!frames.png!

The frames are the nodes of the graph, and the DFPCs that change them are shown in red. Some DFPCs give a transformation between the same frame at two successive times (WO for Wheel Odometry and VO for Visual Odometry), while other DFPCs give the transform between two different frames at a given time.

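As a possible way to picture this in code, here is a minimal sketch of such a frame graph. All names (FrameId, TransformEdge, FrameGraph) are my own assumptions for illustration, not the actual PoM/Prism types, and the transform is simplified to a translation.

<pre><code class="cpp">
// Hypothetical sketch: frames are nodes, DFPC outputs are timestamped edges.
#include <cstdint>
#include <string>
#include <vector>

using FrameId   = std::string;   // e.g. "LocalTerrainFrame", "RobotBaseFrame", "LidarFrame"
using Timestamp = std::int64_t;  // microseconds, for instance

struct Transform {               // simplified: translation only; real code would use SE(3)
    double x, y, z;
};

struct TransformEdge {
    FrameId   parent, child;       // frames linked by this transform
    Timestamp parentTime, childTime; // equal for "two frames at one time" edges (e.g. PG-SLAM),
                                     // successive for "same frame at two times" edges (WO, VO)
    Transform transform;
    std::string producer;          // which DFPC produced it: "WO", "VO", "PG-SLAM", ...
};

struct FrameGraph {
    std::vector<FrameId>       frames;  // nodes
    std::vector<TransformEdge> edges;   // the red DFPC outputs in the figure
};
</code></pre>
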
The way we proposed to do it is to index the poses on the highest-frequency localization DFPC (Wheel Odometry, in principle), and to timestamp each observation made by any sensor in order to keep track of the poses we need to memorize. A minimal sketch of this bookkeeping follows, then a small commented example over time:

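The sketch below uses the same simplified assumptions as above; PoseHistory and its members are illustrative names, not a decided interface.

<pre><code class="cpp">
// Hypothetical bookkeeping: poses indexed at the Wheel Odometry rate, plus the
// set of timestamps at which an observation was made (poses we must keep).
#include <cstdint>
#include <map>
#include <set>

using Timestamp = std::int64_t;

struct Pose { double x, y, z; };   // simplified; real code would use SE(3)

struct PoseHistory {
    std::map<Timestamp, Pose> poses;             // indexed on the WO (highest-frequency) timestamps
    std::set<Timestamp>       observationTimes;  // timestamps of sensor observations to memorize

    void addOdometryPose(Timestamp t, const Pose& p) { poses[t] = p; }

    // Remember that an observation was made: the pose available at (or just
    // before) this time must not be pruned later.
    void registerObservation(Timestamp t) { observationTimes.insert(t); }
};
</code></pre>
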
!timeline1.png!

As we begin, no observations are made; only poses coming from the Wheel Odometry are added to the graph.

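As a sketch of this first phase (same simplified, assumed types as above), each WO delta is chained onto the previous pose and stored:

<pre><code class="cpp">
// Sketch of the initial phase: only Wheel Odometry runs; each delta is chained
// onto the previous pose and appended to the history. Translation-only
// simplification; names are illustrative assumptions.
#include <cstdint>
#include <map>

using Timestamp = std::int64_t;
struct Pose { double x, y, z; };

int main() {
    std::map<Timestamp, Pose> poses;     // WO-indexed pose history
    poses[0] = Pose{0, 0, 0};            // start at the origin of the local frame

    // Each WO output is a small displacement between two successive timestamps.
    struct WoDelta { Timestamp from, to; Pose delta; };
    const WoDelta stream[] = { {0, 100, {0.05, 0.0, 0.0}}, {100, 200, {0.05, 0.01, 0.0}} };

    for (const auto& d : stream) {
        const Pose& prev = poses.at(d.from);
        poses[d.to] = Pose{prev.x + d.delta.x, prev.y + d.delta.y, prev.z + d.delta.z};
    }
    return 0;
}
</code></pre>
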
!timeline2.png!

Observations are provided by both sensors. Each time an observation is produced, its timestamp is recorded in order to memorize the pose available at that time (coming from WO, since no other source has produced a pose yet).

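A sketch of what handling an observation could look like under these assumptions: find the pose available at the observation time (the latest WO pose not after it) and memorize its timestamp. availablePoseTime is a hypothetical helper.

<pre><code class="cpp">
#include <cstdint>
#include <map>
#include <set>

using Timestamp = std::int64_t;
struct Pose { double x, y, z; };

// Latest pose at or before t (the pose "available" when the observation was made).
// Assumes the history already contains at least one pose at or before t.
Timestamp availablePoseTime(const std::map<Timestamp, Pose>& poses, Timestamp t) {
    auto it = poses.upper_bound(t);   // first pose strictly after t
    if (it != poses.begin()) --it;    // step back to the pose available at t
    return it->first;
}

int main() {
    std::map<Timestamp, Pose> poses = { {0, {0, 0, 0}}, {100, {0.05, 0, 0}}, {200, {0.10, 0.01, 0}} };
    std::set<Timestamp> keptTimes;

    const Timestamp lidarObservation = 150;                        // a LIDAR frame arrives at t = 150
    keptTimes.insert(availablePoseTime(poses, lidarObservation));  // memorize the pose at t = 100
    return 0;
}
</code></pre>
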
!timeline3.png!

After a while, a pose in the past is given by PG-SLAM. A corresponding edge is added to the graph. The change can then be propagated to the rest of the poses, i.e. to the future.

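One possible way to do this propagation, as a simplified sketch (the correction rule and the names are assumptions, not a decided design): the offset between the corrected past pose and the stored one is applied to that pose and to every later one.

<pre><code class="cpp">
#include <cstdint>
#include <map>

using Timestamp = std::int64_t;
struct Pose { double x, y, z; };

// Applies a PG-SLAM correction given for a past timestamp t: the offset between
// the corrected pose and the stored one is added to every pose from t onwards.
// Assumes a pose exists at exactly t. Translation-only simplification.
void applyPastCorrection(std::map<Timestamp, Pose>& poses, Timestamp t, const Pose& corrected) {
    const Pose old = poses.at(t);
    const Pose offset{corrected.x - old.x, corrected.y - old.y, corrected.z - old.z};

    for (auto it = poses.find(t); it != poses.end(); ++it) {   // t and everything after: the future
        it->second.x += offset.x;
        it->second.y += offset.y;
        it->second.z += offset.z;
    }
}
</code></pre>
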
!timeline4.png!

The graph can then be pruned in order to remove unneeded poses [CORRECTION MIGHT BE NEEDED THERE]. Every pose corresponding to a timestamp is kept.

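A sketch of what such pruning could look like, keeping every pose whose timestamp was memorized plus the current one (illustrative names, simplified types; the exact pruning policy is still open, as noted above):

<pre><code class="cpp">
#include <cstdint>
#include <iterator>
#include <map>
#include <set>

using Timestamp = std::int64_t;
struct Pose { double x, y, z; };

// Removes poses that no observation timestamp refers to, always keeping the
// most recent pose (the current one).
void prune(std::map<Timestamp, Pose>& poses, const std::set<Timestamp>& keptTimes) {
    if (poses.empty()) return;
    const Timestamp latest = poses.rbegin()->first;
    for (auto it = poses.begin(); it != poses.end(); ) {
        const bool keep = (it->first == latest) || (keptTimes.count(it->first) > 0);
        it = keep ? std::next(it) : poses.erase(it);
    }
}
</code></pre>
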
!timeline5.png!

After a while, PG-SLAM produces a new pose corresponding to another LIDAR observation. How do we propagate then? Future AND past? We might need to update the poses corresponding to stereo observations (e.g. if we want to produce the corresponding DEM). In my view, each time a node receives an update, ALL edges leading to this node should be updated.

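A sketch of how that rule could work (my assumption of an implementation, not a decided one): when a node's pose changes, walk all its incident edges breadth-first and re-derive the neighbours' poses, in both directions.

<pre><code class="cpp">
#include <cstddef>
#include <queue>
#include <vector>

struct Pose { double x, y, z; };
struct Edge { std::size_t from, to; Pose delta; };   // pose[to] = pose[from] shifted by delta

// When node `updated` receives a new pose, re-evaluate every edge touching it
// and re-derive the neighbours' poses, past and future alike (breadth-first).
// Translation-only simplification; illustrative names.
void propagateFrom(std::size_t updated, std::vector<Pose>& poses, const std::vector<Edge>& edges) {
    std::vector<bool> done(poses.size(), false);
    std::queue<std::size_t> todo;
    todo.push(updated);
    done[updated] = true;

    while (!todo.empty()) {
        const std::size_t n = todo.front();
        todo.pop();
        for (const Edge& e : edges) {                 // ALL edges leading to/from n
            if (e.from == n && !done[e.to]) {         // forward edge: push the child
                poses[e.to] = {poses[n].x + e.delta.x, poses[n].y + e.delta.y, poses[n].z + e.delta.z};
                done[e.to] = true;
                todo.push(e.to);
            } else if (e.to == n && !done[e.from]) {  // backward edge: pull the parent
                poses[e.from] = {poses[n].x - e.delta.x, poses[n].y - e.delta.y, poses[n].z - e.delta.z};
                done[e.from] = true;
                todo.push(e.from);
            }
        }
    }
}
</code></pre>
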
!timeline6.png!

A bit more pruning. We still need to keep the poses corresponding to a timestamp!

!timeline7.png!

If PG-SLAM updates all poses, we propagate from the most ancient one to the most recent one.
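
A last sketch of that ordering, under the same simplified assumptions: corrections are applied in increasing timestamp order, each shifting the poses that follow it up to the next corrected time, so later poses are rebuilt on top of already-corrected earlier ones.

<pre><code class="cpp">
#include <cstdint>
#include <iterator>
#include <limits>
#include <map>

using Timestamp = std::int64_t;
struct Pose { double x, y, z; };

// Applies a full PG-SLAM update from the most ancient corrected pose to the
// most recent one. Assumes every corrected timestamp exists in the history.
// Translation-only simplification; illustrative names.
void applyFullUpdate(std::map<Timestamp, Pose>& poses, const std::map<Timestamp, Pose>& corrected) {
    for (auto it = corrected.begin(); it != corrected.end(); ++it) {          // oldest first
        const Timestamp t    = it->first;
        const auto      next = std::next(it);
        const Timestamp stop = (next == corrected.end())
                                   ? std::numeric_limits<Timestamp>::max()
                                   : next->first;

        const Pose old = poses.at(t);
        const Pose offset{it->second.x - old.x, it->second.y - old.y, it->second.z - old.z};

        // Shift every pose in [t, stop) by the offset of this correction.
        for (auto p = poses.lower_bound(t); p != poses.end() && p->first < stop; ++p) {
            p->second.x += offset.x;
            p->second.y += offset.y;
            p->second.z += offset.z;
        }
    }
}
</code></pre>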