A Feature Point is a distinct, high-contrast visual feature in an image, such as the corner of a poster on the wall, the grain of a wooden floor, or a detail in the facade of a building.
Map construction works by finding the same Feature Points in multiple images from different viewpoints and estimating the 3D structure of the scene by triangulating those Feature Points.
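The core idea of triangulation can be sketched as finding the point closest to two viewing rays. This is a simplified, hypothetical illustration, not the actual construction pipeline, which solves a much larger multi-view problem:

```python
# Simplified triangulation sketch: the same Feature Point is observed from two
# camera positions, and its 3D location is approximated as the point of closest
# approach between the two viewing rays. Illustrative only; real pipelines
# refine many views jointly.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    """Closest point between rays o1 + s*d1 and o2 + t*d2."""
    w0 = [p - q for p, q in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # approaches 0 when rays are near-parallel (no parallax)
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [o + s * v for o, v in zip(o1, d1)]
    p2 = [o + t * v for o, v in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]  # midpoint of closest approach

# Two cameras 1 m apart both observe a Feature Point ~2 m in front of them.
point = triangulate([0, 0, 0], [0.5, 0, 2], [1, 0, 0], [-0.5, 0, 2])
```

Note how the denominator shrinks toward zero as the rays become parallel: with no parallax between viewpoints, the solution becomes numerically unstable, which is exactly why images captured from a single spot produce a poor map.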
It is important to cover the target environment from multiple viewpoints and, if necessary, from different distances.
You should aim for as much as 50% overlap between the images you want to match.
For best results, the same Feature Points should be matched in at least 3 different images.
When mapping, our Mapping App will notify you if subsequent images can be connected by matching Feature Points.
Below is an illustration of 9 captured images and how they connect to each other.
Keep in mind that not all sequential images need to connect!
The above images were captured in sequence, and images 3 and 4 are not connected to each other. That does not matter as, for example, image 3 will connect to 1, 2 and 9 when the map is constructed.
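The way non-sequential images still end up in one map can be illustrated with a union-find over pairwise matches. This is a hypothetical sketch of the connectivity idea, not the Mapping App's actual algorithm, and the match list below is invented for illustration:

```python
# Illustrative sketch (not the Mapping App's actual algorithm): images belong
# to the same map when they are linked, directly or transitively, by shared
# Feature Point matches. A union-find groups images into connected components.

def connected_groups(images, matches):
    """Partition image ids into groups connected by pairwise matches."""
    parent = {i: i for i in images}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for a, b in matches:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    groups = {}
    for i in images:
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

# Hypothetical match list: images 3 and 4 never match each other directly,
# yet all nine images still form a single connected map.
matches = [(1, 2), (2, 3), (1, 3), (3, 9), (4, 5), (5, 6),
           (6, 7), (7, 8), (8, 9), (9, 1), (9, 2)]
groups = connected_groups(range(1, 10), matches)
```

As long as every image shares Feature Points with some other image in the set, the whole sequence collapses into one group, which is why a missing link between two adjacent captures does not matter.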
Not all spaces can be mapped.
For example, highly reflective surfaces don't have static visual features for map construction.
Another problem is large areas of uniform color with no detectable visual features at all.
There are not enough visual features, and most of them are on an object that is likely to move around.
Plenty of visual features. Some of them are still on a moving object, but many others are on static surfaces. This kind of scene can be mapped easily.
Reflective surfaces cause the camera to see false visual features. These reflected features shift with the viewpoint and cannot be used for map construction.
Low-light scenes are difficult for the camera to see. Any visual features will likely be fuzzy and noisy, and will cause problems even if they are detected.
To construct a good map, the captured images need to cover the same areas from different angles.
When localizing against a map, the map should contain data from a captured image with a similar viewpoint in it.
The captured images don't need to be identical by any means, but they should have a roughly similar angle and distance to the target.
The more captured images a Feature Point appears in, the better its accuracy will be. The system requires a Feature Point to be found in two captured images, but more is better.
No overlap between captured images, as all the images are facing different directions. Matching features can't be found between them, and the images will not connect.
No parallax: all images are captured from a single viewpoint.
Matching Feature Points are found, but the resulting map may be inaccurate because there is no distance between the capture positions.
For better results, try capturing images further apart from each other.
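The effect of capture distance can be seen in a simplified stereo error model. This is an illustration under pinhole-camera assumptions, not the system's actual error model: depth error grows with the square of the distance to the target and shrinks as the baseline between capture positions grows.

```python
# Simplified stereo depth-error model (pinhole assumptions, illustrative only):
# depth z = f * b / disparity, so a small disparity error delta_d propagates
# into a depth error of roughly z^2 * delta_d / (f * b).

def depth_error(z_m, focal_px, baseline_m, disparity_err_px=1.0):
    """Approximate depth uncertainty at distance z_m for a given baseline."""
    return (z_m * z_m * disparity_err_px) / (focal_px * baseline_m)

# Same 5 m target, same camera: a 10x wider baseline gives 10x less depth error.
narrow = depth_error(5.0, 1000.0, 0.1)  # capture positions ~10 cm apart
wide = depth_error(5.0, 1000.0, 1.0)    # capture positions ~1 m apart
```

The quadratic term in distance also explains why far-away features triangulate poorly: doubling the distance to the target quadruples the depth uncertainty unless the baseline grows to compensate.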
Nice overlap of Feature Points between captured images and plenty of parallax between them. The resulting map will be very accurate.
This image illustrates the overlapping Feature Points between the captured images from the previous illustration.
You should aim to have roughly 50% of a captured image matching another one.
By default, the constructed map is in no specific orientation!
The Y-axis of the map might be correctly pointing up, but rotation around the Y-axis is not constrained and can change between different constructions if you remap the same space.
Using just GPS or the device compass to orient the map is not reliable enough.
This makes it difficult to maintain AR content, so we provide a way to accurately define the orientation of the map.
With an Anchor Image, you can make sure the map orientation is preserved when remapping a space.
Only the latest Anchor Image is used when constructing a map.
Using Anchor Images
When using the Mapping App, open the Tools Menu in Workspace mode and click Add Anchor Image.
A notification should appear when the Anchor Image is captured.
When adding the Anchor Image, an image is captured and the device orientation is recorded.
The recorded orientation is then used when constructing the map.
When capturing the Anchor Image, the device camera direction is used to determine the map Z-axis.
Map Y-axis is automatically computed based on the device sensors and "up" direction.
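Taken together, these two rules amount to building an orthonormal basis from the sensor-based "up" vector and the camera's forward direction. The sketch below is an assumption about how such a basis can be derived (a hypothetical helper, not the SDK's actual implementation):

```python
import math

# Hypothetical sketch (not the SDK's actual code): derive map axes from the
# gravity-based "up" vector and the camera's forward direction at Anchor Image
# capture time. Y comes from gravity; Z is the camera forward direction with
# its vertical component removed (one Gram-Schmidt step); X completes a
# right-handed basis.

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def map_axes(camera_forward, up):
    y = normalize(up)
    f_dot_y = sum(f * u for f, u in zip(camera_forward, y))
    z = normalize([f - f_dot_y * u for f, u in zip(camera_forward, y)])
    x = cross(y, z)  # right-handed basis: x = y × z
    return x, y, z

# Device held level but tilted slightly downward while capturing the Anchor
# Image: the tilt is removed from Z, and Y stays aligned with gravity.
x, y, z = map_axes(camera_forward=[0.0, -0.1, 1.0], up=[0.0, 1.0, 0.0])
```

Because the vertical component of the camera direction is projected out, the device does not need to be held perfectly level when capturing the Anchor Image; only its heading determines the map Z-axis.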