Hi everyone, my name is Konstantin, I'm a computer vision engineer at Arcona, and today we're going to share an important technological update with you. Recently we adapted and ported our SLAM algorithms to a mobile device – a pretty old-school mobile device, to be honest. This is an iPad Air, released in 2013, so almost 5 years old. And that's one of our main goals – to make our digital world as accessible as possible, so we try to support a wide range of devices, not just the top ones.

So we have our test app installed here, ARViewer. Let's take a look at it. What a stunning picture! 🙂

SLAM allows us to determine the 3D position of your device in space using just the camera input. The way it works: SLAM detects special patches on images, called "visual features" – you can see these circles. They also show the size and orientation of each feature, which makes the features scale- and rotation-invariant, and that's very important. By tracking those features from frame to frame, we can analyse the motion and calculate our 3D position in space.

While I'm moving, you can see this green trajectory of mine moving as well. It goes round the corner a bit and then back. We can also see the position here.

Having this precise positioning combined with global coordinates via GPS, and with our plane detection algorithms, we can deliver really rich augmented reality experiences. And more: you can see these objects not just from one device but from multiple at the same time, and we can place an object as if it really exists in our real world, so it opens up a lot of opportunities for really interesting AR applications.

Later we'll show more of how it works and looks, so stay tuned – and thank you for your attention!
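To make the "track features from frame to frame, then integrate the motion" idea concrete, here is a minimal toy sketch in Python. This is not Arcona's actual SLAM (which recovers full 3D pose from camera images); it just illustrates the principle on fake 2D feature positions whose shift mirrors the camera motion, with a median estimator so one mistracked feature doesn't break the result.

```python
# Toy sketch of frame-to-frame feature tracking (NOT the real SLAM pipeline):
# estimate each frame's camera shift from matched feature positions, then
# accumulate the shifts into a trajectory (the "green trajectory" in the demo).
from statistics import median

def estimate_shift(prev_feats, curr_feats):
    """Estimate the per-frame camera shift as the median feature displacement.
    The median keeps the estimate robust to a few mistracked features."""
    dxs = [c[0] - p[0] for p, c in zip(prev_feats, curr_feats)]
    dys = [c[1] - p[1] for p, c in zip(prev_feats, curr_feats)]
    return (median(dxs), median(dys))

def build_trajectory(frames):
    """Integrate frame-to-frame shifts into a trajectory from the origin."""
    pose = (0.0, 0.0)
    trajectory = [pose]
    for prev, curr in zip(frames, frames[1:]):
        dx, dy = estimate_shift(prev, curr)
        pose = (pose[0] + dx, pose[1] + dy)
        trajectory.append(pose)
    return trajectory

# Three fake frames with the same 3 features: the camera moves by (1, 0),
# then by (0, 2); the last feature in frame 3 is deliberately mistracked.
frames = [
    [(0, 0), (5, 1), (2, 3)],
    [(1, 0), (6, 1), (3, 3)],     # all features shifted by (1, 0)
    [(1, 2), (6, 3), (90, 90)],   # shifted by (0, 2), one outlier
]
print(build_trajectory(frames))   # the outlier does not corrupt the path
```

A real implementation would replace the fake features with detected keypoints (e.g. scale- and rotation-invariant descriptors, as shown by the circles in the demo) and estimate a full 6-DoF pose instead of a 2D shift, but the track-then-integrate structure is the same.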