Introducing Project Tango Area Learning – Google I/O 2016 Coverage

Please note that these are just my thoughts and a summary of the event. This session and many others can be live streamed in the Google I/O app or on the website, but if you're short on time and don't want to sit through several one-hour sessions, just read our Google I/O coverage articles.

We’ve been hearing things about Project Tango for a while now. For those who don’t know, Project Tango is about enabling mobile devices to understand the world around them. For example, if you’re in a really big building like the Mall of America and you’re trying to find a McDonald’s, your Tango device will show you where to go. It’s essentially another take on augmented reality.

There are three main technologies behind Project Tango: motion tracking, depth perception and area learning. Motion tracking is just what the name implies: it tracks your movements, how far you’ve moved, and where you started and ended. Depth perception gives Project Tango the 3D geometry of an object or of the area around you.
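If you’re curious what this looks like for developers, the Tango service exposes these capabilities through a configuration object that an app fills out before connecting. Here’s a rough sketch in Java based on my reading of the Tango SDK; treat the exact class and constant names as approximate rather than guaranteed.

import android.app.Activity;

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;

// Minimal sketch: turn on motion tracking and depth perception when connecting
// to the Tango service. Assumes a Tango-capable Android device and the Tango
// Java SDK on the classpath; identifiers may differ slightly from the real SDK.
public class TangoBasicsActivity extends Activity {

    private Tango mTango;

    @Override
    protected void onResume() {
        super.onResume();
        // The Runnable fires once the app is bound to the Tango service.
        mTango = new Tango(this, new Runnable() {
            @Override
            public void run() {
                TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
                config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true); // track device movement
                config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);          // enable depth perception
                mTango.connect(config);
            }
        });
    }

    @Override
    protected void onPause() {
        super.onPause();
        mTango.disconnect();
    }
}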

What is Area Learning?

Area Learning is what this I/O live stream mostly focused on. Area Learning is essentially the computer equivalent of our memories and how we remember things. When you buy a new house you have to remember where everything is, which door goes where, which room is which, and the list goes on. Area Learning is very similar to that. When you bring your Project Tango device into a new space, it uses its camera to look at the space it’s in, computes a mathematical description of what the space looks like, and stores that description in its memory. If you leave that space and come back later, the device can compare what it sees with its camera against the description stored in its memory, and when the two match up it knows exactly where it is in that space.
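In developer terms, that stored description is what the Tango SDK calls an Area Description File (ADF). Here’s a rough sketch of how an app might record one, again going off the Java API names as I understand them, so the identifiers are approximate.

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;

// Rough sketch: walk around the space with learning mode on, then save the
// learned description of the space as an Area Description File (ADF).
// Identifiers are based on the Tango Java SDK and may not match it exactly.
public class AreaLearningRecorder {

    private final Tango mTango;

    public AreaLearningRecorder(Tango tango) {
        this.mTango = tango;
    }

    public void startLearning() {
        TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true); // remember the space
        mTango.connect(config);
    }

    // Call after the user has walked around the space for a while.
    public String finishAndSave() {
        String adfUuid = mTango.saveAreaDescription(); // persists the learned description
        mTango.disconnect();
        return adfUuid; // hand this UUID back later to recognize the same space
    }
}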

How does Area Learning Work?

How does Area Learning work under the hood? Tango devices have a wide-angle camera that they use to look at the space you’re in, and that same lens is used to find key landmarks in the space. As you move around, you see things from a different perspective. Tango looks at all the landmarks in the space and watches how they move as the device moves, and from that apparent motion it can work out its own motion. That, though, is just the motion tracking part; it doesn’t yet incorporate Area Learning. You need to add Area Learning in order for the device to remember the space. It doesn’t need to remember the whole space, just those landmarks and what they look like. Tango stores only the position of each landmark and what it looks like in its memory, and that is how it can remember the space.
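Putting that together, recognizing a learned space roughly means handing Tango the saved description and then asking for the device’s pose relative to that description; once a valid pose comes back, the camera view has matched the stored landmarks. A sketch, with the same caveat that the API names are my best recollection of the Tango Java SDK:

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;

// Rough sketch: reload a previously learned space and check whether the device
// has recognized it. Treat the identifiers below as approximate.
public class Relocalizer {

    private final Tango mTango;

    public Relocalizer(Tango tango) {
        this.mTango = tango;
    }

    public void connectWithSavedSpace(String adfUuid) {
        TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
        // Hand back the UUID of the saved area description so Tango can compare
        // what the camera sees now against the stored landmarks.
        config.putString(TangoConfig.KEY_STRING_AREADESCRIPTION, adfUuid);
        mTango.connect(config);
    }

    // Returns true once the current camera view matches the stored description,
    // i.e. the device knows where it is inside the learned space.
    public boolean isRelocalized() {
        TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        TangoPoseData pose = mTango.getPoseAtTime(0.0, framePair); // 0.0 = latest pose
        return pose.statusCode == TangoPoseData.POSE_VALID;
    }
}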

This is definitely really cool tech that will be making its way into applications as soon as developers start adding the Area Learning code to their existing augmented reality applications. Google has also partnered with Lenovo to build the first consumer-ready Tango-enabled smartphone, which we should see later this year.


About the author

Josh Ramnauth

A young tech enthusiast who loves all sorts of technology and loves to write about it. I live and breathe innovation.
