The power of the cloud, artificial intelligence, and machine learning is making smart cities and data-based Location of Things navigation a reality. Join the Principal Product Manager for Microsoft Azure Maps and others and learn how advanced location technology will revolutionize everything from autonomous cars to connected cities. Don’t miss this VB Live event!
Location data is the foundation of technology: It’s what binds a device and a user, a user and the environment they’re in. And as location data moves to the cloud and gets smarter, powered by artificial intelligence and machine learning, the number of potential applications for smart location data is exploding, says Chris Pendleton, Principal PM for Azure Maps. We’re on the threshold of creating a smarter society, built on the hundreds of millions of connected devices that together create The Location of Things.
“This is the planet we all live on — we’re always going to need to know where things are,” Pendleton says. “Now it’s really about broadening the definition of ‘location’ and making that knowledge more accessible.”
The Location of Things concept is built on four pillars. First is the external mapping platform, which is what users are familiar with today: the location data that gives us directions on a map, house addresses, and business listings. Second is indoor mapping, meaning finding locations and pathways inside buildings, which requires a very different mapping methodology. Third is analysis, or business intelligence, built around location analytics. And fourth is navigation, which extends all the way to autonomous driving.
The first pillar is already fairly sophisticated, but the other three are gaining traction as new sensor technology is developed and localization improves, Pendleton says.
GPS is pretty inaccurate, particularly in urban corridors. You might have noticed the phenomenon yourself as you drive through a city full of skyscrapers — an echo or bounce, where the GPS signal ricochets off buildings and offsets your position.
Additional sensors, such as a car's camera, can pinpoint that position far more precisely, he says.
“From a front-facing car camera, for example, I can actually look at an index of images and tell exactly where you are, because there’s only one place, probably, in that one square kilometer that looks exactly like that position from that exact spot, even getting down to a single lane and a position where a camera would be projected from,” he says. “Using the camera sensor and computer vision to identify objects, and convert those into vector mapping, allows us more accurate positioning.”
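The matching Pendleton describes can be thought of as a nearest-neighbor lookup over an index of location-tagged images. Here is a minimal sketch of that idea, assuming each indexed position and each camera frame has already been reduced to a feature descriptor by a computer-vision pipeline; the position names and toy vectors below are purely illustrative.

```python
import math

# Hypothetical index: feature descriptors for known, lane-level positions.
# In practice these would come from a computer-vision pipeline; these are toy values.
image_index = {
    "5th-ave-lane-1": [0.9, 0.1, 0.3],
    "5th-ave-lane-2": [0.8, 0.2, 0.5],
    "pine-st-lane-1": [0.1, 0.9, 0.4],
}

def cosine_similarity(a, b):
    """Similarity between two feature descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def localize(camera_descriptor):
    """Return the indexed position whose descriptor best matches the camera frame."""
    return max(image_index,
               key=lambda pos: cosine_similarity(camera_descriptor, image_index[pos]))

print(localize([0.85, 0.15, 0.45]))  # prints "5th-ave-lane-2"
```

A production system would use robust image features and an approximate-nearest-neighbor index over millions of entries, but the principle is the same: only one indexed position looks like the current frame.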
The technique isn't single-use, however, good only for autonomous cars, he points out.
For example, there's smart-mapping the internal workings of a manufacturing plant, from robot and vehicle pathways to human crossings, and making that traffic move as smoothly and efficiently as possible across optimized, autonomous routes.
“We can not only create the map of the manufacturing plant, and not only build the travel, but we actually also make all the moving parts talk to and optimize for each other,” Pendleton says.
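Once a plant is mapped as a graph of waypoints, routing its moving parts becomes a shortest-path problem. The sketch below uses Dijkstra's algorithm over a toy floor plan; the waypoint names and edge weights are invented for illustration, with weights standing in for distance adjusted for current congestion.

```python
import heapq

# Toy plant-floor graph: nodes are waypoints, edge weights are travel cost
# (e.g. distance adjusted for congestion). All names are illustrative.
plant_graph = {
    "dock":     {"aisle_a": 2, "aisle_b": 5},
    "aisle_a":  {"crossing": 4, "press": 7},
    "aisle_b":  {"crossing": 1},
    "crossing": {"press": 2},
    "press":    {},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: the cheapest path for a robot or vehicle."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

cost, path = shortest_route(plant_graph, "dock", "press")
print(cost, path)
```

Making the moving parts "talk to each other," as Pendleton puts it, would amount to re-running this kind of routing as edge weights change with live traffic on the floor.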
It's almost like a smart city in miniature, and it shows the potential of connected infrastructure and regenerative maps that continuously update, allowing the system to predict events, adjust on the fly, or surface potential issues simply through the density of data. You can identify a traffic incident faster, get paramedics to a scene more quickly, and even cure traffic congestion.
“That’s one of the things we’re trying to do: solve traffic congestion in ways where you’re using data in aggregate, but the infrastructure needs to be there,” Pendleton says.
Another, completely different scenario he's excited about is the change that could occur in the foster care system, where participants could identify the services available in their region, such as medical care and food assistance, and then find the transportation network, the buses or light rail, to get themselves there. Intelligent planning is the next step: identifying where foster homes are concentrated, which services they need, and how to optimize transportation so people can reach those services more easily.
Moving forward, he says, "the first set of map services is out there, but if you start to think about data models and implementation, you're building the world in your mind as a hierarchical, topological, temporal graph."
In other words, it means looking at data in the context of a world, a continent, the United States, Washington, a county, a city, and at how data from all these regions touches, overlaps, moves, and shifts over time. That view is the foundation of a new way of mapping.
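The hierarchical part of that graph can be sketched very simply: each region points to its parent, so any observation can be rolled up from a city to a county, a state, a country, and the world. This is a minimal illustration, not Azure Maps' actual data model, and the region names are just examples.

```python
# Illustrative hierarchical region graph: each node knows its parent,
# so data attached at any level can be rolled up toward the root.
regions = {
    "world":         {"parent": None},
    "north-america": {"parent": "world"},
    "united-states": {"parent": "north-america"},
    "washington":    {"parent": "united-states"},
    "king-county":   {"parent": "washington"},
    "seattle":       {"parent": "king-county"},
}

def lineage(region):
    """Walk up the hierarchy from a region to the root."""
    chain = []
    while region is not None:
        chain.append(region)
        region = regions[region]["parent"]
    return chain

print(lineage("seattle"))
# ['seattle', 'king-county', 'washington', 'united-states', 'north-america', 'world']
```

The topological and temporal dimensions Pendleton mentions would add adjacency between sibling regions and timestamps on the data attached to each node, so the graph can answer how regions touch and how they shift over time.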
“This is a big moment of change,” Pendleton says. “We’re understanding the importance, the criticality, and the significance of maps, the map data quality, the vintage of data, and how the penetration of devices all come together to benefit not just leveraging a map for reference and context, but actually creating maps. And Microsoft is extending partnerships with some of the best companies in the world to further engage and get deeper.”
To learn more about how smart location data is getting smarter, how it’s not just transforming industry now but is set to change cities in the future, don’t miss this VB Live event.
Don’t miss out!
In this VB Live event, you’ll learn:
- How to leverage the power of the cloud, AI, and machine learning across devices by contextualizing location data in real time
- The role of location-based data mapping in the “Location of Things”
- The application of data-enriched mapping to industries like retail and automotive
- How “Location of Things” powered by geographical data can be used to connect autonomous driving, smart mobility, and smarter cities
Speakers:
- Chris Pendleton, Principal PM, Azure Maps
- Peter Frans Pauwels, Co-founder, TomTom
- Jennifer Belissent, Principal Analyst, Forrester
- Rachael Brownell, Moderator, VentureBeat
Sponsored by TomTom