
Facebook announced today that it is using artificial intelligence to make sure that 360-degree photos uploaded to the social network look their best when other people view them. At its @Scale conference, the company laid out a system that uses deep neural networks to correct common orientation errors in uploaded photos.

If someone taking a 360-degree photo doesn't hold the camera level with the horizon, the resulting image comes out tilted, which makes it jarring to view and breaks the sense of immersion when the image is seen in virtual reality.

Facebook's system takes in a photo and outputs a pair of values for the tilt and roll correction needed to bring the photo's horizon back in line. That way, users don't feel like they're viewing a crooked image when they look around a scene. The system is based on AlexNet, an image recognition network that has been applied to other problems, such as classifying the contents of images. It isn't in production yet, but the company's research shows promise.
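Facebook hasn't published implementation details, but the idea of applying a predicted tilt/roll pair can be sketched in plain Python. Everything below — the axis conventions and the function names `rotation_matrix` and `level_horizon` — is an illustrative assumption, not Facebook's actual code: the network predicts two angles, and the viewer applies the inverse rotation so the horizon renders level.

```python
import math

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_matrix(tilt, roll):
    """Model a camera tilted (about the x-axis) then rolled (about the z-axis).

    Angles are in radians. The axis conventions here are assumptions made
    for illustration; only the tilt/roll pair comes from the article.
    """
    ct, st = math.cos(tilt), math.sin(tilt)
    cr, sr = math.cos(roll), math.sin(roll)
    tilt_m = [[1, 0, 0], [0, ct, -st], [0, st, ct]]
    roll_m = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]
    return matmul(roll_m, tilt_m)

def level_horizon(direction, tilt, roll):
    """Undo the estimated camera rotation by applying its inverse."""
    r = rotation_matrix(tilt, roll)
    inv = [[r[j][i] for j in range(3)] for i in range(3)]  # transpose = inverse
    return [sum(inv[i][k] * direction[k] for k in range(3)) for i in range(3)]

# A viewing direction distorted by a 3-degree tilt and 5-degree roll
# comes back level once the predicted correction is applied.
ahead = [0.0, 0.0, 1.0]
r = rotation_matrix(math.radians(3), math.radians(5))
tilted = [sum(r[i][k] * ahead[k] for k in range(3)) for i in range(3)]
leveled = level_horizon(tilted, math.radians(3), math.radians(5))
```

Because a rotation matrix is orthogonal, its transpose is its inverse, so correcting the image only requires the two predicted angles, not a full re-projection pipeline.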

Making 360-degree photos look good on Facebook is key for the social networking company, which is investing heavily in virtual reality. For example, Facebook's Rooms social VR app lets avatars hang out against 360-degree photo backdrops. If those images don't look their best, the entire experience of using the app suffers.



In addition to correcting orientation, Facebook has had to contend with the massive size of the 360-degree photos uploaded to its service. File size may not be a huge problem on fast networks and powerful devices, but it can be an issue for mobile devices on cellular connections.

Facebook converts each photo into a cube map and stores those cube faces at multiple resolutions. The faces are then broken up into 512×512-pixel tiles. When a user pulls up a photo, Facebook calculates which resolution and which tiles within the image need to be loaded. If a high-enough resolution isn't available right away, the social network renders a lower-resolution version until the correct quality arrives.
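The article doesn't describe the selection logic, but the tiling scheme it outlines can be sketched as follows. The constants and function names (`TILE`, `tiles_for_region`, `pick_face_size`) are hypothetical; only the 512×512 tile size and the lower-resolution fallback come from the source.

```python
TILE = 512  # tile edge length in pixels, per the article

def tiles_for_region(face_size, x0, y0, x1, y1):
    """Return (col, row) indices of the tiles covering the pixel region
    [x0, x1) x [y0, y1) of one cube face (face_size assumed a multiple of TILE)."""
    last = face_size // TILE - 1
    cols = range(x0 // TILE, min((x1 - 1) // TILE, last) + 1)
    rows = range(y0 // TILE, min((y1 - 1) // TILE, last) + 1)
    return [(c, r) for r in rows for c in cols]

def pick_face_size(stored_sizes, desired):
    """Choose the largest stored face resolution that doesn't exceed what the
    viewer needs; if none is ready, fall back to the smallest one so something
    renders while higher quality loads."""
    usable = [s for s in stored_sizes if s <= desired]
    return max(usable) if usable else min(stored_sizes)
```

For example, a 1024×512-pixel viewport into a 2048-pixel face touches only two tiles, so the client can fetch those instead of the whole face.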

Update 10:45 Pacific: Facebook’s auto-orientation system is not in production yet. The story has been updated to clarify that.
