For frequent smartphone camera users, the new low-light photography tool Night Sight was quite possibly the most compelling addition to Google's Pixel lineup last year; it used machine learning to restore colors and details that would otherwise be lost to tiny phone camera sensors. Now third-party developer Halcyon Products has brought a similar tool to iPhones, enabling users to enjoy the benefits of computational photography while Apple works on an integrated iOS alternative.

NeuralCam Night Photo uses a solution that’s been seen before — an HDR-style merging of multiple frames captured at different exposure levels — but with an exclusive focus on fixing images taken in low light. After focusing and snapping the initial image, the app asks you to hold your iPhone still as it grabs more color data, then spends some time compositing and outputting the final image.
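NeuralCam's actual pipeline is proprietary, but the general multi-frame idea is straightforward: because sensor noise is roughly independent from frame to frame while the scene is not, averaging a burst of aligned captures suppresses grain and recovers detail a single dim exposure would bury. The sketch below is a toy illustration of that principle in numpy, not the app's code:

```python
import numpy as np

def merge_frames(frames):
    """Average a burst of aligned frames to suppress sensor noise.

    Noise is roughly independent across frames, so averaging N frames
    cuts its standard deviation by about sqrt(N).
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst of 8 noisy low-light captures of the same dim scene.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 20.0)                          # a dim, flat patch
frames = [scene + rng.normal(0, 5, scene.shape) for _ in range(8)]

merged = merge_frames(frames)
print(round(float(frames[0].std()), 2))  # single-frame noise level
print(round(float(merged.std()), 2))     # merged noise, roughly sqrt(8)x lower
```

A real implementation also has to align the frames before merging (hence the app's "hold your iPhone still" prompt), which is where most of the engineering difficulty lies.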

Ideally, the result will look roughly as detailed and colorful as if it were taken in much brighter light, and in some test shots, that was indeed the case. While not perfect, low-light images captured with NeuralCam unquestionably look markedly better overall than ones taken with the iPhone's integrated camera app.

Above: Raw iPhone XS camera output from a dimly lit room on the left, NeuralCam photo on the right.

Image Credit: Jeremy Horwitz/VentureBeat

However, dim light creates focus lock challenges — an issue NeuralCam lets you combat with an optional manual focus slider — so autofocus may or may not deliver a sharp result on its own. In some cases, a NeuralCam image will have much better colors but lower sharpness than raw iPhone camera output due to focus issues and/or hand shake.

The app is also quite slow. On Apple's current top-end iPhone XS, capturing images in dim light took 3 to 5 seconds, with another 10 to 15 seconds for processing. iPhone XS and XS Max models can deliver full-resolution 4,032-by-3,024 images, but earlier devices capture at markedly lower resolutions due to limited RAM, and only the iPhone 6 and newer models are supported.

Halcyon CEO Levi Szabo tells VentureBeat that the company's machine learning-based brightening solution is based on a curated dataset of several thousand high-quality smartphone images, which blossomed into a synthetic dataset of roughly 10 million images for training. The company is using a customized, iOS-efficient convolutional neural network for initial brightening, adding sharpening and visual effects in post-processing to improve the results. All of the image processing is handled on your device; nothing is sent off to a cloud server.
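Halcyon's network and post-processing steps aren't public, but the brighten-then-sharpen structure the company describes can be illustrated with stand-ins: a simple gamma curve in place of the neural brightener, and an unsharp mask as the post-processing sharpening pass. Both functions below are assumptions for illustration only, not NeuralCam's actual operations:

```python
import numpy as np

def brighten(img, gamma=0.5):
    """Gamma-curve brightening: lifts shadows while keeping highlights in range.
    (A stand-in for the app's CNN-based brightening stage.)"""
    return np.clip(img, 0.0, 1.0) ** gamma

def box_blur(img, k=3):
    """Tiny separable box blur, used as the low-pass step of an unsharp mask."""
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the detail the blur removed."""
    return np.clip(img + amount * (img - box_blur(img)), 0.0, 1.0)

# A very dark gradient patch, normalized to [0, 1].
dark = np.linspace(0.0, 0.09, 100).reshape(10, 10)
result = unsharp_mask(brighten(dark))
print(float(result.mean()) > float(dark.mean()))  # processed patch is brighter
```

The on-device constraint the article mentions is why the network has to be "iOS-efficient": everything above the trivial operations shown here must fit within a phone's memory and compute budget rather than running on a server.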

NeuralCam is available now from the iOS App Store for $3. It's designed for iPhones, but it can run on iOS 12 iPads as well.

Updated on 8-29 at 8:28 a.m. Pacific: We’ve added additional details on NeuralCam’s machine learning solution above.
