Optical character recognition (OCR), the conversion of images of handwritten or printed text into machine-readable text, is a science that dates back to the early ’70s. But algorithms have long struggled with text that isn’t aligned along a straight horizontal line, which is why researchers at Amazon developed what they call TextTubes: detectors for curved text in natural images that model each text instance as a tube around its medial (middle) axis. In a paper describing the work, the coauthors claim their approach achieves state-of-the-art results on a popular OCR benchmark.
As the researchers explain, reading scene text is typically broken down into two successive tasks: text detection and text recognition. The first involves localizing characters, words, and lines using contextual clues, while the second aims to transcribe their content. Both are easier said than done — text in the wild is affected not only by deformations, but also by viewpoint changes and arbitrary fonts.
The team’s solution is a “tube” representation of the text reference frame that captures most of the variability, taking advantage of the fact that target text is usually a concatenation of characters of similar size. It’s formulated as a mathematical function that enables the training of machine learning scene text detectors, in contrast to traditional approaches that use overlap- and noise-prone rectangles and quadrilaterals to capture text information.
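To make the idea concrete, here is a minimal sketch of how a "tube" can be built from a medial axis and an average radius. This is an illustration under assumptions, not the paper's actual formulation: the function name, the polyline-plus-scalar-radius parameterization, and the offsetting scheme are all simplifications for clarity.

```python
import math

def tube_polygon(medial_axis, radius):
    """Illustrative sketch: approximate a text instance as a 'tube' by
    offsetting the medial-axis polyline by the average character radius
    on both sides, yielding a closed polygon outline.
    (Hypothetical helper; not the paper's implementation.)"""
    left, right = [], []
    for i, (x, y) in enumerate(medial_axis):
        # Estimate the tangent direction from neighboring axis points
        x0, y0 = medial_axis[max(i - 1, 0)]
        x1, y1 = medial_axis[min(i + 1, len(medial_axis) - 1)]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        # Unit normal, perpendicular to the tangent
        nx, ny = -dy / norm, dx / norm
        left.append((x + radius * nx, y + radius * ny))
        right.append((x - radius * nx, y - radius * ny))
    # Walk down one side and back up the other to close the polygon
    return left + right[::-1]

# A gently curved medial axis with radius 10 yields a band-shaped outline
outline = tube_polygon([(0, 0), (50, 10), (100, 0)], radius=10)
print(len(outline))  # 6 vertices: 3 per side
```

Because the polygon follows the axis wherever it bends, the same few parameters (axis points plus one radius) can describe curved text that a rectangle or quadrilateral would capture only with large amounts of background noise.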
The researchers evaluated TextTubes’ performance on CTW-1500, a data set of 1,500 images collected from natural scenes and image libraries containing over 10,000 text instances, with at least one curved instance per image, and on Total-Text, which contains 1,255 training images and 300 test images with one or more curved text instances. They report industry-leading results of 83.65% on CTW-1500, compared with the closest method’s 75.6%.
“Modeling an instance’s medial axis and average radius … captures information about the instance overall,” wrote the paper’s coauthors. “On datasets that consist of individual words, such as Total-Text, our model is able to achieve state-of-the-art performance. On datasets that have line-level annotations, such as CTW-1500, our model is able to better capture textual information along an instance’s separate words.”
Assuming TextTubes makes its way into production someday, it could be a boon for enterprises that rely heavily on OCR to conduct business. It’s estimated that paper remains part of over 80% of digital processes, and roughly 97% of small businesses still use paper checks. That’s perhaps why the OCR solutions market is anticipated to be worth $13.38 billion by 2025, according to Grand View Research.