Twitter has just upgraded the machinery behind its search, this time with added real-time human computation!

Since machines are terrible at things like irony and instantly forming new associations and contexts between seemingly unrelated terms (e.g., “binders of women” and “presidential debate”), the company’s brilliant engineers have decided to call in the big guns: Actual. People.

Although the prevailing wisdom would have it that we meatbags are ridiculously underpowered in the raw-computation department, Twitter devs Edwin Chen and Alpa Jain write this morning on the company’s engineering blog that meatbags will be used to create annotations for newly trending search terms.

From the blog:

First, we monitor for which search queries are currently popular. Behind the scenes: we run a Storm* topology that tracks statistics on search queries. … As soon as we discover a new popular search query, we send it to our human evaluators, who are asked a variety of questions about the query [via a custom pool of specialized workers from Amazon’s Mechanical Turk service]. … Finally, after a response from an evaluator is received, we push the information to our backend systems, so that the next time a user searches for a query, our machine learning models will make use of the additional information. For example, suppose our evaluators tell us that [Big Bird] is related to politics; the next time someone performs this search, we know to surface ads by @barackobama or @mittromney, not ads about Dora the Explorer.

* Storm is the open-source, distributed real-time computation system Twitter uses to spot spikes in search queries as they occur. It was built at BackType, which Twitter acquired, and you can check it out on GitHub if that sounds interesting to you.
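The pipeline the blog describes — count queries, flag the newly popular ones, ask a human judge, push the answer to the backend — can be sketched roughly as follows. Everything here is illustrative: the class, the threshold, and the stub judge are assumptions, not Twitter's actual code, and the real system runs as a Storm topology rather than an in-process counter.

```python
from collections import Counter

POPULARITY_THRESHOLD = 3  # assumed cutoff for "newly popular"

class AnnotationPipeline:
    """Hypothetical sketch of the query-annotation loop described in the post."""

    def __init__(self, evaluate):
        self.counts = Counter()   # stands in for the Storm topology's query stats
        self.annotations = {}     # stands in for the backend annotation store
        self.evaluate = evaluate  # stands in for a Mechanical Turk judge

    def observe(self, query):
        """Record one search; trigger human evaluation when the query first gets popular."""
        self.counts[query] += 1
        if self.counts[query] == POPULARITY_THRESHOLD and query not in self.annotations:
            # A single trusted judge responds, and the answer is pushed immediately.
            self.annotations[query] = self.evaluate(query)

    def annotation_for(self, query):
        """What the search backend would see on the next request for this query."""
        return self.annotations.get(query)

# Usage: a stub judge that labels anything mentioning "bird" as politics.
judge = lambda q: "politics" if "bird" in q else "other"
pipe = AnnotationPipeline(judge)
for _ in range(3):
    pipe.observe("big bird")
print(pipe.annotation_for("big bird"))  # -> politics
```

The point of the sketch is the shape of the flow, not the machinery: popularity detection and annotation storage are separate concerns, joined only by the human-evaluation step in the middle.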

Twitter says it uses the meatbag method for other tasks and also focuses on making its machine learning better with human input. For those worried about the quality of said input, Twitter assures the public that only the finest of Mechanical Turk workers are being tapped to handle these kinds of tasks. As the blog notes, “Having highly trusted workers means we don’t need to wait for multiple annotations on a single search query to confirm validity, so we can send responses to our backend as soon as a single judge responds. This entire pipeline is designed for real-time, after all, so the lower the latency on the human evaluation part, the better.”

