Return of the human computers
IT WAS late summer 1937, and the recovery from the Depression had stalled. American government officials had stimulus money to spend but, with winter looming, there were few construction projects to fund. So the officials created office posts instead. One project was assigned to a floor of a dusty old New York industrial building, not far from Times Square. It would eventually house 300 computers―humans, not machines.
The computers crunched through the calculations necessary to create mathematical tables, then an indispensable reference tool for many scientists. The calculations were complex and the computers, drawn largely from the ranks of New York’s poor, possessed only basic numeracy. So the mathematicians in charge of the project worked out how to break each calculation down into simple operations, the outcomes of which could be combined to give a final result.
It was a technique that had been employed for decades across America and Europe. The field of human computing even had its own journal and trade-union representation. Computing offices calculated ballistics trajectories, processed census statistics and charted the course of comets. They would continue to do so until the 1960s, when electronic computers became cheap enough to consign the profession to history.
Until recently, that is. Over the past few years, human computing has been reborn. The new generation of human computers carries out different tasks, but it mirrors its predecessors in many other ways. Its members are being drafted in to perform tasks that machines cannot. They are employed in large numbers and are organised into streamlined workflows. And, as was the case in the age before electronic computers, their output is combined to generate results that could not easily be produced in any other way.
In one proof-of-principle experiment, published earlier this year, human computers were used to create encyclopedia entries. Like performing mathematical calculations, this is a skilled job, but one that can be broken down into simpler parts, such as initial research, writing and editing. Aniket Kittur and his colleagues at Carnegie Mellon University in Pittsburgh, Pennsylvania, created software, known as CrowdForge, that manages the process. It hands out tasks to online workers, whom it contacts via Mechanical Turk, an outsourcing website run by Amazon. The workers send their work back to CrowdForge, which combines their output to produce surprisingly readable results.
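To make the division of labour concrete, here is a minimal sketch, in Python, of how a CrowdForge-style partition, map and reduce workflow might be wired together. The helper post_task() and the prompts are assumptions for the sake of illustration; they do not reproduce CrowdForge's actual code or the Mechanical Turk API.

```python
# A minimal sketch of a CrowdForge-style partition/map/reduce workflow.
# post_task() is a hypothetical stand-in for whatever call a real system
# would make to a crowdsourcing platform such as Mechanical Turk.

def post_task(instructions, payload=None):
    """Send one small task to a human worker and return their answer as text."""
    raise NotImplementedError("Replace with a call to a crowdsourcing API")

def write_article(topic):
    # Partition: ask one worker to propose an outline of sections.
    outline = post_task(f"List 4-6 section headings for an article on {topic}")

    # Map: ask a different worker to research and draft each section.
    drafts = [post_task(f"Write a short paragraph on '{heading}' "
                        f"for an article about {topic}")
              for heading in outline.splitlines()]

    # Reduce: ask a final worker to edit the drafts into a coherent whole.
    return post_task("Edit these paragraphs into one readable article",
                     payload="\n\n".join(drafts))
```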
Several American start-ups are operating similar workflows. CastingWords breaks audio files down into five-minute segments and farms each out to a transcriber. Each transcription is automatically bounced back to other workers for checking and, once deemed good enough, an (electronic) computer combines the segments and returns the finished product to the customer. At CloudCrowd a similar system is used to co-ordinate teams of human translators. Others are combining human and artificial intelligences. An app called oMoby, produced by IQ Engines, can identify objects in images snapped by iPhone users. First it applies object-recognition software, which may not be able to cope if the lighting is poor or the image was captured from an unusual angle. When that happens, the image is sent to a human analyst. Either way, the user gets an answer in half a minute or so.
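The oMoby pipeline amounts to a machine-first, human-fallback pattern. The sketch below shows one way such a hand-off might look; the stub functions and the confidence threshold are assumptions, not IQ Engines' real interface.

```python
# Sketch of the machine-first, human-fallback pattern used by services like
# oMoby. The two stand-in functions below are hypothetical placeholders.
import random

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off below which a human is consulted

def recognise_with_software(image_bytes):
    """Stand-in for an object-recognition model: returns (label, confidence)."""
    return "coffee mug", random.random()

def ask_human_analyst(image_bytes):
    """Stand-in for routing the image to a human worker for identification."""
    return "coffee mug (identified by analyst)"

def identify_object(image_bytes):
    label, confidence = recognise_with_software(image_bytes)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                       # the software is sure enough
    return ask_human_analyst(image_bytes)  # poor lighting, odd angle, etc.

print(identify_object(b"...jpeg bytes..."))
```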
Much more is to come. In old-fashioned computing offices, workflows were co-ordinated by senior staff, often mathematicians, who had worked out how to deconstruct the complex calculations the computers were tackling. Now silicon foremen such as CrowdForge oversee human computers. These algorithms, which co-ordinate workers by plugging into Mechanical Turk and other online piecework platforms, are relatively new and are likely to get considerably more sophisticated. Researchers are, for example, creating software to make it easier to assign tasks to workers―or, to put it another way, to program humans.
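What "programming humans" might look like in practice is suggested by the sketch below, in which a crowd task is wrapped so that the rest of a program can call it like any other subroutine. The human() helper is hypothetical; a real version would submit the prompt to a piecework platform and wait for a worker's answer.

```python
# Sketch of the "programming humans" idea: wrapping a crowd task so that it
# can be invoked like an ordinary function. human() is a hypothetical helper.

def human(prompt_template):
    def call(*args):
        prompt = prompt_template.format(*args)
        raise NotImplementedError(f"Would post to a crowd platform: {prompt}")
    return call

# The rest of a program can then treat human judgement as just another call.
summarise = human("Summarise this passage in one sentence: {0}")
translate = human("Translate this sentence into French: {0}")
```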
Eric Horvitz, a researcher at Microsoft’s research labs in Redmond, Washington, has considered how such software could be put to use. He imagines a future in which algorithms co-ordinate an army of human workers, physical sensors and conventional computers. In the event of a child going missing, for example, an algorithm might assign some volunteers to search duties and ask others to examine CCTV footage for sightings. The system would also trawl local news reports for similar cases. These elements would be combined to create a cyborg detective.
This sounds terribly futuristic, and rather different to the pen-and-paper human computation of the 19th century. But David Alan Grier, a historian of computing at George Washington University in Washington, DC, thinks that the architects of the new systems could learn a lot by studying the old ones. He points out that Charles Babbage, the designer of an early mechanical computer, gave much thought to reducing the errors that human computers made. Babbage realised that duplicating tasks and comparing the results was not enough, because different workers tended to make the same mistakes. A better solution was to find different ways to perform the same calculation. If two methods produce the same answer, the result is much less likely to be flawed, Babbage reasoned.
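Babbage's rule translates directly into modern practice: compute the same quantity by two genuinely different routes and accept it only when they agree. The toy example below (the arithmetic is ours, not Babbage's) checks a brute-force summation against a closed-form identity.

```python
# A small illustration of Babbage's cross-check: compute the same quantity
# by two different methods and accept it only if the answers agree.

def sum_of_cubes_directly(n):
    return sum(k ** 3 for k in range(1, n + 1))

def sum_of_cubes_by_formula(n):
    # Identity: 1^3 + 2^3 + ... + n^3 = (n(n+1)/2)^2
    return (n * (n + 1) // 2) ** 2

n = 100
a, b = sum_of_cubes_directly(n), sum_of_cubes_by_formula(n)
assert a == b, "methods disagree - at least one is wrong"
print(a)  # 25502500
```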
There are many more such useful tips in the historical record, says Dr Grier. Human-computing pioneers, for example, also wrote a great deal about how best to break a complex calculation into sub-tasks that are completely independent of one another. “There are all sorts of hints in the old literature about what’s useful,” he says. He is often invited to human-computing conferences, at which he likes to chide researchers for overlooking the lessons of this forgotten but intriguing early chapter of computing history.