In The Guardian, Olivia Solon reports on the pervasive use of “pseudo-AI” at Silicon Valley companies large and small: taking the work of humans, often low-paid contract workers, and passing it off as the work of AI. Here’s an excerpt from the piece:
In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself.
In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence.
Alison Darcy, a psychologist and founder of Woebot, a mental health support chatbot, describes this as the “Wizard of Oz design technique”.
“You simulate what the ultimate experience of something is going to be. And a lot of time when it comes to AI, there is a person behind the curtain rather than an algorithm,” she said, adding that building a good AI system required a “ton of data” and that sometimes designers wanted to know if there was sufficient demand for a service before making the investment.
This approach was not appropriate in the case of a psychological support service like Woebot, she said.
“As psychologists we are guided by a code of ethics. Not deceiving people is very clearly one of those ethical principles.”
Research has shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one’s mental health.
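To make the “human calibration” step concrete: the article doesn’t show what Scale’s annotation format actually looks like, but the labeling work it describes typically produces records along the lines of the hypothetical sketch below, where a worker marks each object’s class and position in a camera frame. All the names and fields here are illustrative assumptions, not Scale’s real schema.

from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    label: str    # e.g. "car", "pedestrian", "cyclist"
    x: int        # left edge of the box, in pixels
    y: int        # top edge of the box, in pixels
    width: int
    height: int

@dataclass
class LabeledFrame:
    frame_id: str      # which camera/sensor frame was annotated
    annotator_id: str  # the human "Scaler" who drew the boxes
    boxes: list[BoundingBox] = field(default_factory=list)

# One frame's worth of human labeling: a worker looks at the image
# and records what is where, producing training data for the model.
frame = LabeledFrame(
    frame_id="cam-front-000123",
    annotator_id="scaler-42",
    boxes=[
        BoundingBox("car", x=310, y=205, width=180, height=95),
        BoundingBox("pedestrian", x=90, y=180, width=40, height=110),
        BoundingBox("cyclist", x=540, y=190, width=70, height=120),
    ],
)

print(f"{frame.frame_id}: {len(frame.boxes)} objects labeled")

With enough frames like this, a supervised object detector can be trained to predict the boxes itself, which is the point at which, as the excerpt puts it, “the AI will learn to recognise these objects itself.”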