Does the generation of AI have to be so grim?

How many humans does it take to make tech seem human? Millions.

Source: Inside the AI Factory: the humans that make tech seem human – The Verge

Back in 2005–2006, I could already see that humans would need to be in the loop, specifically to annotate data and contextualize the quickly growing content and social streams on the web.

I used to say there were three ways to add context to data: just watching what folks do, tagging as you go about your day (Delicious was big back then), and what I called ‘librarian’ duty – actually having folks annotate data to provide context.

Benign benefits?
Fast forward to 2016: I was working with a company that used AI to annotate medical notes. I started thinking about annotation in general again and wondered if there was some way we could employ folks to annotate. What's more, realizing that so much annotation really just needs a human, regardless of education level, I envisioned a benign system where the annotation itself was educational and beneficial to the annotator.

I kept thinking of places like West Virginia, where society shifted deeply as the coal industry fell apart. What if we could not only tap into all these unemployed workers, who, being human and all, had more than enough skill to annotate, but also, through the annotation process, educate them and give them skills to help them into their next, non-coal job?

Then reality catches up
So it goes without saying that this article in The Verge struck me hard.

Of course, if the tech-bros could find a way to pay by piece or sweatshop their growth, they would.

And now we have the sad backstory to all our amazing AI of today.

Will the ethics of how AI contextualizations and annotations are generated become a larger topic of discussion? Will companies use the ethics of their annotation practices as a marketing differentiator? Will there be a label or rating for 'ethically annotated AI' ("no humans were harmed in the creation of this AI")?

All I know is that we’re missing an opportunity to actually make annotation useful to the annotator as well. And I think that’s a fantasy world I’ll never see.