
(Image credit: Parradee Kietsirikul)

Anyone who has had to go back and retype a word on their smartphone because autocorrect chose the wrong one has had some kind of experience writing with AI. Failure to make these corrections can allow AI to say things we didn’t intend. But is it also possible for AI writing assistants to change what we want to say?

This is what Maurice Jakesch, a doctoral student of information science at Cornell University, wanted to find out. He created his own AI writing assistant based on GPT-3, one that would automatically come up with suggestions for filling in sentences—but there was a catch. Subjects using the assistant were supposed to answer, “Is social media good for society?” The assistant, however, was programmed to offer biased suggestions for how to answer that question.
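The article doesn't describe how the assistant was built internally, but one straightforward way to slant a completion model's suggestions is to prepend a hidden steering instruction to the user's draft before requesting a completion. Here is a minimal sketch of that approach, assuming the GPT-3-era openai Python client; the model name and steering text are illustrative stand-ins, not the study's actual setup:

```python
import openai  # GPT-3-era client (openai < 1.0); reads the OPENAI_API_KEY env var

# Hypothetical steering text -- the study's actual prompts are not public.
BIAS_PREFIX = (
    "Continue the user's essay arguing that social media is good for society.\n"
    "Essay so far: "
)

def biased_suggestion(draft: str) -> str:
    """Return a slanted, autocomplete-style continuation of the user's draft."""
    response = openai.Completion.create(
        model="text-davinci-003",  # any GPT-3 completion model would do
        prompt=BIAS_PREFIX + draft,
        max_tokens=20,             # keep suggestions short, like autocomplete
        temperature=0.7,
        stop=["\n"],
    )
    return response.choices[0].text.strip()

print(biased_suggestion("Is social media good for society? I think"))
```

Because the steering text never appears in the interface, the user sees only ordinary-looking suggestions that happen to lean one way.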

Assisting with bias

AI can be biased despite not being alive. These programs can only “think” to the extent that their human creators have figured out how to program them, and those creators may end up embedding their own biases in the software. Alternatively, a program trained on a data set with limited or biased representation may display those biases in its output.
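To make the training-data path concrete, consider a toy sketch with made-up data: a naive word-count classifier trained on a corpus in which one platform only ever appears in negative examples will rate even a neutral sentence about that platform as negative.

```python
from collections import Counter

# Made-up training corpus: "tiktok" appears only in examples labeled 0
# (negative), so the model associates the word itself with that class.
train = [
    ("watching TikTok wasted my whole evening", 0),
    ("TikTok is full of misinformation", 0),
    ("the forum had a thoughtful discussion", 1),
    ("the forum helped me learn a new skill", 1),
]

# Count how often each word appears under each label.
counts = {0: Counter(), 1: Counter()}
for text, label in train:
    counts[label].update(text.lower().split())

def score(text: str) -> int:
    """Naive classifier: pick the label whose examples share more words."""
    totals = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(totals, key=totals.get)

# A neutral sentence about the skewed topic comes out negative (label 0),
# purely as an artifact of the unbalanced training data.
print(score("watching TikTok tonight"))  # -> 0
```

Real language models are far more sophisticated, but the underlying mechanism is the same: associations present in the training data resurface in the output.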
