The Role of Machine Learning in Culture
Machine learning is changing our culture. Try this text-altering tool and see for yourself.
Most of us benefit from machine learning every day without giving it much thought. Few of us, however, have stopped to consider the potentially harmful ways in which this technology may shape our culture.
Human language is full of ambiguity and double meanings. Consider, for example, the possible meanings of the phrase: “I went to the project class.” Without context, it is a vague sentence.
Computer scientists and linguists have spent decades trying to program computers to understand the intricacies of human language. And in certain ways, computers are rapidly approaching the human ability to understand and produce text.
Predictive text and autocomplete features on our devices change the way we think by suggesting some words rather than others. Through these subtle, everyday interactions, machine learning influences our culture. Are we ready for that?
I created an online interactive work for the Kyogle Writers’ Festival that lets you playfully explore this technology.
What is natural language processing?
Using everyday language to interact with a computer is called “natural language processing.” When we talk to Siri or Alexa, or type words into a browser and see the rest of our sentence predicted, we are dealing with natural language processing.
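To make the idea of predictive text concrete, here is a minimal sketch of next-word suggestion using simple bigram counts. Real systems use far more sophisticated neural models; the corpus and function names below are invented for illustration only.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(following, word):
    """Suggest the most frequent follower of `word`, if any."""
    candidates = following.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# A toy "training corpus" -- real models learn from billions of words.
corpus = "i went to the shop and then i went to the park"
model = train_bigrams(corpus)
print(suggest(model, "went"))  # prints "to": it follows "went" both times
```

Even this trivial model shows the dynamic the article describes: whatever patterns dominate the training text become the suggestions the user sees.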
This has only become possible thanks to dramatic advances in natural language processing over the past decade, driven by sophisticated machine learning algorithms trained on very large data sets (usually billions of words).
Last year, the potential of this technology was revealed with the release of Generative Pre-trained Transformer 3 (GPT-3), which set a new standard for what computers can do with language.
GPT-3 can take just a few words or phrases and produce whole documents of “meaningful” language, by capturing the contextual relationships between words in a sentence. It builds on a line of machine learning language models, including the widely adopted “BERT” and “ELMo.”
How does this technology affect culture?
Every language model produced by machine learning has a fundamental problem: it generally learns everything it knows from data sources such as Wikipedia and Twitter.
In effect, machine learning takes data from the past, “learns” from it to produce a model, and uses that model to perform future tasks. But in the process, a model may absorb a distorted or problematic worldview from its training data.
If the training data is biased, this bias is encoded and reinforced in the model rather than challenged. For example, a model may associate certain identity groups or races with positive words, and others with negative words.
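How a skew in the training text becomes a skew in the model can be sketched with a toy co-occurrence count. The six-sentence “corpus” below is invented and deliberately imbalanced; real models learn from billions of words, but the principle is the same: the model simply counts what the text gives it, with no way to know the skew is an artefact of the data.

```python
from collections import Counter
from itertools import combinations

# A tiny, deliberately skewed "training corpus" (hypothetical data).
sentences = [
    "he is a doctor", "he is a doctor", "she is a doctor",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

# Count how often each pair of words appears in the same sentence.
cooc = Counter()
for s in sentences:
    for a, b in combinations(s.split(), 2):
        cooc[frozenset((a, b))] += 1

def assoc(w1, w2):
    """Strength of association between two words in the corpus."""
    return cooc[frozenset((w1, w2))]

# The imbalance in the text becomes an imbalance in the associations.
print(assoc("he", "doctor"), assoc("she", "doctor"))  # prints: 2 1
print(assoc("she", "nurse"), assoc("he", "nurse"))    # prints: 2 1
```

Nothing in the counting step is “biased” on its own; the bias enters entirely through the data, which is exactly the problem the article describes.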
As described in the recent documentary “Coded Bias,” this can lead to serious exclusion and inequality.
Everything you ever said
I created this interactive work to let people gain a playful insight into how computers understand language. It is called “Everything You Ever Said” (EYES), a nod to the way natural language models draw on a wide variety of data sources for training.
EYES lets you enter any text (under 2,000 characters), then “subtract” one concept and “add” another. In other words, it lets you use a computer to change the meaning of a piece of text. You can try it yourself.
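As a rough sketch of what “subtracting” and “adding” concepts might look like under the hood, the toy Python below shifts each word’s vector by the difference between two concept vectors and snaps the result to the nearest vocabulary word. The two-dimensional vectors and vocabulary are made up for illustration; EYES itself works with real GloVe vectors of around 300 dimensions.

```python
import math

# Made-up 2-D "word vectors" for a tiny vocabulary (illustration only).
vectors = {
    "happy": (0.8, 0.2), "sad":   (0.2, 0.2),
    "smile": (0.8, 0.6), "frown": (0.2, 0.6),
}

def nearest(vec):
    """Vocabulary word whose vector is closest to `vec`."""
    return min(vectors, key=lambda w: math.dist(vectors[w], vec))

def shift_text(text, subtract, add):
    """Move every known word away from one concept and towards another."""
    delta = tuple(vectors[add][i] - vectors[subtract][i] for i in range(2))
    out = []
    for word in text.split():
        if word in vectors:
            shifted = tuple(vectors[word][i] + delta[i] for i in range(2))
            out.append(nearest(shifted))
        else:
            out.append(word)  # leave unknown words untouched
    return " ".join(out)

print(shift_text("a happy smile", subtract="happy", add="sad"))
# prints: "a sad frown"
```

In this toy space the “happy → sad” shift carries “smile” to “frown,” which is the kind of wholesale change of meaning EYES performs on your text.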
Image: Screenshot of the natural language processing tool. EYES can add and subtract concepts from the text you enter, based on the understanding of English it derives from its training data.
EYES uses GloVe to alter your text by working through a series of analogies, where an “analogy” is a comparison between one thing and another. For example, if I ask you, “Man is to king as woman is to what?” you might answer “queen.” That is an easy one.
But I could ask a harder question: “Rose is to thorn as love is to what?” Depending on your interpretation of the language, there are several possible answers. Asked these analogies, GloVe produces the responses “queen” and “betrayal,” respectively.
GloVe represents each word in English as a vector in a multidimensional space (of about 300 dimensions). Representing words this way allows it to perform calculations with them, adding and subtracting words like numbers.
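Analogy solving with word vectors can be sketched in a few lines. The 2-D vectors below are invented so the arithmetic is visible at a glance; real GloVe vectors have around 300 dimensions learned from text, but the “b − a + c” recipe is the same.

```python
import math

# Hand-made 2-D "word vectors" (illustrative values, not real GloVe).
vectors = {
    "king":  (0.9, 0.9),
    "queen": (0.9, 0.1),
    "man":   (0.5, 0.9),
    "woman": (0.5, 0.1),
    "apple": (0.1, 0.5),
}

def nearest(vec, exclude=()):
    """Vocabulary word closest to `vec`, ignoring the query words."""
    return min(
        (w for w in vectors if w not in exclude),
        key=lambda w: math.dist(vectors[w], vec),
    )

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    vec = tuple(vectors[b][i] - vectors[a][i] + vectors[c][i] for i in range(2))
    return nearest(vec, exclude={a, b, c})

print(analogy("man", "king", "woman"))  # prints: "queen"
```

Here king − man + woman lands exactly on the “queen” vector; with real embeddings the result only lands *near* the answer, which is why the nearest-neighbour step matters.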
Cyborg culture is already here.
The problem with machine learning is that the connections between concepts are hidden inside a black box: we cannot see or touch them. Creating more transparency in machine learning models is the focus of much current research.
The purpose of EYES is to probe these connections playfully, so you can build an intuition for how machine learning models view the world.
Some of its analogies may surprise you with their aptness, while others may baffle you. Yet every association is inferred from a huge collection of billions of words written by ordinary people.
Models like GPT-3, which learned from similar data sources, now influence how language is used. A news feed made up of machine-written text is no longer science fiction; the technology already exists.
And the cultural footprint of machine learning models only seems set to grow.