Algorithms have transformed our society, bringing with them a range of benefits and challenges. They can make decisions faster, process more data, and may produce predictions that are more reliable and accurate than a human's. These benefits have reshaped the way we live, and algorithms have become so ubiquitous that many people do not realize just how omnipresent they are in our daily lives.
As popularly understood, the term "algorithm" generally refers to either artificial intelligence, a subfield of computer science concerned with intelligent behavior, or machine learning, a subfield of artificial intelligence that focuses on computer programs able to learn from data. For example, Netflix uses algorithms to predict which television shows and movies you may want to watch based on your past viewing habits. Amazon's "Alexa" device uses an algorithm to interpret human speech and respond appropriately.
The more advanced algorithms work by taking a body of data and using it to make predictions according to the programmed goals of their creators. For example, "nearest neighbor" algorithms interpret a new input by comparing it to similar data from the past and then predicting whether the new input matches the others. This is how, for example, the United States Postal Service uses computers to read handwritten addresses on envelopes. The algorithm compares a letter in new handwriting to examples of handwritten letters of a similar shape, then predicts what the letter in question likely is. This enables a computer to read and sort an envelope faster than a human could.
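To make this concrete, the short Python sketch below classifies the small set of handwritten-digit images bundled with the scikit-learn library using a nearest-neighbor model. It illustrates only the comparison-to-past-examples idea described above; the Postal Service's production system is far more sophisticated.

```python
# A minimal nearest-neighbor sketch using scikit-learn's bundled
# handwritten-digit images. It illustrates classifying a new character
# by comparing it to labeled past examples; it is not the Postal
# Service's actual system.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# "Nearest neighbor": label a new image by the majority vote of the
# k most similar training images (similarity here is pixel distance).
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

print("accuracy on held-out digits:", model.score(X_test, y_test))
print("prediction for one new image:", model.predict(X_test[:1])[0])
```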
Algorithms are created by humans and operate using information obtained from human society. As such, they are not value-free or free from bias. Instead, they reflect the biases of their creators and the biases in the data they use to make predictions.
It should first be noted that algorithms are sometimes directed to discriminate explicitly. For example, Facebook created a mechanism by which advertisers could target their ads according to the demographics of individuals, including their race and sex. For this, Facebook faced lawsuits from multiple organizations, ultimately settling them in 2019, though additional monitoring has led to renewed litigation.
One way algorithms can implicitly discriminate is through reliance on data sets that are themselves tainted by discrimination. For example, Amazon created an algorithm to predict who would make the best employees and then screen applicants on that basis. In creating the algorithm, it used data from its workforce over the previous ten years. The tech industry, however, has a long history of sexism, so there were comparatively few women in Amazon's workforce for the algorithm to learn from. As a result, the algorithm predicted that women would not make good employees and ended up screening out female applicants based on its historically tainted data set. With no human intervention involved, the algorithm perpetuated existing patterns of discrimination.
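The mechanism can be demonstrated with a small, fully synthetic Python sketch: a model trained on fabricated hiring records in which equally skilled women were historically hired less often learns to recommend women less often, even when gender itself is not an input. This is an illustration of the dynamic only, not Amazon's actual tool.

```python
# A synthetic illustration (not Amazon's actual system) of how a model
# trained on historically biased hiring outcomes reproduces that bias.
# All data here is fabricated for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: skill is distributed identically by gender...
is_woman = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# ...but the historical "hired" label reflects past discrimination:
# equally skilled women were hired far less often.
hired = (skill + rng.normal(0, 0.5, n) - 1.5 * is_woman) > 0

# A proxy feature correlated with gender (e.g., resume wording) lets
# the model learn gender even though gender itself is never an input.
proxy = is_woman + rng.normal(0, 0.3, n)

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
print("predicted hire rate, men:  ", pred[is_woman == 0].mean())
print("predicted hire rate, women:", pred[is_woman == 1].mean())
# Despite identical skill distributions, the model recommends women
# far less often -- it has learned the historical pattern.
```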
Algorithms can unintentionally mischaracterize or misinterpret data even when the underlying data sets are reliable. Google Translate is an algorithm that translates text from one language to another, using large bodies of scanned text to choose the most likely translation. Not all languages function in the same way, however, which can force the machine to predict the most closely analogous translation. For example, Turkish has only one third-person pronoun, "o," as opposed to the "he," "she," and "it" used in English. Because there is no clear analogue between the two languages, Google scans texts in English and Turkish to select the most statistically likely pronoun, and the result can reflect gender stereotypes embedded in those texts: the gender-neutral "o" has been rendered as "he" in "he is a doctor" but as "she" in "she is a nurse."
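As a rough illustration of this mechanism (and emphatically not Google's actual system), the toy Python sketch below picks an English pronoun purely by how often each pronoun appears with a given noun in a fabricated corpus, reproducing the stereotype from the counts alone.

```python
# A toy sketch (not Google's actual system) of how a frequency-based
# translator might pick an English pronoun for the gender-neutral
# Turkish "o": by counting which pronoun most often accompanies the
# noun in its (here, entirely fabricated) English training text.
from collections import Counter

# Fabricated corpus statistics standing in for counts over real text.
corpus_counts = {
    ("he", "doctor"): 900, ("she", "doctor"): 300,
    ("he", "nurse"): 150,  ("she", "nurse"): 850,
}

def choose_pronoun(noun: str) -> str:
    """Pick the pronoun seen most often with this noun in the corpus."""
    counts = Counter({p: c for (p, n), c in corpus_counts.items() if n == noun})
    return counts.most_common(1)[0][0]

# "O bir doktor" / "O bir hemşire" use the same Turkish pronoun, but
# the statistics push the translation toward gendered stereotypes.
print("o bir doktor  ->", choose_pronoun("doctor"), "is a doctor")
print("o bir hemşire ->", choose_pronoun("nurse"), "is a nurse")
```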
Algorithms are already in use by many government agencies throughout Connecticut to perform a variety of functions, including hiring and other decision-making processes. Despite their use and utility, there is little to no statewide oversight of algorithms during procurement, implementation, or monitoring.