Twitter, Facebook, and YouTube are well-known for the misinformation they spread – not as content providers, but as platforms. But Amazon? How could the superstore behemoth sow the same problematic information? For the answer, one only has to look at its algorithm.
Amazon, like the aforementioned social media, is driven by algorithms. For social media, algorithms are optimized to keep you on the website, presumably absorbing the advertising along with pictures, articles, and arguments. Amazon’s algorithm is designed to sell. For all of these companies, the key is our affinity for similarity – birds of a feather not only flock together socially, but they purchase together.
Amazon provides many recommendations: “you might like,” “often purchased with this product,” or “other people have purchased.” And like the social media giants, Amazon keeps track of your clicks as well as your purchases and the information you voluntarily provide to “improve your experience” when you log in.
“Search engines are modern-day gatekeepers and curators of information. Their black-box algorithm can shape user behaviour, alter beliefs and even affect voting behaviour either by impeding or facilitating the flow of certain kinds of information.”
Searching the Internet is increasingly the way we navigate the world. Half of us look to Dr. Google for medical information. The authors [see source at the bottom] wanted to see what Amazon’s search engine offered when seeded with specific inquiries. They used “naïve” Amazon accounts – those without a history – as well as accounts where they built up a history through their clicks. The seeds for those searches were constructed from Google Trends, a dataset that captures which phrases are most popular at any given time. They developed 48 search queries based upon the ten most popular vaccine topics – ranging from what we might all agree are neutral, like chickenpox or influenza vaccine, to the more polarized, like anti-vaccination or Andrew Wakefield.
Over 15 days, they ran those searches every day through both the naïve Amazon accounts and the accounts with associated histories, harvesting Amazon’s recommendations. As it turns out, Amazon pays attention to our keystrokes; clicking on an item, adding it to the cart, and purchasing it all affect the algorithm. So the researchers varied the searches to see whether putting an item in a cart influenced recommendations differently than simply clicking on an item and moving on.
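The experimental design amounts to a grid of runs: every combination of day, account type, user action, and seed query. The sketch below is illustrative only – the account and action names are placeholders, and only three of the 48 queries are shown:

```python
from itertools import product

# Illustrative audit grid; labels are stand-ins, not the paper's exact setup.
ACCOUNTS = ["naive", "click_history", "cart_history"]
ACTIONS = ["search_only", "click_item", "add_to_cart"]
QUERIES = ["influenza vaccine", "anti-vaccination", "andrew wakefield"]

def build_audit_runs(days):
    """Enumerate every (day, account, action, query) combination to execute."""
    return [
        {"day": day, "account": acct, "action": act, "query": q}
        for day, acct, act, q in product(range(1, days + 1), ACCOUNTS, ACTIONS, QUERIES)
    ]

runs = build_audit_runs(days=15)
print(len(runs))  # 15 days x 3 accounts x 3 actions x 3 queries = 405 runs
```

Each run would then perform the query from the given account, carry out the action, and record the recommendations Amazon returns.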
Perhaps the researchers' greatest challenge was in categorizing items that were misleading, uninformed, or wrong. As they report, “labeling a product for misinformation is hard and time-consuming.” While books were the primary objects returned by a search, other items appeared, including fashion (presumably with printed thoughts) and supplements designed to protect your health.
There was no computerized, scalable way to classify these objects. The nearly 5,000 items returned by their searches were labeled manually as debunking, neutral, or promoting vaccine myths – the principal author labeling 78%, and paid workers on Amazon’s Mechanical Turk, an online crowdsourcing platform, the remaining 22%. Another individual reviewed the choices, and only 2% required discussion. It takes a great deal of hand labor to classify the informational value of objects.
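The review step – a second person checking labels and flagging the small share that needed discussion – is essentially a reconciliation pass. A simplified sketch, with invented annotations rather than the authors’ data:

```python
LABELS = {"debunking", "neutral", "promoting"}

def reconcile(label_a, label_b):
    """Accept a label when annotator and reviewer agree; otherwise flag it."""
    if label_a not in LABELS or label_b not in LABELS:
        raise ValueError("unknown label")
    return label_a if label_a == label_b else "needs_discussion"

# Invented example: three items, each labeled by two people.
annotations = [("neutral", "neutral"), ("promoting", "debunking"), ("debunking", "debunking")]
resolved = [reconcile(a, b) for a, b in annotations]
disagreement_rate = resolved.count("needs_discussion") / len(resolved)
```

In the study only about 2% of items fell into that “needs discussion” bucket, which is what made a mostly single-annotator process workable.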
While this information is buried within the paper’s methodology, it holds a key point for us to recognize. There are no simple computerized means of determining the content of Amazon’s inanimate products. Amazon sells 12 million products; when you add their outside “partners” in Amazon Marketplace, the number jumps to 350 million. Similarly, there are about 6,000 tweets per second and 4,000 photographs per second uploaded to Facebook, and that doesn’t even begin to count written postings. These platforms, including Amazon, are not technologically capable of overseeing their content, even if they wanted to or we required it.
When you clicked on objects that the researchers had classified as misinforming, Amazon served up more of the same. When you clicked on neutral or debunking topics, the preponderance of what you saw was neutral or debunking. It might be likened to popularity in high school. Everyone wanted to be in the cool group, whether athletes, science whizzes, or the best at Dungeons and Dragons. All these algorithms serve up what you consider to be the cool group, along with a few other offerings in case you are not well targeted yet.
“The finding suggests that Amazon nudges users more towards misinformation once a user shows interest in a misinformative product by clicking on it but hasn’t shown any intention of purchasing it.”
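This “more of the same” behavior is exactly what a simple item-to-item recommender produces: items that show up in the same sessions get recommended together, so one click on a myth-promoting book surfaces its neighbors. A toy version, with invented session data (not Amazon’s actual algorithm):

```python
from collections import Counter

# Invented click sessions; item names are illustrative labels only.
sessions = [
    ["myth_book_1", "myth_book_2"],
    ["myth_book_1", "myth_book_3"],
    ["neutral_book_1", "neutral_book_2"],
]

def recommend(item, sessions, k=2):
    """Rank other items by how often they co-occur with `item` in a session."""
    co_counts = Counter()
    for s in sessions:
        if item in s:
            co_counts.update(x for x in s if x != item)
    return [x for x, _ in co_counts.most_common(k)]

# Clicking one myth item surfaces the other myth items, never the neutral ones.
print(recommend("myth_book_1", sessions))
```

No intent is needed for this to amplify misinformation; co-occurrence alone clusters like with like.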
What the researchers found was that Amazon’s algorithm is designed to sell you stuff. The greatest number and most direct recommendations were made before you committed to adding anything to the cart. Like the automated salesperson that it is, the algorithm was attempting to close the deal. And it sealed the deal by showing you more and more objects chosen by individuals facing the same purchasing decision. It is very similar to the “nudges” employed in the UK, where utility bills were paid at a higher rate when they included the fact that most of your neighbors had already paid theirs.
The algorithm has no intent to deceive or offer up misinformation. Its goal is simply to sell – it is amoral, just like that UK nudge. This, too, is an important realization. The same holds for social media; it intends to sell your attention to advertisers. You make the initial choice, intentional or unintentional, to enter the rabbit hole. The algorithm simply pulls you deeper and deeper, searching for your similars.
The researchers end with recommendations to improve the situation. One is creating lists of misinformative authors – that smacks of blacklisting. Another is a “Verified by Amazon” logo, which, given the scale and liability, is unrealistic. The only suggestion I thought useful was not promoting sponsored recommendations that misinform; while that is feasible, who will be the judge?
As it turns out, Terminator's plot of the machines taking over is not so far-fetched. Skynet is not a science-fiction device involving artificial intelligence. It is already amongst us. Not built by Cyberdyne Systems, but by Google, Amazon, Facebook, and Twitter. And we do not fight against it; we embrace it.
Source: Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Information, arXiv (a preprint server) DOI: 10.1145/3411764.3445250