By Teddy Tawil
In his 1961 farewell address, President Eisenhower warned that public policy could become the captive of a “scientific-technological elite” operating without the public’s awareness. Now, there is a tech giant with the capability to make his alarming, Orwellian vision a reality. And it isn’t necessarily Facebook.
Psychologist Robert Epstein of the American Institute for Behavioral Research and Technology has spent almost a decade researching how search engines can be used to manipulate voter preferences. This manipulation is not limited to rogue, nefarious employees tampering with search results to promote their preferred candidates; it can be as subtle as foreseeing a skewed outcome and doing nothing to counter it. In randomized, controlled, peer-reviewed experiments that have been replicated four times with thousands of participants, Epstein and his colleagues have found that politically biased search rankings can shift the voting preferences of undecided voters by 20 percent or more, and that the bias can be masked so that people are unaware of it. The effect even held up in a trial with 2,000 real undecided voters during the 2014 Indian Lok Sabha election.
Epstein says these huge shifts suggest that search engine manipulation is “one of the most powerful forms of influence ever discovered in the behavioral sciences.” Crucially, Epstein calculates that these shifts can be large enough to flip any election with a projected win margin under 2.9 percent. Worldwide, more than 25 percent of national elections are won by margins up to 3 percent. The implication that search algorithms could have inadvertently determined the results of a quarter of the world’s elections is chilling.
Search manipulation is such a powerful tool because users are inclined to click on and trust top search results. An analysis of about 300 million clicks on one search engine found that 91.5 percent of clicks land on the first page of results, with 32.5 percent going to the first result and 17.6 percent to the second. Influencing the tone of coverage in these precious top spots could thus shape the perceptions of a large number of voters.
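The arithmetic behind that concentration of attention can be sketched in a toy model. The click shares for the first two ranks come from the figures above; the per-rank shares for ranks 3 through 10 are hypothetical values chosen only so that the first page sums to the reported 91.5 percent:

```python
# Toy model: how much of total click traffic a candidate's coverage captures
# depends almost entirely on which ranks that coverage occupies.
# Shares for ranks 1-2 are from the cited analysis (32.5% and 17.6%);
# the remaining ranks are HYPOTHETICAL, decaying values summing to ~91.5%
# for the first page.

CLICK_SHARE = [0.325, 0.176, 0.11, 0.08, 0.06, 0.05, 0.04, 0.03, 0.025, 0.019]

def exposure(favorable_ranks):
    """Fraction of first-page clicks landing on favorable coverage,
    given the (1-indexed) ranks that coverage occupies."""
    return sum(CLICK_SHARE[r - 1] for r in favorable_ranks)

# The same three favorable articles, placed at the top vs. bottom of page one:
print(f"ranks 1-3 capture {exposure([1, 2, 3]):.1%} of clicks")
print(f"ranks 8-10 capture {exposure([8, 9, 10]):.1%} of clicks")
```

Under these illustrative numbers, moving three favorable articles from the bottom of page one to the top multiplies their audience roughly eightfold, without adding or removing any content.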
In addition, Google users develop mental associations that lead them to trust top pages more. The fact that top results are often accurate and impartial (e.g. Googling “what is the capital of Serbia?” yields the result “Belgrade”) causes users to associate high-ranking pages with having truthful information. This effect lends additional credibility to top results related to political issues and candidates, even when they’re based on opinion rather than mere fact.
There are five reasons why search engine manipulation is a particularly dangerous form of election interference:
First, search engine results have a veneer of objectivity that could make covert bias more dangerous. We tend to believe that search results are generated by politically disinterested algorithms, which disarms us against potential partiality in those results. Being unaware of the bias makes users think that they are coming to conclusions on their own, leaving them more persuadable and vulnerable to subconscious manipulation.
Second, the effect is difficult to detect and nearly impossible to counteract. Google’s search algorithm is a proprietary secret, making it very difficult to definitively prove political bias in search results. In addition, Google’s huge market power in Internet search leaves those disadvantaged by its potential bias with very few ways to counteract it.
Third, the effect can become self-reinforcing, multiplying its impact. Search results privileging positive coverage of a candidate will likely cause more positive searches and engagement with content that praises that candidate, creating a feedback loop that promotes complimentary articles and suppresses criticism.
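That feedback loop can be illustrated with a minimal simulation, under the simplifying (and entirely hypothetical) assumption that a result's rank score grows in proportion to the click share it just received:

```python
# Toy model of the feedback loop: two articles about a candidate, one
# positive and one negative, start with nearly equal rank scores. Each
# round, clicks are split in proportion to rank score, and each article's
# score then grows by a factor tied to its click share, so a small initial
# edge compounds. All numbers here are illustrative, not measured.

def simulate(pos_score=1.05, neg_score=1.00, rounds=10, feedback=0.5):
    """Return the positive article's share of total rank score
    after `rounds` iterations of the click-then-rerank loop."""
    for _ in range(rounds):
        total = pos_score + neg_score
        pos_share = pos_score / total   # click share ~ relative rank score
        neg_share = neg_score / total
        pos_score *= 1 + feedback * pos_share
        neg_score *= 1 + feedback * neg_share
    return pos_score / (pos_score + neg_score)

# A 5 percent head start grows with every round of clicks:
print(f"initial share: {simulate(rounds=0):.1%}")
print(f"after 10 rounds: {simulate(rounds=10):.1%}")
```

In this sketch the positive article's advantage strictly increases each round: a perfectly symmetric start stays at 50/50, but any initial tilt, however small, keeps widening.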
Fourth, as use of the Internet to supplement or replace traditional news sources such as newspapers and television increases over time, so will the impact of changes in search results.
Fifth, unlike other forms of influence, even being aware of it may not be enough. As Epstein explains:
In our national study in the United States, 36 percent of people who were unaware of the rankings bias shifted toward the candidate we chose for them, but 45 percent of those who were aware of the bias also shifted. It’s as if the bias was serving as a form of social proof; the search engine clearly prefers one candidate, so that candidate must be the best.
The appearance of legitimacy that a webpage gets from appearing as a top result is so powerful that combating it is a formidable task. In another study, using different and more sophisticated alerts, Epstein was able to suppress the effect by about two-thirds, but canceling it out entirely has proven elusive.
But Google would never actually do this, right? Indeed, its executives argue that it would be “corporate suicide,” a PR fiasco they would rather avoid. To their credit, they have pledged to remove autocomplete suggestions that “could be interpreted as claims for or against any candidate or political party” during the 2020 election.
Epstein is most concerned about a subtler but just as sinister way Google could put its thumb on the electoral scale. As Jonathan Bright, a research fellow at the Oxford Internet Institute, explains, “it’s not really possible to have a completely neutral algorithm.” Epstein worries that Google could shift some of the weights and details of its algorithm in a way that is politically neutral on its face but actually ends up favoring certain candidates. In that case, the company could claim that it is not “re-ranking” results to favor any candidate (as it has claimed before) and that any apparent favoritism was merely the result of organic user activity.
This is not out of the realm of plausibility. Google admits to tweaking its algorithm 500 to 600 times a year, and a bombshell Wall Street Journal investigation found that the company meddles with search results far more than it publicly admits. Google emails leaked in 2018 revealed that one employee asked co-workers about using “ephemeral experience” to change users’ views about Trump’s travel ban. And Google’s peer Facebook has advertised its power to flip elections through targeted advertising.
Epstein claims to have uncovered proof that Google is biased against conservatives, but this research is far less widely accepted than his peer-reviewed work. One published study by a team of researchers at Northeastern found a modest left-leaning bias in Google results at the bottom of the page. Other scholars, like UCLA Professor Ramesh Srinivasan, remain unconvinced.
Even if Google is not currently exercising its influence to the extent it could, the power to swing millions of votes should not be taken lightly. The inconvenient truth is that the interventions necessary to address an issue of this magnitude are dramatic.
One novel proposal comes from Harvard Professor Jonathan Zittrain and Yale Professor Jack Balkin. They suggest that the tech companies that serve as Internet intermediaries become “information fiduciaries,” legally obligated to act in their users’ best interests the way doctors and stockbrokers are. Zittrain proposes incentivizing tech companies to assume this role by offering them immunity from lawsuits over their use of personal information in exchange.
For his part, Epstein favors an extreme antitrust intervention: making the database Google uses to generate search results a “public commons.” That way, rivals would have equal access, search would become a competitive market, and politically biased results would be more transparent and avoidable.
In the meantime, continued vigilance is a must. We will need an informed, passionate public to combat the scary prospect of a technocracy run by the Internet’s algorithmic gatekeepers.
Title Image Credit: Fili Wiese/Medium