
Google Stealthily Infuses Political Agenda Into Products to Prevent Trump Reelection, Insiders, Documents Say

26-6-2019 | SGT Report | 1,053 words
 

by Petr Svab, The Epoch Times:



Google is surreptitiously pushing its political agenda onto its users through its products in an effort that one employee described as “preventing the next Trump situation.”


The company has cloaked its political agenda in the veneer of “fairness,” but in reality, it means promoting the company’s political worldview at the expense of others’. It is also raising the voices of those aligned with its worldview at the expense of those who oppose it, or who just aren’t aligned with it enough, according to internal documents obtained by Project Veritas and employees speaking out or caught on hidden camera.



“They are a highly biased political machine that is bent on never letting somebody like Donald Trump come to power again,” said one Google employee, who wished to remain anonymous and spoke on camera with his likeness not shown and his voice disguised.


Google, the world’s largest internet company with some 100,000 employees and more than $130 billion in annual revenue, has long been accused of channeling the politics of its mostly left-leaning workforce into its products, which the company has repeatedly denied.


The latest revelations, however, depict a company that has expended substantial effort in putting its thumb on the political scales after the 2016 election.


“Right after Donald Trump won the election in 2016, the company did a complete 180 in what they thought was important,” the insider said.


Blaming Trump’s success on “hate and misogyny and racism,” the company decided to “fix” it and went on to diverge from its previous values of “self-expression and giving everyone a voice,” he said.


“They’re like … ‘We need to start policing our users, because we don’t want to have an outcome like that, we don’t want to have an outcome like that to happen again.’”


A similar sentiment was expressed by Jen Gennai, head of Google’s Responsible Innovation, who was caught on hidden camera by a Project Veritas reporter several weeks ago.


“We all got screwed over in 2016, again it wasn’t just us, it was, the people got screwed over, the news media got screwed over, like, everybody got screwed over so we’ve rapidly been like, ‘What happened there and how do we prevent it from happening again?’” she said.


Gennai said that she used to work for Google’s Trust and Safety team and that the 2020 presidential election has been “top of mind” for the team. “They’ve been working on it since 2016, to make sure we’re ready for 2020,” she said.


None of this has been disclosed by Google to its users, many of whom still consider it an objective source of information, the insider said.


‘Fairness’


Google believes that by filtering, ranking, aggregating, or generating media through its products, its users are “programmed” to certain worldviews, an internal document suggests.


As it turned out, however, these worldviews are not always those preferred by the company. For example, when one searches on Google for “CEOs,” the results will likely show many pictures of men, because most CEOs are men. But that would be “algorithmic unfairness,” according to an internal document, because “it would reinforce a stereotype about the role of women in leadership positions.”


In some such cases, “it may be desirable to consider how we might help society reach a more fair and equitable state, via either product intervention or broader corporate responsibility efforts,” the document states.


On paper, “algorithmic unfairness” was defined by the company as “unjust or prejudicial treatment that is related to sensitive characteristics such as race, income, sexual orientation, or gender, through algorithmic systems or algorithmically aided decision-making.”


Since every person shares in such characteristics, the definition could make one believe that Google is simply trying to make sure it treats everybody fairly. But Gennai made clear her job was to bring “fairness” only to certain people, based on whether they belong to a group Google deems sufficiently “marginalized.”


“My definition of fairness and bias specifically talks about historically marginalized communities. And that’s who I care about. Communities who are in power and have traditionally been in power are not who I’m solving fairness for,” she was recorded as saying.


‘Re-bias’


The insider described a Google initiative called “ML Fairness”; “ML” stands for machine learning. The existence and purpose of ML Fairness were confirmed by Google software engineer Gaurav Gite, who was recorded by a Project Veritas reporter describing the initiative.


It was set up to develop an artificial intelligence (AI) algorithm that would put in place Google’s idea of “fairness” at scale.


Gennai appeared to confirm that Google is going even further, tuning its algorithms toward a specific political outcome.


“We’re also training our algorithms, like, if 2016 happened again, would we have, would the outcome be different?” she said.


Algorithms are “trained” by being fed with sufficient quantities of properly classified data. If, for example, the AI is provided enough cat images classified as “a cat,” the machine will eventually learn to recognize almost any cat image, even one it hasn’t encountered before, as “a cat.”
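The training process described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration — the two hand-picked “cat” features, the labels, and the simple perceptron update rule stand in for what, in a real system, would be pixel data and a far larger model:

```python
# Minimal sketch of supervised training: a perceptron learns from
# labeled examples, then classifies a sample it has never seen.
# Features and labels below are invented for illustration only.

def train(samples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs; label 1 means 'cat'."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred  # 0 when the prediction is already correct
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(model, x):
    w, b = model
    return "cat" if w[0] * x[0] + w[1] * x[1] + b > 0 else "not a cat"

# Toy features: (ear pointiness, whisker length), label 1 = cat.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
model = train(data)
```

After training, the model generalizes: a new point near the “cat” examples, such as `(0.85, 0.85)`, is classified as a cat even though it was never in the training data — which is the property the article's cat example describes.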


But the insider said Google has been training its “fairness” algorithm to produce results that reflect the company’s political views instead.


“What they’re really saying about fairness is that they have to manipulate their search results so it gives them the political agenda that they want,” he said. “And so they have to re-bias their algorithms.”


In 2008, for instance, Google introduced the “search suggestions” function. Whenever one started to type into the Google search bar, an algorithm would draw upon signals such as the user’s previous searches, searches by users worldwide, sites in Google’s index, and ads in Google’s network to suggest search phrases that start with the text already typed in.
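The mechanics of such a suggestion function can be sketched as a toy example. The phrase list, popularity counts, and ranking rule below are invented for illustration — Google's actual system blends many proprietary signals, as described above:

```python
# Illustrative sketch of prefix-based autocomplete: candidate phrases
# that start with the typed text are ranked by a popularity count.
# The phrases and counts are made up for this example.

def suggest(prefix, phrase_counts, limit=3):
    """Return the most popular phrases starting with the typed prefix."""
    matches = [p for p in phrase_counts if p.startswith(prefix)]
    matches.sort(key=lambda p: phrase_counts[p], reverse=True)
    return matches[:limit]

phrase_counts = {
    "weather today": 500,
    "weather radar": 450,
    "weather tomorrow": 300,
    "web browser": 200,
}
```

Typing `"weather"` here yields the three matching phrases ordered by popularity; a “re-biased” system of the kind the insider alleges would adjust that ranking by criteria other than popularity alone.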


Now, however, the “fairness” algorithm causes the function to, at least sometimes, display results infused with Google’s preferred worldview, the insider said.


When one types in “men can” followed by a space, the suggestions show phrases such as “men can have babies,” “men can get pregnant,” and “men can have periods.”


Read More @ TheEpochTimes.com




