Google's Eric Schmidt Designs Clinton Knowledge Organization

Postby admin » Thu Dec 08, 2016 6:43 am

https://wikileaks.org/podesta-emails/em ... #efmAC9AE0

This email has also been verified by Google DKIM 2048-bit RSA key

Fwd: 2016 thoughts

From: cheryl.mills@gmail.com
To: robbymook@gmail.com, john.podesta@gmail.com, daplouffe@icloud.com
Date: 2014-04-15 17:16
Subject: Fwd: 2016 thoughts

---------- Forwarded message ----------
From: Eric Schmidt <eschmidt@google.com>
Date: Tue, Apr 15, 2014 at 1:56 PM
Subject: 2016 thoughts
To: Cheryl Mills <cheryl.mills@gmail.com>

Cheryl, I have put together my thoughts on the campaign ideas and I have
scheduled some meetings in the next few weeks for veterans of the campaign
to tell me how to make these ideas better. This is simply a draft but do
let me know if this is a helpful process for you all. Thanks !! Eric

*********************************

Notes for a 2016 Democratic Campaign
Eric Schmidt
April 2014

DRAFT DRAFT DRAFT DRAFT

Here are some comments and observations based on what we saw in the 2012
campaign. If we get started soon, we will be in a very strong position to
execute well for 2016.

1. Size, Structure and Timing

Let's assume a total budget of about $1.5 billion, with more than 5,000 paid
employees and million(s) of volunteers. The entire startup ceases
operation four days after November 8, 2016. The structure includes a
Chairman or Chairwoman who is the external face of the campaign and a
President who is the executive in charge of objectives, measurements,
systems and building and managing the organization.

Every day matters, as our end date does not change. Ideally, an official
campaign launches right after the midterm elections, with a preparatory team
assembled now.

2. Location

The campaign headquarters will have about a thousand people, mostly young
and hardworking and enthusiastic. It's important to have a very large
hiring pool (such as Chicago or NYC) from which to choose enthusiastic,
smart, and low-paid permanent employees. DC is a poor choice, as it's full of
distractions and interruptions. Moving the location from DC elsewhere
guarantees visitors have taken the time to travel and to help.

The key is a large population of talented people who are dying to work for
you. Any outer borough of NYC, as well as Philadelphia, Atlanta, or Boston,
would be a good example of a large, blue-state city to base in.

Employees will relocate to participate in the campaign, and will find low
cost temporary housing or live with campaign supporters on a donated basis.
This worked well in Chicago and can work elsewhere.

The computers will be in the cloud and most likely on Amazon Web services
(AWS). All the campaign needs are portable computers, tablets and smart
phones along with credit card readers.

3. The pieces of a Campaign

a) The Field

It's important to have strong field leadership, with autonomy and
empowerment. Operations talent needs to build the offices, set up the
systems, hire the people, and administer what will be about 5,000 people.
Initial modeling will show heavy hiring in the key battleground states.
There is plenty of time to set these functions up and build the human
systems. The field is about organizing people, voter contact, and get out
the vote programs.

For organizing tools, build a simple way to link people and activities as a
workflow and let the field manage the system, all cloud based. Build a
simple organizing tool with a functioning back-end. Avoid deep integration
as the benefits are not worth it. Build on the cloud. Organizing is
really about sharing and linking people, and this tool would measure and
track all of it.
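
The linking-and-measuring idea above can be sketched in a few lines of Python. This is a hypothetical illustration only; the class and method names are invented, not an actual campaign tool:

```python
from collections import defaultdict

class OrganizingTool:
    """Minimal sketch of a cloud-backed organizing tool:
    link volunteers to activities and measure participation."""

    def __init__(self):
        self.links = defaultdict(set)  # activity -> set of volunteer names

    def link(self, volunteer, activity):
        """Record that a volunteer signed up for an activity."""
        self.links[activity].add(volunteer)

    def participation(self, activity):
        """How many people are linked to a given activity."""
        return len(self.links[activity])

    def reach(self):
        """Total distinct volunteers across all activities."""
        return len(set().union(*self.links.values())) if self.links else 0

tool = OrganizingTool()
tool.link("Ana", "canvass-saturday")
tool.link("Ben", "canvass-saturday")
tool.link("Ana", "phone-bank")
print(tool.participation("canvass-saturday"))  # 2
print(tool.reach())                            # 2
```

The point of keeping the back-end this simple is the memo's own advice: avoid deep integration, let the field manage the system, and just measure the sharing and linking.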

There are many other crucial early investments needed in the field:
determining the precise list of battleground states, doing early polling to
confirm initial biases, and maintaining and extending voter protection
programs at the state level.

b) The Voter

Key is the development of a single record for a voter that aggregates all
that is known about them. In 2016 smart phones will be used to identify,
meet, and update profiles on the voter. A dynamic volunteer can easily
speak with a voter and, with their email or other digital handle, get the
voter videos and other answers to areas they care about ("the benefits of
ACA to you," etc.).

The scenario includes a volunteer on a walk list, encountering a potential
voter, updating the records real time and deepening contact with the voter
and the information we have to offer.
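
That scenario implies a single, updatable record per voter. As a minimal sketch (the fields and names here are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class VoterRecord:
    """One aggregated record per voter, updatable by volunteers."""
    name: str
    email: str = ""
    issues: list = field(default_factory=list)       # topics the voter cares about
    contact_log: list = field(default_factory=list)  # history of contacts

    def update_from_contact(self, email=None, issue=None, note=""):
        """A volunteer on a walk list updates the record in real time."""
        if email:
            self.email = email
        if issue and issue not in self.issues:
            self.issues.append(issue)
        if note:
            self.contact_log.append(note)

voter = VoterRecord(name="Pat Doe")
voter.update_from_contact(email="pat@example.com", issue="ACA",
                          note="Door knock 4/15: wants ACA info")
print(voter.issues)  # ['ACA']
```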

c) Digital

A large group of campaign employees will use digital marketing methods to
connect to voters, to offer information, to use social networks to spread
good news, and to raise money. Partners like Blue State Digital will do
much of the fund raising. A key point is to convert BSD and other partners
to pure cloud service offerings to handle the expected crush and load.

d) Media (paid), (earned) and (social), and polling

New tools should be developed to measure reach and impact of paid, earned
and social media. The impact of press coverage should be measurable in
reach and impact, and TV effectiveness measured by attention and other
surveys.

Build tools that measure the rate and spread of stories and rumors, and
model how it works and who has the biggest impact. Tools can tell us about
the origin of stories and the impact of any venue, person or theme.
Connect polling into this in some way.

Find a way to do polling online and not on phones.

e) Analytics and data science and modeling, polling and resource
optimization tools

For each voter, a score is computed ranking probability of the right vote.
Analytics can model demographics, social factors and many other attributes
of the needed voters. Modeling will tell us who we need to turn out
and why, and studies of effectiveness will let us know what approaches work
well. Machine intelligence across the data should identify the most
important factors for turnout and preference.
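
A hand-rolled sketch of such a per-voter score, assuming a simple logistic model; the features and weights below are made up purely for illustration:

```python
import math

def support_score(features, weights, bias=0.0):
    """Logistic score: probability that a voter casts 'the right vote',
    computed from demographic/behavioral features."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features and weights, purely illustrative.
weights = {"voted_2012": 1.2, "party_match": 2.0, "age_under_30": 0.4}
voter = {"voted_2012": 1, "party_match": 1, "age_under_30": 0}

p = support_score(voter, weights, bias=-1.5)
print(round(p, 3))
```

In practice the weights would be fit from the voter file and contact history rather than set by hand, but the output is the same kind of ranking score the memo describes.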

It should be possible to link the voter records in Van with upcoming
databases from companies like Comcast and others for media measurement
purposes.

The analytics tools can be built in house or partnered with a set of
vendors.

f) Core engineering, voter database and contact with voters online

The database of voters (NGP Van) is a fine starting point for voter records
and is maintained by the vendor (and needs to be converted to the cloud).
The code developed for 2012 (Narwhal, etc.) is unlikely to be reused; it
will be replaced by a model where the vendor data is kept in the Van database
and intermediate databases are arranged with additional information for a
voter.

Quite a bit of software is to be developed to match digital identities with
the actual voter file with high confidence. The key unit of the campaign
is a "voter", and each and every record is viewable and updatable by
volunteers in search of more accurate information.

In the case where we can't identify the specific human, we can still have a
partial digital voter id, for a person or "probable-person" with attributes
that we can identify and use to target. As they respond we can eventually
match to a registered voter in the main file. This digital key is
eventually matched to a real person.
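
One way to sketch that matching step, using Python's standard-library string similarity as a stand-in for a real entity-resolution pipeline (the voter file, field names, and confidence threshold here are all hypothetical):

```python
import difflib

VOTER_FILE = [  # hypothetical registered-voter records
    {"name": "Patricia Doe", "city": "Atlanta"},
    {"name": "Patrick Dole", "city": "Atlanta"},
]

def match_digital_identity(handle_name, city, threshold=0.85):
    """Match a digital identity to the voter file with a confidence score.
    Below the threshold we keep only a partial 'probable-person' record."""
    best, best_score = None, 0.0
    for rec in VOTER_FILE:
        if rec["city"] != city:
            continue
        score = difflib.SequenceMatcher(None, handle_name.lower(),
                                        rec["name"].lower()).ratio()
        if score > best_score:
            best, best_score = rec, score
    if best_score >= threshold:
        return best, best_score  # high-confidence match to the main file
    # No confident match: keep a partial digital id to target and refine later
    return {"probable_person": handle_name, "city": city}, best_score

rec, conf = match_digital_identity("patricia doe", "Atlanta")
print(rec["name"], round(conf, 2))
```

As the voter responds over time, more attributes accumulate on the partial record until the score crosses the threshold and the digital key resolves to a registered voter.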

The Rules

It's important that all the players in the campaign work at cost and that there be
no special interests in the financing structure. This means that all
vendors work at cost and there is a separate auditing function to ensure no
one is profiting unfairly from the campaign. All investments and conflicts
of interest would have to be publicly disclosed. The rules of the audit
should include caps on individual salaries and no investor profits from the
campaign function. (For example, this rule would apply to me.)

The KEY things

a) early build of an integrated development team and recognition that this
is an entire system that has to be managed as such
b) decisions to exclusively use cloud solutions for scalability, and choice
of vendors and any software from 2012 that will be reused.
c) the role of the smart phone in the hands of a volunteer. The smart
phone manages the process, updates the database, informs the citizen, and
allows fundraising and recruitment of volunteers (on Android and iPhone).
d) early and continued focus of qualifying fundraising dollars to build the
field, and build all the tools. Outside money will be plentiful and
perfect for TV use. A smart media mix tool tells all we need to know about
media placement, TV versus other media and digital media.
admin
Site Admin
 
Posts: 36125
Joined: Thu Aug 01, 2013 5:21 am

Google's Eric Schmidt Designs Clinton Knowledge Organization

Postby admin » Thu Dec 08, 2016 6:46 am

Google staffers have had at least 427 meetings at the White House over course of Obama presidency - averaging more than one a week
by Dailymail.com Reporter
23 April 2016

NOTICE: THIS WORK MAY BE PROTECTED BY COPYRIGHT

YOU ARE REQUIRED TO READ THE COPYRIGHT NOTICE AT THIS LINK BEFORE YOU READ THE FOLLOWING WORK, THAT IS AVAILABLE SOLELY FOR PRIVATE STUDY, SCHOLARSHIP OR RESEARCH PURSUANT TO 17 U.S.C. SECTION 107 AND 108. IN THE EVENT THAT THE LIBRARY DETERMINES THAT UNLAWFUL COPYING OF THIS WORK HAS OCCURRED, THE LIBRARY HAS THE RIGHT TO BLOCK THE I.P. ADDRESS AT WHICH THE UNLAWFUL COPYING APPEARED TO HAVE OCCURRED. THANK YOU FOR RESPECTING THE RIGHTS OF COPYRIGHT OWNERS.


• The White House's close relationship with Google was highlighted in data published Friday
• Records show 169 Google employees met with 182 government officials
• Google's top lobbyist paid 128 visits to the White House since 2009
• 'Of course' Google is a frequent guest, company responded in statement

Newly compiled data reveals Google and its affiliates have attended meetings at the White House more than once a week, on average, since President Barack Obama took office.

Numbers crunched by the Campaign for Accountability and the Intercept show 169 Google employees have met with 182 government officials in the White House.

The meetings took place at least 427 times. The data used spans from Obama's first month in office in 2009 until October 2015, and includes government meetings with representatives of Google-affiliated companies Tomorrow Ventures and Civis Analytics.

Data shows Google employees visited the White House at least 427 times between the time Obama took office and October 2015


The Google employee with the most visits is the company's head of public policy, Johanna Shelton, who paid the White House 128 visits.

Image
Johanna Shelton, pictured, is Google's top lobbyist. She paid 128 visits to the White House

The government's apparently cozy relationship with Google was brought up about a year ago by the Wall Street Journal.

In response to a story in the Journal titled 'Google Makes Most of Close Ties to White House,' the company responded: 'Of course we’ve had many meetings at the White House over the years.'

Google wrote that topics discussed in the meetings ranged from patent reform, STEM education, and self-driving cars to Internet censorship, smart contact lenses, and cyber security.

Friday's report in the Intercept came a week after Obama announced his support for a Federal Communications Commission plan that would make it easier for pay-TV customers to buy their own set-top boxes - a plan which an AT&T executive blasted as a 'Google proposal.'

'This will allow for companies to create new, innovative, higher-quality, lower-cost products,' the White House wrote in a blog post announcing the initiative.

Google's Eric Schmidt Designs Clinton Knowledge Organization

Postby admin » Thu Dec 08, 2016 7:04 am

Research Proves Google Manipulates Millions to Favor Clinton
by Robert Epstein
sputniknews.com
9/12/16


Image
© Photo: Youtube/SourceFed

In this exclusive report, distinguished research psychologist Robert Epstein explains the new study and reviews evidence that Google's search suggestions are biased in favor of Hillary Clinton. He estimates that biased search suggestions might be able to shift as many as 3 million votes in the upcoming presidential election in the US.

Biased search rankings can swing votes and alter opinions, and a new study shows that Google's autocomplete can too.

A scientific study I published last year showed that search rankings favoring one candidate can quickly convince undecided voters to vote for that candidate — as many as 80 percent of voters in some demographic groups. My latest research shows that a search engine could also shift votes and change opinions with another powerful tool: autocomplete.

Because of recent claims that Google has been deliberately tinkering with search suggestions to make Hillary Clinton look good, this is probably a good time both to examine those claims and to look at my new research. As you will see, there is some cause for concern here.

In June of this year, Sourcefed released a video claiming that Google's search suggestions — often called "autocomplete" suggestions — were biased in favor of Mrs. Clinton. The video quickly went viral: the full 7-minute version has now been viewed more than a million times on YouTube, and an abridged 3-minute version has been viewed more than 25 million times on Facebook.

The video's narrator, Matt Lieberman, showed screen print after screen print that appeared to demonstrate that searching for just about anything related to Mrs. Clinton generated positive suggestions only. This occurred even though Bing and Yahoo searches produced both positive and negative suggestions and even though Google Trends data showed that searches on Google that characterize Mrs. Clinton negatively are quite common — far more common in some cases than the search terms Google was suggesting. Lieberman also showed that autocomplete did offer negative suggestions for Bernie Sanders and Donald Trump.

"The intention is clear," said Lieberman. "Google is burying potential searches for terms that could have hurt Hillary Clinton in the primary elections over the past several months by manipulating recommendations on their site."

Google responded to the Sourcefed video in an email to the Washington Times, denying everything. According to the company's spokesperson, "Google Autocomplete does not favor any candidate or cause." The company explained away the apparently damning findings by saying that "Our Autocomplete algorithm will not show a predicted query that is offensive or disparaging when displayed in conjunction with a person's name."

Since then, my associates and I at the American Institute for Behavioral Research and Technology (AIBRT) — a nonprofit, nonpartisan organization based in the San Diego area — have been systematically investigating Lieberman's claims. What we have learned has generally supported those claims, but we have also learned something new — something quite disturbing — about the power of Google's search suggestions to alter what people search for.

Lieberman insisted that Google's search suggestions were biased, but he never explained why Google would introduce such bias. Our new research suggests why — and also why Google's lists of search suggestions are typically much shorter than the lists Bing and Yahoo show us.

Our investigation is ongoing, but here is what we have learned so far:

Bias in Clinton's Favor

To test Lieberman's claim that Google's search suggestions are biased in Mrs. Clinton's favor, my associates and I have been looking at the suggestions Google shows us in response to hundreds of different election-related search terms. To minimize the possibility that those suggestions were customized for us as individuals (based on the massive personal profiles Google has assembled for virtually all Americans), we have conducted our searches through proxy servers — even through the Tor network — thus making it difficult for Google to identify us. We also cleared the fingerprints Google leaves on computers (cache and cookies) fairly obsessively.

Google says its search bar is programmed to avoid suggesting searches that portray people in a negative light. As far as we can tell, this claim is false.


Generally speaking, we are finding that Lieberman was right: It is somewhat difficult to get the Google search bar to suggest negative searches related to Mrs. Clinton or to make any Clinton-related suggestions when one types a negative search term. Bing and Yahoo, on the other hand, often show a number of negative suggestions in response to the same search terms. Bing and Yahoo seem to be showing us what people are actually searching for; Google is showing us something else — but what, and for what purpose?
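
The cross-engine comparison can be mimicked offline. Below is a minimal sketch that scores what fraction of an engine's suggestions contain a negative term; the suggestion lists and the tiny negative-term lexicon are hypothetical stand-ins echoing the observations above, not our actual data:

```python
# Hypothetical suggestion lists, echoing the article's observations.
SUGGESTIONS = {
    "google": ["hillary clinton is winning", "hillary clinton is awesome"],
    "bing":   ["hillary clinton is a liar", "hillary clinton is a criminal",
               "hillary clinton is winning"],
    "yahoo":  ["hillary clinton is a liar", "hillary clinton is awesome"],
}

NEGATIVE_TERMS = {"liar", "criminal", "crooked", "corrupt"}  # tiny lexicon

def negative_share(suggestions):
    """Fraction of an engine's suggestions containing a negative term."""
    hits = sum(any(t in s for t in NEGATIVE_TERMS) for s in suggestions)
    return hits / len(suggestions)

for engine, sugg in SUGGESTIONS.items():
    print(engine, round(negative_share(sugg), 2))
```

With these sample lists, Google's share of negative suggestions is zero while Bing's and Yahoo's are not, which is the pattern we kept seeing.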

As for Google Trends, as Lieberman reported, Google indeed withholds negative search terms for Mrs. Clinton even when such terms show high popularity in Trends. We have also found that Google often suggests positive search terms for Mrs. Clinton even when such terms are nearly invisible in Trends. The widely held belief, reinforced by Google's own documentation, that Google's search suggestions are based on "what other people are searching for" seems to be untrue in many instances.

Google's Explanation

Google tries to explain away such findings by saying its search bar is programmed to avoid suggesting searches that portray people in a negative light. As far as we can tell, this claim is false; Google suppresses negative suggestions selectively, not across the board. It is easy to get autocomplete to suggest negative searches related to prominent people, one of whom happens to be Mrs. Clinton's opponent.

A picture is often worth a thousand words, so let's look at a few examples that appear both to support Lieberman's perspective and refute Google's. After that, we'll examine some counterexamples.

Before we start, I need to point out a problem: If you try to replicate the searches I will show you, you will likely get different results. I don't think that invalidates our work, but you will have to decide for yourself. Your results might be different because search activity changes over time, and that, in turn, affects search suggestions. There is also the "personalization problem." If you are like the vast majority of people, you freely allow Google to track you 24 hours a day. As a result, Google knows who you are when you are typing something in its search bar, and it sends you customized results.

For both of these reasons, you might doubt the validity of the conclusions I will draw in this essay. That is up to you. All I can say in my defense is that I have worked with eight other people in recent months to try to conduct a fair and balanced investigation, and, as I said, we have taken several precautions to try to get generic, non-customized search suggestions rather than the customized kind. Our investigation is also ongoing, and I encourage you to conduct your own, as well.

Let's start with a very simple search. The image below shows a search for "Hillary Clinton is " (notice the space after is) conducted on August 3rd on Bing, Yahoo, and Google. As you can see, both Bing and Yahoo displayed multiple negative suggestions such as "Hillary Clinton is a liar" and "Hillary Clinton is a criminal," but Google showed only two suggestions, both of which were almost absurdly positive: "Hillary Clinton is winning" and "Hillary Clinton is awesome."

Image
© PHOTO: BING, YAHOO, GOOGLE “Hillary Clinton is ”

To find out what people actually searched for, let's turn to Google Trends — Google's tabulation of the popularity of search results. Below you will see a comparison between the popularity of searching for "Hillary Clinton is a liar" and the popularity of searching for "Hillary Clinton is awesome." This image was also generated on August 3rd. "Hillary Clinton is a liar" was by far the more popular search term; hardly anyone conducted a search using the phrase, "Hillary Clinton is awesome."

Image
© PHOTO: GOOGLE “Hillary Clinton is awesome.”

Okay, but Google admits that it censors negative search results; presumably, that is why we only saw positive results for Mrs. Clinton — even a result that virtually no one searched for. Does Google really suppress negative results? We have seen what happens with "Hillary Clinton is." What happens with "Donald Trump is "? (Again, be sure to include the space after is.)

Image
© PHOTO: GOOGLE “Donald Trump is “?

In the above image, captured on August 8th, we again found the odd "awesome" suggestion, but we also saw a suggestion that appears to be negative: "Donald Trump is dead." Shouldn't a result like that have been suppressed? Let's look further.

Consider the following searches, conducted on August 2nd, for "anti Hillary" and "anti Trump." As you can see below, "anti Hillary" generated no suggestions, but "anti Trump" generated four, including "anti Trump cartoon" and "anti Trump song." Well, you say, perhaps there were no anti-Hillary suggestions to be made. But Yahoo — responding merely to "anti Hill" — came up with eight, including "anti Hillary memes" and "anti Hillary jokes."

Image
© PHOTO: GOOGLE, YAHOO “anti Hillary” and “anti Trump.”

This seems to further refute Google's claim about not disparaging people, but let's dig deeper.

After Mrs. Clinton named Senator Tim Kaine to be her running mate, Mr. Trump dubbed him with one of his middle-school-style nicknames: "Corrupt Kaine." Sure enough, that instantly became a popular search term on Google, as this July 27th image from Trends confirms:

Image
© PHOTO: GOOGLE “Corrupt Kaine.”

Even so, as you can see in the image below, in response to "corrupt," the Google search bar showed us nothing about Senator Kaine, but it did show us both "Kamala" (Kamala Harris, attorney general of California) and "Karzai" (Hamid Karzai, former president of Afghanistan). If you clicked on the phrases "corrupt Kamala" and "corrupt Karzai," search results appeared that linked to highly negative web pages about Kamala Harris and Hamid Karzai, respectively.

Oddly enough, both on the day we looked up "corrupt Kaine" and more recently when I was writing this essay, Google Trends provided no popularity data for either "corrupt Kamala" or "corrupt Karzai." It is hard to imagine, in any case, that either search term has been popular in recent months. So why did the Google search bar disparage Attorney General Harris and President Karzai but not Mrs. Clinton?

Image
© PHOTO: GOOGLE, YAHOO “corrupt Kaine”, “corrupt Kamala”, “corrupt Karzai.”

If you still have doubts about whether Google suggests negative searches for prominent people, see how Senators Cruz, Rubio and Sanders fared in the following searches conducted between July 23rd and August 2nd:

Image
© PHOTO: GOOGLE Searches conducted between July 23rd and August 2nd - Lying Ted
Image
© PHOTO: GOOGLE Searches conducted between July 23rd and August 2nd - Little Marco

Image
© PHOTO: GOOGLE Searches conducted between July 23rd and August 2nd - Anti-Bernie

I could give you more examples, but you get the idea.

The brazenness of Google's search suggestion tinkering became especially clear when we searched for "crooked" — Mr. Trump's unkind nickname for Mrs. Clinton — on Google, Bing, and Yahoo on various dates in June and July. On Google the word "crooked" alone generated nothing for Mrs. Clinton, even though, once again, its popularity was clear on Google Trends. Now compare (in the image following the Trends graph) what happened on Bing and Yahoo:

Image
© PHOTO: GOOGLE “crooked”

Image
© PHOTO: GOOGLE, BING, YAHOO “crooked”

No surprise here. Consistent with Google's own search popularity data, Bing and Yahoo listed "crooked Hillary" near the top of their autocomplete suggestions.

The weird part came when we typed more letters into Google's search bar, trying to force it to suggest "crooked Hillary." On June 9th, I had to go all the way to "crooked H-I-L-L-A" to get a response, and it was not the response I was expecting. Instead of showing me "crooked Hillary," I was shown a phrase that I doubt anyone in the world has ever searched for — "crooked Hillary Bernie":

Image
© PHOTO: GOOGLE “crooked H-I-L-L-A”

Crooked Hillary Bernie? What the heck does that mean? Not much, obviously, but this is something my associates and I have found repeatedly: When you are able to get Google to make negative suggestions for Mrs. Clinton, they sometimes make no sense and are almost certainly not indicative of what other people are searching for.

Masking and Misleading

There are also indications that autocomplete isn't always pro-Clinton and isn't always anti-Trump, and in this regard the Sourcefed video overstated its case. While it is true, for example, that "anti Hillary" generated no suggestions in our study, both "anti Clinton" and "anti Hillary Clinton" did produce negative results when we searched on August 8th, as you can see below:

Image
© PHOTO: GOOGLE “anti Clinton”

Image
© PHOTO: GOOGLE “anti Hillary Clinton”

At times, we were also able to generate neutral or at least partially positive results for Donald Trump. Consider this image, for example, which shows a search for "Donald Trump" on August 8th:

Image
© PHOTO: GOOGLE Search for “Donald Trump” on August 8th

If you believe Google can do no wrong and that it never favors one candidate over another (even though Google and its top executives donated more than $800,000 to Obama in 2012 and only $37,000 to Romney), so be it. But trying to be as objective as possible in recent months, my staff and I have concluded that when Google occasionally does give us unbiased election-related search suggestions, it might just be trying to confuse us. Let me explain.

When Ronald Robertson and I began conducting experiments on the power that biased search rankings have over voter preferences, we were immediately struck by the fact that few people could detect the bias in the search results we showed them, even when those results were extremely biased. We immediately wondered whether we could mask the bias in our results so that even fewer people could detect it. To our amazement, we found that a very simple mask — putting a search result that favored the opposing candidate into the third search position (out of 10 positions on the first page of search results) — was enough to fool all of our study participants into thinking they were seeing unbiased search results.

Masking a manipulation is easy, and Google is a master of obfuscation, as I explained a few years ago in my TIME essay, "Google's Dance." In the context of autocomplete, all you have to do to confuse people is introduce a few exceptions to the rule. So "anti Clinton" and "anti Hillary Clinton" produce negative search suggestions, while "anti Hillary" does not. Because those counter-examples exist, we immediately forget about the odd thing that's happening with "anti Hillary," and we also ignore the fact that "anti Donald" produces negative suggestions:

Image
© PHOTO: GOOGLE “anti Donald”

Meanwhile, day after day — at least for the few weeks we were monitoring this term — "anti Hillary" continued to produce no suggestions. Why would Google have singled out this one phrase to protect? As always, when you are dealing with the best number crunchers in the world, the answer has to do with numbers. What do you notice when you look below at the frequency of searches for the three anti-Hillary phrases?

Image
© PHOTO: GOOGLE “anti Hillary”

That's right. "Anti Hillary" was drawing the most traffic, so that was the phrase to protect.

Sourcefed's video was overstated, but, overall, our investigation supports Sourcefed's claim that Google's autocomplete tool is biased to favor Mrs. Clinton — sometimes dramatically so, sometimes more subtly.

Sputnik's Recent Claims

All of the examples I've given you of apparent bias in Google's search suggestions are old and out of date — conducted by me and my staff over the summer of 2016. Generally speaking, you won't be able to confirm what we found (which is why I am showing you screen shots). This is mainly because search suggestions keep changing. So the big question is: Do new search suggestions favor Mr. Trump or Mrs. Clinton?

Recently, Sputnik News reported that Google was suppressing search suggestions related to trending news stories expressing concern about Mrs. Clinton's health. Sure enough, as you can see in the following screen shots captured on August 29th, suggestions on Bing and Yahoo reflected the trending news, but suggestions on Google did not:

Image
© PHOTO: BING Bing

Image
© PHOTO: YAHOO Yahoo

Image
© PHOTO: GOOGLE Google

And, yes, once again, Google Trends showed a recent spike in searches for the missing search suggestions:

Image
© PHOTO: GOOGLE Google Trends

While the news was buzzing about Mrs. Clinton's health, hundreds of stories were also being published about Mr. Trump's "flip flopping" on immigration issues, and that too was reflected on Google Trends:

Image
© PHOTO: GOOGLE Mr. Trump’s “flip flopping”

But, as you can see, Google did not suppress "Donald Trump flip flops" from its suggestions:

Image
© PHOTO: GOOGLE “Donald Trump flip flops”

Google, it seems, is playing this game both consistently and slyly. It is saving its bias for the most valuable real estate — trending, high-value terms — and eliminating signs of bias for terms that have lost their value.

And that brings me, at last, to a research project I initiated only a few weeks ago. If Google is really biasing its search suggestions, what is the company's motive? A new study sheds surprising and disturbing light on this question.

How Google's Search Suggestions Affect Our Searches

Normally, I wouldn't talk publicly about the early results of a long-term research project I have not yet published in a scientific journal or at least presented at a scientific conference. I have decided to make an exception this time for three reasons: First, the results of the study on autocomplete I completed recently are strong and easy to interpret. Second, these results are consistent with volumes of research that has already been conducted on two well-known psychological processes: negativity bias and confirmation bias. And third, the November election is growing near, and the results of my new experiment are relevant to that election — perhaps even of crucial importance.

I began the new study asking myself why Google would want to suppress negative search suggestions. Why those in particular?

In the study, a diverse group of 300 people from 44 U.S. states were asked which of four search suggestions they would likely click on if they were trying to learn more about either Mike Pence, the Republican candidate for vice president, or Tim Kaine, the Democratic candidate for vice president. They could also select a fifth option in order to type their own search terms. Here is an example of what a search looked like:

Image
© PHOTO: GOOGLE Tim Kaine

Two of the searches we showed people contained negative search suggestions (one negative suggestion in each search); all of the other search suggestions were either neutral (like "Tim Kaine office") or positive (like "Mike Pence for vice president").

Each of the negative suggestions — "Mike Pence scandal" and "Tim Kaine scandal" — appeared only once in the experiment. Thus, if study participants were treating negative items the same way they treated the other four alternatives in a given search, the negative items would have attracted about 20 percent of the clicks in each search.

But that's not what happened. The three main findings were as follows:

1) Overall, people clicked on the negative items about 40 percent of the time — that's twice as often as one would expect by chance. What's more, compared with the neutral items we showed people in searches that served as controls, negative items were selected about five times as often.

2) Among eligible, undecided voters — the impressionable people who decide close elections — negative items attracted more than 15 times as many clicks as neutral items attracted in matched control questions.

3) People affiliated with one political party selected the negative suggestion for the candidate from their own party less frequently than the negative suggestion for the other candidate. In other words, negative suggestions attracted the largest number of clicks when they were consistent with people's biases.

These findings are consistent with two well-known phenomena in the social sciences: negativity bias and confirmation bias.

Negativity bias refers to the fact that people are far more affected by negative stimuli than by positive ones. As a famous paper on the subject notes, a single cockroach in one's salad ruins the whole salad, but a piece of candy placed on a plate of disgusting crud will not make that crud seem even slightly more palatable.

Negative stimuli draw more attention than neutral or positive ones, they activate more behavior, and they create stronger impressions — negative ones, of course. In recent years, political scientists have even suggested that negativity bias plays an important role in the political choices we make — that people adopt conservative political views because they have a heightened sensitivity to negative stimuli.

Confirmation bias refers to the fact that people almost always seek out, pay attention to, and believe information that confirms their beliefs more than they seek out, pay attention to, or believe information that contradicts those beliefs.

When you apply these two principles to search suggestions, they predict that people are far more likely to click on negative search suggestions than on neutral or positive ones — especially when those negative suggestions are consistent with their own beliefs. This is exactly what the new study confirms.

Google data analysts know this too. They know because they have ready access to billions of pieces of data showing exactly how many times people click on negative search suggestions. They also know exactly how many times people click on every other kind of search suggestion one can categorize.

To put this another way, what I and other researchers must stumble upon and can study only crudely, Google employees can study with exquisite precision every day.

Given Google's strong support for Mrs. Clinton, it seems reasonable to conjecture that Google employees manually suppress negative search suggestions relating to Clinton in order to reduce the number of searches people conduct that will expose them to anti-Clinton content. They appear to work a bit less hard to suppress negative search suggestions for Mr. Trump, Senator Sanders, Senator Cruz, and other prominent people.

This is not the place to review the evidence that Google strongly supports Mrs. Clinton, but since we're talking about Google's search bar, here are two quick reminders:

First, on August 6th, when we typed "When is the election?," we were shown the following image:

[Image: Google's answer to "When is the election?", showing a photo of only one candidate (© PHOTO: GOOGLE)]

See anything odd about that picture? Couldn't Google have displayed two photos just as easily as it displayed one?

And second, as reported by The Next Web and other news sources, in mid-2015, when people typed "Who will be the next president?," Google displayed boxes such as the one below, which left no doubt about the answer:

[Image: Google's answer box for "Who will be the next president?" (© PHOTO: GOOGLE)]

Corporate Control

Over time, differentially suppressing negative search suggestions will repeatedly expose millions of people to far more positive search results for one political candidate than for the other. Research I have been conducting since 2013 with Ronald Robertson of Northeastern University has shown that high-ranking search results that favor one candidate can easily shift 20 percent or more of undecided voters toward that candidate — up to 80 percent in some demographic groups, as I noted earlier. This is because of the enormous trust people have in computer-generated search results, which people mistakenly believe are completely impartial and objective — just as they mistakenly believe search suggestions are completely impartial and objective.

The impact of biased search rankings on opinions, which we call the Search Engine Manipulation Effect (SEME), is one of the largest effects ever discovered in the behavioral sciences, and because it is invisible to users, it is especially dangerous as a source of influence. Because Google handles 90 percent of search in most countries and because many elections are very close, we estimate that SEME has been determining the outcomes of upwards of 25 percent of the national elections in the world for several years now, with increasing impact each year. This is occurring, we believe, whether or not Google's executives are taking an active interest in elections; all by itself, Google's search algorithm virtually always ends up favoring one candidate over another simply because of "organic" search patterns by users. When it does, votes shift; in large elections, millions of votes can be shifted. You can think of this as a kind of digital bandwagon effect.

The new effect I have described in this essay — a search suggestion effect — is very different from SEME but almost certainly increases SEME's impact. If you can surreptitiously nudge people into generating search results that are inherently biased, the battle is half won. Simply by including or suppressing negatives in search suggestions, you can direct people's searches one way or another just as surely as if they were dogs on a leash, and you can use this subtle form of influence not just to alter people's views about candidates but about anything.

Google launched autocomplete, its search suggestion tool, in 2004 as an opt-in that helped users find information faster. Perhaps that's all it was in the beginning, but just as Google itself has morphed from being a cool high-tech anomaly into what former Google executive James Whittaker has called "an advertising company with a single corporate-mandated focus," so has autocomplete morphed from being a cool and helpful search tool into what may be a tool of corporate manipulation. By 2008, not only was autocomplete no longer an opt-in feature, there was no way to opt out of it, and since that time, through strategic censorship, it may have become a tool for directing people's searches and thereby influencing not only the choices they make but even the thoughts they think.

Look back at the searches I have shown you. Why does Google typically show you far fewer search suggestions than other search engines do — 4 or fewer, generally speaking, compared with 8 for Bing, 8 for DuckDuckGo and 10 for Yahoo? Even if you knew nothing of phenomena like negativity bias and confirmation bias, you certainly know that shorter lists give people fewer choices. Whatever autocomplete was in the beginning, its main function may now be to manipulate.

Perhaps you are skeptical about my claims. Perhaps you are also not seeing, on balance, a pro-Hillary bias in the search suggestions you receive on your computer. Perhaps you are also not concerned about the possibility that search suggestions can be used systematically to nudge people's searches in one direction or another. If you are skeptical in any or all of these ways, ask yourself this: Why, to begin with, is Google censoring its search suggestions? (And it certainly acknowledges doing so.) Why doesn't it just show us, say, the top ten most popular searches related to whatever we are typing? Why, in particular, is it suppressing negative information? Are Google's leaders afraid we will have panic attacks and sue the company if we are directed to dark and disturbing web pages? Do they not trust us to make up our own minds about things? Do they think we are children?

Without whistleblowers or warrants, no one can prove Google executives are using digital shenanigans to influence elections, but I don't see how we can rule out that possibility. There is nothing illegal about manipulating people using search suggestions and search rankings — quite the contrary, in fact — and it makes good financial sense for a company to use every legal means at its disposal to support its preferred candidates.

Using the mathematical techniques Robertson and I described in our 2015 report in the Proceedings of the National Academy of Sciences, I recently calculated that SEME alone can shift between 2.6 and 10.4 million votes in the upcoming US presidential race without anyone knowing this has occurred and without leaving a paper trail.

I arrived at those numbers before I knew about the power search suggestions have to alter searches. The new study suggests that autocomplete alone might be able to shift between 800,000 and 3.2 million votes — also without anyone knowing this is occurring.
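The article reports these vote-shift ranges without showing the underlying arithmetic. As a rough, hypothetical sketch only: a range of this kind can be produced by multiplying turnout by the share of undecided voters and by a low and high shift rate among them. Every input below is an illustrative assumption chosen to reproduce the reported 2.6 to 10.4 million range; none of these figures comes from the Epstein and Robertson paper.

```python
# Back-of-the-envelope sketch of a vote-shift estimate.
# All inputs are hypothetical assumptions for illustration, not figures
# from the published PNAS report.

turnout = 130_000_000                 # assumed total votes cast
undecided_share = 0.20                # assumed share of undecided voters
shift_low, shift_high = 0.10, 0.40    # assumed shift rate among undecideds

undecided = turnout * undecided_share
low_estimate = undecided * shift_low
high_estimate = undecided * shift_high
print(low_estimate, high_estimate)    # roughly 2.6 million to 10.4 million
```

The point of the sketch is only that small percentage shifts among undecided voters, applied to a large electorate, yield totals in the millions.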

Perhaps even more troubling, because Google tracks and monitors us so aggressively, Google officials know who among us is planning to vote and whom we are planning to vote for. They also know who among us is still undecided, and that is where the influence of biased search suggestions and biased search rankings could be applied with enormous effect.

[Postscript: Google declined to comment on the record when queried about some of the concerns I have raised in this article. Instead, on August 17th, a company representative sent me to a blog post released by the company on June 16th; you can read Google's official position on autocomplete there. For the record, I am a moderate politically, and I support Hillary Clinton for president. I do not believe, however, that it would be right for her to win the presidency because of the invisible, large-scale manipulations of a private company. That would make democracy meaningless, and that is why I am trying to keep the public informed about my research findings. Also for the record, I have chosen to publish this article through Sputnik News because Sputnik agreed to publish it in unedited form in order to preserve the article's accuracy. —R.E.]

___________________

EPSTEIN (@DrREpstein) is Senior Research Psychologist at the American Institute for Behavioral Research and Technology in Vista, California. Epstein, who holds a PhD from Harvard University, has published fifteen books on artificial intelligence and other topics. He is also the former editor-in-chief of Psychology Today.
admin
Site Admin
 
Posts: 36125
Joined: Thu Aug 01, 2013 5:21 am

Google's Eric Schmidt Designs Clinton Knowledge Organization

Postby admin » Thu Dec 08, 2016 7:05 am

Did Google Manipulate Search for Hillary?
by Matt Lieberman
June 9, 2016



Google has responded to this video via an email statement to the Washington Times:

"Google Autocomplete does not favor any candidate or cause. Claims to the contrary simply misunderstand how Autocomplete works. Our Autocomplete algorithm will not show a predicted query that is offensive or disparaging when displayed in conjunction with a person’s name. More generally, our autocomplete predictions are produced based on a number of factors including the popularity of search terms."

Read the full article here:
http://www.washingtontimes.com/news/2...

