Google Surveys MT-AL Poll: Gianforte +10

As I’ve previously mentioned in a Diary about a South Dakota presidential poll I conducted in October 2016, and elsewhere, Google Surveys allows users to create relatively cheap one-question polls on state and national issues. State polls usually cost $0.15 per respondent, or $75 for a 500-respondent poll; multiple-question polls are 10x more expensive. The main problem is that you get what you pay for – these polls aren’t that good, and the one-question methodology requires some sacrifices that probably lower reliability even further. 538 gave Google Consumer Surveys a B rating before the 2016 cycle, but their track record has likely deteriorated since then.

Nevertheless, I recently conducted a Google Survey of the upcoming Montana At-Large Special Election. I put it into the field on April 19, one day before RRH announced that it was going to try to poll the race. It completed today, April 21. The question asked was:

Montanans will go to the polls on May 25 to vote for a new U.S. Congressman. If this special election were held today, for whom would you vote?

The choices were (randomized): Democrat Rob Quist, Republican Greg Gianforte, and Libertarian Mark Wicks, plus “I am not likely to vote in this election” (always last). As expected, about 33% of the 533 respondents chose the “not likely to vote” option. Among the 356 respondents who chose one of the candidates, the weighted results were as follows:

Gianforte 51%
Quist 41%
Wicks 8%

These results were weighted by sex and age to the percentage of those subgroups who reported voting in the November 2014 CPS survey. The raw results were Gianforte 49%, Quist 42%, Wicks 9%. Weighted by Google to its Internet Audience, the results were Gianforte 48%, Quist 42%, Wicks 10%.
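For readers curious about the mechanics, here is a minimal sketch of the kind of sex-and-age cell weighting described above. The respondents and cell targets below are made-up placeholders, not the actual poll data or CPS figures; each respondent’s weight is simply the target share of their demographic cell divided by that cell’s share of the sample.

```python
# Illustrative post-stratification by sex and age.
# All numbers here are hypothetical stand-ins, NOT the real
# November 2014 CPS shares or the actual poll responses.
from collections import defaultdict

# Each respondent: (sex, age_group, candidate_choice)
respondents = [
    ("M", "18-34", "Gianforte"), ("F", "18-34", "Quist"),
    ("M", "35-54", "Gianforte"), ("F", "35-54", "Quist"),
    ("M", "55+",   "Gianforte"), ("F", "55+",   "Wicks"),
    ("F", "55+",   "Quist"),     ("M", "35-54", "Quist"),
]

# Hypothetical population targets (shares of the 2014 electorate);
# they sum to 1.0.
targets = {
    ("M", "18-34"): 0.08, ("F", "18-34"): 0.08,
    ("M", "35-54"): 0.17, ("F", "35-54"): 0.17,
    ("M", "55+"):   0.24, ("F", "55+"):   0.26,
}

# Share of the sample in each sex/age cell.
counts = defaultdict(int)
for sex, age, _ in respondents:
    counts[(sex, age)] += 1
n = len(respondents)

# Weight = target share / sample share for the respondent's cell.
weighted = defaultdict(float)
total = 0.0
for sex, age, choice in respondents:
    w = targets[(sex, age)] / (counts[(sex, age)] / n)
    weighted[choice] += w
    total += w

shares = {c: round(100 * v / total) for c, v in weighted.items()}
print(shares)
```

Cells that voted more heavily in 2014 than they appear in the sample get weights above 1, and vice versa – which is how the raw 49-42-9 result can move a couple of points after weighting.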

As I’ve seen in the other recent Google Surveys of the race (more on this below), there is a huge divide between Eastern and Western Montana: Quist leads by 2 points (weighted)/9 points (unweighted) in Western Montana (n=188); Gianforte leads by 24 points (weighted)/23 points (unweighted) in Eastern Montana (n=160). I’ve divided Eastern and Western Montana this way:

[Image: Montana regional map]

Eastern Montana is slightly overrepresented in the poll results: it makes up about 41% of the electorate in a typical election, but 46% of the poll. I estimate that controlling for this would move the topline about 2 points toward Quist.
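As a rough cross-check of that estimate, here is the two-candidate arithmetic (ignoring Wicks), using this poll’s weighted regional margins – Gianforte +24 in the East, Quist +2 in the West – and treating the topline as a mix of the two regions. The function name and structure are mine, just illustration.

```python
# Gianforte-minus-Quist margin by region, from this poll's
# weighted crosstabs: East G+24, West Q+2 (i.e. G-2).
east_margin, west_margin = 24.0, -2.0

def topline(east_share):
    """Overall G-minus-Q margin for a given Eastern share of the vote."""
    return east_share * east_margin + (1 - east_share) * west_margin

print(round(topline(0.46), 1))  # East at its 46% share of this sample: about +10
print(round(topline(0.41), 1))  # East at a typical ~41% share: a smaller lead
```

This two-way version shifts the margin a bit over a point toward Quist – the same direction and rough size as the ~2-point estimate above.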

There was no large gender gap in the raw results: men were about 3 points less likely to choose Quist, but about 4 points more likely to choose Wicks. Suburbanites were much more likely to vote for Quist (Q+6 raw) than rural residents (G+30 raw). Montana has very few urbanites, according to the poll. 87% of respondents reported earning $25,000-$49,999 per year, making it difficult to discern an income gap.

Other recent Google Surveys have been all over the place:

Poll Dates      Weighted       Raw
3/12 – 3/14     Quist +17      Quist +14
3/14 – 3/16     Tie            Tie
3/18 – 3/20     Quist +8       Quist +8
4/6 – 4/8       Gianforte +1   Quist +2

Gravis also polled the race on April 6, finding Gianforte up by 12.

As I said above, one constant in all of the Google Surveys is that Western Montana is significantly more Quist-leaning than Eastern Montana. The gap between the two regions has ranged from about 20 to 42 points. A 20-point gap may be believable, but a 42-point gap isn’t.

RRH is currently raising funds to do a proper poll of the election. It would be good to see a reliable poll instead of crappy Google polls.



  • shamlet April 22, 2017 at 9:35 pm

    Thanks for this! I can actually believe the topline, though I’m not sure how reproducible it would be. All the other data you get seem to check out sanity-wise as well, but given the erratic nature of the other surveys I’m not sure if this is anything other than dumb luck.

    Part of the problem we had with the GA-6 result is that we were far too reliant on the Google polling to make up for the low IVR sample size. It can work to get a younger sample, but only as a very minor adjunct. The big problem with online polling is that it’s hard to get a sample that’s both sufficiently random and screened well enough to be likely voters. If you go with all comers, you’re probably going to get a lot of non-voters, especially for a low-profile race (people randomly clicking through the poll without reading it to get to their online content), while on the other end invitation-type surveys can result in far too tight a screen. I don’t have a good practical way to solve that problem.

    Interestingly, there’s a company that actually reached out to us after this most recent poll that has a promising method of recruiting panels for this kind of survey, but they are wayyy too expensive for our use (they charge like $15 per response!)

    R, MD-7. Process is more important than outcome.

  • cinyc April 22, 2017 at 10:22 pm

    That’s the thing about most of the Google Surveys I’ve seen and conducted – the cross-tabs generally make some intuitive sense, but sometimes tend to overdo it. I’d be shocked if there’s a 35 point difference between Eastern and Western Montana, as some of the Google Surveys have suggested, for example.

    As you probably know, Google Surveys uses two main sources to get its polling data – Google ads on (mainly news) websites and its own Mobile App, where Google pays people in Google Play credits to complete surveys. Often, there’s a huge difference in results between the two types. In this poll, the Mobile App users were overwhelmingly pro-Gianforte (G+40 raw) and more likely to vote Libertarian, while website users were split more or less evenly between the two main candidates. The mix of Mobile App versus website respondents matters to the topline. The Mobile App result here is a little surprising, as Mobile App users tend to skew younger than website users – but the 18-24s in this poll were one group that was strongly pro-Gianforte, which is out of line with conventional wisdom. I don’t think Mobile App users can skip a poll question, which might lead to more random clickers.

    Have you guys ever tried to use a screening question with a Google Survey? Pricing is very opaque (they have to run a test before they give you a price), so I haven’t. The use of screening questions might make it easier to weed out less likely voters.

    • shamlet April 22, 2017 at 10:33 pm

      We actually did on the GA-6 poll. It massively upped the cost, but it was necessary to screen in for voters from the congressional district when all Google could do was give you a statewide sample. We did something like 1500 responses to get 100 people to screen in, and then a second screening question for Likely Voters (most everyone who bothered to say they were in the district said they were likely to vote).

      R, MD-7. Process is more important than outcome.

  • cinyc May 23, 2017 at 11:16 am

    For posterity’s sake, in case I forget or anyone thinks this was the last Google Survey I conducted before the special election: it was not. I conducted another Google Survey from 5/21 to 5/23. It is a likely outlier, like the March poll (the poll in this diary was the second one I conducted in April) – Quist +15. Here’s a link to the poll and weighting spreadsheet with crosstabs:

    U.S. Election Atlas Link to a little bit of my poll analysis:
