Archive for August, 2010

Bias: First Past The Post vs. constituency boundaries

There’s a pretty terrible piece on Lib Dem Voice that essentially argues that because FPTP is biased towards the winning party, the Labour Party does not enjoy an advantage courtesy of the current constituency boundaries. Which is nonsense. But the stats do lead to a graph that suggests a few interesting things. The “fairness ratio” is based on the assumption that if, say, the winning party averaged only 90,000 votes per seat won while the 2nd party required 100,000 votes per seat, then the winner received an unfair advantage of 90 / 100 = 0.9 fairness ratio.
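To make the definition concrete, here is a minimal sketch of the calculation as described above (the function name and the single-seat example numbers are mine, for illustration only):

```python
def fairness_ratio(winner_votes, winner_seats, runner_up_votes, runner_up_seats):
    """Winner's votes-per-seat divided by the runner-up's votes-per-seat.
    A value below 1.0 means the winner needed fewer votes for each seat won,
    i.e. the system advantaged them."""
    return (winner_votes / winner_seats) / (runner_up_votes / runner_up_seats)

# The worked example from the text: 90,000 vs. 100,000 votes per seat.
print(fairness_ratio(90_000, 1, 100_000, 1))  # 0.9
```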

A nice simple concept which deserves to be knocked around a bit. The graph below suggests two things to me:

  1. FPTP delivers an increasing advantage towards the stronger party, so that as they become more popular they require fewer and fewer votes per seat compared to their opposition. This bias is reflected in the slope of the line.
  2. Labour has an additional advantage, which means that in the elections they win they get far larger majorities in relation to votes cast. Hence their consistently lower ratio than the Tories – not a single red dot gets within 10% of the lowest blue one, even when the 2005 election returned them a smaller majority than the Tories in 1983.

I’m biased, I’ll admit. I want FPTP to go, and I think Labour does have a boundary advantage that should be removed. So perhaps I’m reading that into the data. Still, my reaction does seem reasonably objective. Dr J Lee (a mysteriously anonymous name) seems to think that if there is FPTP bias, then there can be no boundary bias, which makes no sense whatsoever.

Incidentally, the 2010 results aren’t in the above because in addition to posting here I’m commenting on the linked article, and I’m restricting myself to the same date range.

Keeping domestic violence in the home as a way to save money

Apparently, Theresa May thinks that protecting the victims of domestic violence isn’t a good use of public money. I can’t imagine the thought process you have to go through to see cancelling this particular pilot scheme as a good target for cuts.

First of all, in the context of the budget targets, I can’t imagine this has much impact at all. I have no idea how much legislation costs, but in terms of operational costs I don’t see that a temporary 14 day ban is going to be any more expensive than calling out the police anyway. The bans require a senior police officer, and only last 14 days, so it sounds like the cost controls are effectively built in.

Second, taking off my sociopath hat and thinking about the people not the money, many of the great tragedies in life start when someone is forced to say, “I would like to help but the rules won’t let me.” It makes my skin crawl to think of a politician deciding they would rather save a little bit of money and leave a woman to suffer violence both physical and psychological.

I can’t imagine anything more fundamentally humane, more channeling of the wisdom of Solomon, than for a domestic situation bad enough to result in some basic protection: “OK, sir, we can’t just ignore what appears to be a very serious situation, and you’re going to have to give your partner a couple of weeks to decide what she wants to do.” My impression is that we’re talking about a very simple form of intervention, born of natural justice familiar to all rather than any kind of ideology. How many targets for cuts would meet those criteria?

Even in mundane matters, it makes a difference to have time and freedom in a familiar environment to consider your relationship. Surely it makes a huge difference in something like domestic violence. When you’re at an absolute low, dealing with police or social agencies may well be quite intimidating – especially if you have been quite literally beaten into submission.

What’s Theresa May’s message in this time of fiscal crisis? “We’re all in this together, so get back in your house, you two, and let’s all hope he doesn’t hit you so hard next time”?!

Women’s group to provide legal cover for policies that hurt women

Women’s advocacy group The Fawcett Society is in the process of establishing legal precedents that will tend to harm women.

That isn’t their goal. But it is the most likely consequence of starting a court case that would require the Budget and government departments to treat women as a vulnerable group, and decisions to include a formal assessment of their gender impact. Unless millions of men are hired for no purpose other than to balance the gender ratio in the public sector, all public sector cuts will disproportionately impact women. Until some point in the distant, unknown future where women earn equal pay, tax hikes or benefit cuts will always be felt by them the most.

Does online coverage really influence poll results?

Stephen Tall at LibDemVoice provides an interesting link to a research paper by a group called Onalytica, which asks whether a party’s share of internet coverage during an election campaign influences its poll results.

Going through it quickly, I didn’t get the feeling that much substance can be taken from it – there wasn’t anything related to the actual election results that I noticed. There’s no examination of whether, for instance, Nick Clegg’s sudden boost from the 1st debate might have been a temporary novelty effect. Did Gordon Brown’s gaffe really make a difference or was it just coincidence? It seems to be an exercise in causation / correlation errors, and I wonder if they have focused on the froth and ignored the tide.

Their analysis of that gaffe provides a good example of why I wouldn’t rely on the judgement of any author of this paper:

Note that Gordon Brown’s influence boost due to the ‘bigot scandal’ did not translate to an equally rapid poll increase for Labour.

Well, of course gaffes don’t lead to poll increases. Although they do go on to cover negative sentiment, this does suggest to me that they may be too intent on proving an assumption that “coverage = results” as compared to seeing what the numbers tell them.

In terms of methodology, there are two significant factors they don’t address. First, it is a very common error in social media analysis to assume that volume relates to influence. However, any serious research on the subject (I believe) shows that individuals are far more influenced by their circumstances and their close social circle than by the media. Their Share-of-Influence metric has a prejudicial name, ignores the well-covered debate about self-reported voting intentions vs actual voting results, and would be better described as Share-of-Coverage.

Second, online articles and popularity polls are both ways of measuring public sentiment. They should be closely correlated, and I don’t know that measuring them a day apart is sufficient to assess their independence or otherwise. For instance, imagine the news breaks that unemployment has shot up, and it gets extensive coverage online. When a poll is taken the next day, the party in charge has dropped. Did the pundits really have any influence? Were they anything other than a faster measurement of public opinion than conducting a full formal poll?

Although they say “the relationships may be interpreted as follows; on average, a 10% increase in a party’s share of the total UK Election discussion, the day before a poll, resulted in a 9% increase in poll results for the Tories”, the graphs they use only support correlation, not causation. It took me an hour to read that paper and write this post – it would take a full day for a poll to be conducted and the results published. I feel it’s a very dubious assumption that the day’s delay in their study is anything more than the inherent lag in two different approaches to assessing the public mood.

They should consider a few more analyses before coming to such conclusions. First, the issue of polls lagging online articles: if you compare polls on the same day, or two days apart, do you get the same results? What about the other way around – do positive poll results lead to more coverage? Second, irrespective of a causal relationship, there will almost always be a correlation between the absolute numbers, because they tend not to change that significantly from day to day. But what if you graph the change in coverage versus change in polls? Third, perhaps the public is influenced by, say, the past three days of coverage – so what happens when you take some form of rolling or weighted average?

In short, they are using data that suggests the possibility of a causal relationship, but then simply “interpreting” that relationship instead of actually testing for it. If they are correct (it certainly seems a valid premise), then it is by instinct, not analysis. I wouldn’t recommend investing heavily in getting online coverage until a relationship to either poll or actual election results is more clearly shown.

They do seem to have an interesting dataset, however. It would be good to see more made of it.