Imagine that every professional or semi-professional footballer in the country had the same ability and the same fitness levels. How would it be possible to distinguish between them, how would league tables be established, and who would play for the top teams? Nonsense, of course, because we know that not every football player can have the same ability, or fitness levels for that matter, and there is a myriad of reasons why this may be the case. There is probably little doubt, however, that those who have been professionally coached, even at the lowest level of the professional game, can run rings round most part-time amateur players.
Not everyone can be at the top, in the Premier League. If we took a sample of players across the leagues and were somehow to measure ability, the likelihood is that we would find a normal distribution, a bell curve: most players with average ability, a few with amazing ability, and a few with some, but perhaps inconsistent, ability. We would probably find those with the most ability in the Premier League and those with the least in lower or non-league clubs, the latter probably semi-professionals or amateurs. Perhaps it would be prudent to reiterate that those with the least ability are still way ahead of those who do not play football or just dabble in it occasionally. This is not to say that those at the lower end of the skills distribution are rubbish at playing football, just that they, for whatever reason, are not as skilled as those at the opposite end of the curve. And those with average skill, i.e. the greatest number of footballers, are very good, just not quite as good as the most skilled. Does this make sense so far? I hope so.
If we can apply this logic to the skill of footballers, can we not apply it elsewhere, in particular to university students? Surely, in terms of academic ability, we would find those at one end of the curve achieving A and B grades, or 1st-class degrees, the majority in the middle achieving C and D grades, and a tail of those achieving perhaps low D and F grades. We would probably hope to find a normal distribution curve of sorts. We could probably say that those with lower grades have far greater academic ability than anyone who hasn't attended university. We could certainly say that the majority, i.e. those getting C and higher D grades, are good or very good academically when compared with someone who hasn't attended university, though not quite as good as those achieving A and B grades. The assessment grading criteria seem to confirm this: a D grade is labelled 'satisfactory', a C grade 'commended', a B grade 'of merit' and an A grade 'distinguished'. Just to reiterate, achieving a D grade suggests a student has displayed 'satisfactory' academic ability and met the requisite 'learning outcomes'.
Why is it then that degrees at institutional level are measured in terms of 'good degrees', meaning a 1st or a 2.1? At programme level we talk of 'good grades': 'A' grades and 'B' grades. The antithesis of 'good' is 'bad'. This logic, this managerialist measurement, suggests that anything that is not a 1st or a 2.1, or an 'A' or 'B' grade, is in fact a 'bad' grade. Extending the logic further, and drawing on more managerial madness, targets are set suggesting that 80% of students should achieve a 'good grade': a skewing of the distribution curve that would defeat even the best statistician and leave Einstein baffled.
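To illustrate the arithmetic behind that last point (with made-up numbers, purely for the sake of the sketch): suppose marks were normally distributed with a mean of 55 and a standard deviation of 10, and the 'good grade' boundary sat at a mark of 70. Only a small minority would naturally clear it, and hitting an 80% target would require dragging the whole distribution upwards.

```python
from statistics import NormalDist

# Illustrative assumptions, not real grading data:
# marks ~ Normal(mean=55, sd=10), 'good grade' boundary at 70.
marks = NormalDist(mu=55, sigma=10)

# Proportion of students naturally above the boundary.
share_good = 1 - marks.cdf(70)
print(f"Share achieving a 'good grade': {share_good:.1%}")  # roughly 7%

# For 80% to clear the same boundary, 70 must become the 20th
# percentile, so the mean would have to rise to about:
z20 = NormalDist().inv_cdf(0.20)   # 20th-percentile z-score (~ -0.84)
required_mean = 70 - 10 * z20
print(f"Mean mark needed for an 80% target: {required_mean:.1f}")
```

In other words, under these assumed numbers the average mark would need to jump from 55 to roughly 78 for the target to be met honestly; the alternative is that the curve, or the grading, is bent to fit.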
Let me revisit the football analogy. Using the above language and measurements, a comparison would suggest that any player outside the Premier League is in fact a 'bad' player. Not only that, but a target should be set whereby 80% of players are in the Premier League. The other leagues then appear to be irrelevant, despite the fact that they make up probably 90% of the national game and prop up the Premier League in one form or another.
With such a use of language and a desire to simplify the academic world so that it can be reduced to some form of managerial measurement, it is little wonder that perfectly capable students consider their work to be a failure when they earn anything less than an A or B grade or do not achieve a 1st or 2.1 degree.
It is not the students that are failing but higher education and academic institutions in their inability to devise more sophisticated and meaningful measurements. In the meantime, students become more and more stressed and disheartened because their significant academic achievements fail to be recognised as achievements but are instead seen at an institutional level as failures.
The Office for National Statistics (ONS) has admitted to some frailties in its data collection around migration. What a shock it must have been to discover that the manner in which it collected the data was somewhat flawed, so much so that it has now downgraded the data to 'experimental'.
It might seem almost laughable that an organisation that prides itself on, and espouses, data accuracy, and has in the past criticised police recorded figures for being inaccurate (we know they are), has itself fallen foul of inaccuracies brought about by its own ill-thought-out data gathering. The issue, though, is far greater than simple schoolboy errors: for years these figures have had a major impact on government policy around immigration, with calls for greater control of our borders and the inevitable identification of the 'other'.
The figures seem to be erroneous from somewhere between the mid-2000s and 2016, although it is unclear how accurate they are now. New analysis shows that European Union net migration was 16% higher in 2015-16 than first thought. And whilst the ONS admits that it overestimated net migration from non-EU countries, it is not clear by exactly how much.
Such a faux pas led to the story hitting the news: 'EU migration to UK "underestimated" by ONS' (BBC, 2019) and 'Office for National Statistics downgrades "not fully reliable" official immigration data as experts claim figures have been "systematically under-estimating net migration from EU countries"' (Daily Mail, 2019).
So, there we are: the ONS gets statistics wrong as well, and the adjusted figures simply support what Brexiteers have been telling everyone all along. But why release the figures now? When were these errors identified? Surely, if the figures were inaccurate until 2016, the mistake must have been found some short time after that. So why wait until the eleventh hour, when 'Brexit means Brexit' is about to come to a calamitous conclusion? And why those headlines? Why not 'Big mistake: net migration from outside the EU vastly overestimated'?
I'm not one to subscribe to conspiracy theories, but at times it is difficult to overlook the blindingly obvious. So-called independent bodies may not be that independent: the puppet master pulls the strings and the puppets dance. There is little value in headlining facts that do not support the right-wing rhetoric, but great political value to be had in muddying the waters about the EU and open borders.
This discourse ignores the value of migration and simply plays on the fears of the populace; these are well-rehearsed and now age-old arguments that I and many others have made*. The concern, though, is when 'independent institutions' subtly start to join in the furore and the moral compass becomes distorted, subjugated to political ideals. I can't help but wonder what Durkheim would make of it.
* It is well worth watching Hollie McNish’s Mathematics on YouTube.