Friday, 22 April 2011

The curse of the hero scientist

I have recently read David Edgerton’s book Britain’s War Machine (with thanks to Worcester Public Library). This looks at weapons production in the UK during the Second World War, and along the way shoots down several common myths. For instance, Edgerton notes that Britain was well prepared for war in 1939, had an army that was more mechanised than Germany’s, was supported by a vast empire, and did not therefore ‘stand alone’ after the fall of France. The most intriguing part of the book for me, however, is the discussion of innovation in weaponry. The British Government, from Churchill downwards, was fascinated by innovative weapons and sought to make the best possible use of scientists. Yet although the most effective scientific and technological developments were achieved by teams working in industry, the military and the civil service, governments clung to the idea of the hero scientist: inspired, and working alone. The figure of the hero scientist was encouraged after the War in films and books which incorrectly portrayed people like Frank Whittle and Barnes Wallis as lone visionaries struggling without official support against bureaucratic obstruction. Documentaries have subsequently given undue emphasis to innovations such as the Mulberry harbours and the ‘bouncing bomb’, which had only a marginal impact on the outcome of the War.

Edgerton argues that this idea of the hero scientist not only diverted resources from more effective forms of weapons production, but also led to an emphasis on equipment rather than effective strategy and tactics as the means of winning battles. The results could be seen in early 1942, when British forces were defeated at Singapore by a smaller and poorly equipped Japanese army.

The idea of the hero scientist has been something of a curse in Britain ever since. Grand innovation imposed from above has often been prized above step-by-step technical improvements or enhancing the creativity and skill of the workforce. This can be seen in the British motor industry, which, in the generation following the Second World War, produced many innovative designs: the sports car, the Mini, front-wheel-drive cars, cars with tailgates, the luxury 4x4. Yet the industry paid much less attention to improving the reliability of its products by improving the organisation and skills of the men and women who built the cars. As a result, British cars lost market share, and British-owned companies went bankrupt. Most of the British motor vehicle industry is now owned by the Japanese, who focussed on redesigning the technology by which cars are produced, in co-operation with a skilled and motivated workforce.

Although British industry has now generally adopted Japanese production methods, the concept of the hero scientist is still dominant in many areas of British life, and particularly in the funding of higher education. There are two main streams of public funding for research: the Research Excellence Framework (‘REF’, formerly the ‘RAE’), which allocates cash to departments on the basis of past performance; and the research councils, which allocate most of their funds as research grants to named individuals (called ‘principal investigators’) on the basis of expected future performance. Both methods are phenomenally expensive and capricious. I have looked at some of the problems with the RAE and the REF in previous postings. What about the allocation of research grants to named individuals by the research councils?

The cost here lies not just in the transaction costs of adjudicating applications for research grants (large panels of academic experts, plus the research council overheads they incur), but in the time spent by university staff throughout the country in preparing the 80–90% of applications which are not approved for funding. Most of these failed applications are of high scientific quality, which means that the allocation process is determined in the end by politics, horse-trading and reputation rather than by quality. Those who get the money are not necessarily the best scientists, but the scientists who are best at getting the money.

Research council funding also destabilises university research. Grants go to named individuals (or groups of individuals), which means that they take the grant with them if they move to another university. Since grants are mainly used to pay the salaries of the researchers and the postgraduate studentships associated with the project, these staff are usually required to move with their principal investigator, rather like a clutch of medieval peasants following their lord and master. All this creates a transfer market, in which hero scientists are bribed to move from one university to another. The effect on the department that loses in this bribery race can be dramatic: in a short time, it can lose not just its senior academic staff, but also a large chunk of its funding, several researchers and even some postgraduate research students.

This whole business is often justified in the same way as the free market in senior executive salaries is justified - that the best prizes go to the best people. But good research is not the product of just a few hero scientists, but of teams with different skills that can work effectively together on a sustained basis. These skills include, for instance, the abilities to conduct thorough field and laboratory work, to analyse data critically, to present results effectively, and to apply results for clinical or commercial benefit. They will not all be present in one person, yet many members of a research team, however capable, will never be able to get major research grants in their own name. Universities have little commitment to developing long-term multi-disciplinary teams, because funding is unstable and because it is much cheaper to bribe a grant-holder to move in, with their entire team, from another university than to spend years developing the skills of their own staff.

Indeed, the whole research enterprise in the UK is staggeringly wasteful of human talent. New scientists proceed through three years of doctoral research (usually funded by research grants and other public money). This involves some training in research, in return for which the university gains a very cheap source of labour. After doctoral training, only a minority of the new graduates succeed in gaining university employment, usually as a postdoctoral research fellow funded from a time-limited research grant. This may be followed by other time-limited research fellowships, until the great majority drop out of academic life altogether. Years of research training and experience are thus lost - not because the staff who leave are substandard, but because they have the drive to find a job with an employer who, unlike the university sector, rewards talent and hard work with good pay and decent contracts of employment.

There is an alternative funding model. The research councils do grant some funds on a long-term basis to designated centres, and this model is also used with great success by charities such as Cancer Research UK. By providing stable funding, such centres can maintain multi-skill research teams and sustain the social capital needed for effective research.

We should therefore expand this model by concentrating public funding for research on a limited number of specialised research centres, which would be responsible not just for research, but also for training scientists to a high level of skill, retaining them in science, and applying scientific knowledge to the wider world. This long-term funding should replace grants to named individuals and the whole wasteful farce of the REF. Academics working within and outside these research centres would of course be able to continue to seek funding for research from charities and other bodies. Some, however, might realise that there is more to scholarship than the rather narrow concept of ‘research’, and may even come to understand that the main function of universities is to pass on knowledge, values and skills to the next generation. This task is not as heroic as being a lone scientist, but plays a greater part in maintaining our civilisation.



See also: http://stuartcumella.blogspot.com/2009/10/great-crackpot-ideas-of-past.html
http://stuartcumella.blogspot.com/2010/06/status-wars-in-universities.html

Sunday, 10 April 2011

The Rise of Monarchy

Every year, the Economist Intelligence Unit compiles a ranking of cities according to their ‘liveability’. This is based on data relating to personal safety, availability of goods and services, and infrastructure. Eight of the ten most liveable cities in the world are in countries with monarchies - actually with a single monarch - Queen Elizabeth II. It is notable that although Vancouver is top of this ranking (with two other Canadian cities in the top ten), the highest-ranked city in the USA (Pittsburgh) came only 29th. This surely is evidence that Americans were wrong to adopt a republican form of government, and should have stuck with a monarchy like their better-governed neighbours to the north.

Why should constitutional monarchy of the kind found in the Commonwealth, Japan and Western Europe be associated with good government? One reason is that the monarch is an agreed national arbiter who, by inheriting the post, avoids all the enmities and political debts incurred by those who must compete for power to get to the top. The very lack of power of the monarch also means that he or she does not get blamed for the disappointments and disasters inevitably associated with government. As a result, the monarch can provide a symbol of national unity and continuity in political life.

But monarchy also has a sort of magic, however humdrum the people who occupy these positions. This is partly a product of the wealth and status of the head of state, but also of the very continuity of the post. In the past, kings were seen as divinely blessed, possessing some special virtue or charisma that set them apart from the rest of us. In England and France, it was even believed that being touched by the king would cure a person of the disease scrofula. This magic continues even with the relatively powerless constitutional monarchs of today, making them (with their families) a focus for imagination, fantasy and envy.

There are several other hereditary regimes in countries that are nominally republics. In India, the Gandhi family possess the magic of royalty (at least in the eyes of several million supporters of the Congress Party). Being the senior member of this family is deemed sufficient qualification for both Party leadership and the post of Prime Minister, despite inconvenient facts like being a foreigner or having no experience of political office. In this respect, the Gandhi family also resemble monarchies, where a person can succeed to the most senior post in the land on the sole qualification of being the eldest son or daughter of the deceased ruler.

There are other hereditary regimes which have the money and the power but not the magic. North Korea will soon be ruled by the third generation of Kims. Syria is ruled by the son of the preceding dictator, and other Arab states like Egypt and Tunisia would have followed in the same manner if their people had not overthrown their absolutist rulers. In these countries, hereditary succession exists because gangster leaders can only trust their closest relatives. But that of course is how virtually all our European monarchies began several centuries ago.