Friday, 12 November 2010

What old men wear

Each generation lives in its own world of customs and tastes, carrying on into old age the habits and styles of dress they learnt in their youth. On the few occasions I go to the supermarket in the morning, it is full of the very elderly - people in their 80s and 90s trundling around behind their trolleys. What I notice most are the strange clothes worn by the old men: shapeless fawn jackets, often worn with fawn trousers and shiny black shoes. Some wear hats, flat caps being the most popular. This style of dress is not worn by any other age cohort, and certainly not by those in their 60s, who are usually seen in jeans and trainers and who rarely wear hats. 

This shows the peril of talking about ‘the elderly’ as if they were a single undifferentiated group. It is closer to the truth to say that there is a major cultural divide between those in their 60s and those in their 80s, resulting from the great postwar changes in British society. A man aged 85 would have been born in 1925, would have served in the armed forces in the Second World War (or at least in some war-related occupation), would have been raised at a time when what is now called ‘leisure wear’ was unknown, would have listened to swing music and crooners as an adolescent (teenagers had not yet been invented), and would have thought about sex before the contraceptive pill existed. Someone who is 65, by contrast, was born after the War ended, listened to rock music, wore jeans and casual clothes, and met women whose sexual desire was no longer constrained by the fear that it would lead to an unwanted pregnancy. A man in his 80s came of age fearing death in an actual war. A man in his 60s was raised in peacetime but feared nuclear war.

This is not to suggest that a man in his 60s resembles younger age cohorts. I was born in the suburbs of Birmingham in 1946. As a child, I lived in a world in which virtually no-one owned a car. The roads were therefore empty, and we played in them whenever we were not at school, until it got dark. We walked to primary school by ourselves at quite a young age, and came home during the dinner break for a substantial meal cooked by our mothers, who of course did not go out to work. Children who stayed indoors were a matter of concern to us all. We knew of hardly any children from single-parent families, no-one with skin darker than our own, and no-one whose first language was not English. The only celebrities were film stars, and television was broadcast for only a few hours a day. There were no computers, of course, but we read books and comics.

People younger than those of my age live in a world with new fears. There is the unspecific threat of some sort of ecodoom. But local fears are more important, and constrain the lives of children. Their parents are rightly scared of traffic, but also have fantasy fears that the streets are full of murderous paedophiles. Children still play in the street, particularly in quiet towns and villages, but spend much of their time in front of computers and television screens. The clothes they wear seem to designate membership of one or other youth tribe, each associated with a particular kind of music. This seems a more elaborate world than the mods and rockers I remember. Still, none of them wear shapeless fawn jackets.

Wednesday, 3 November 2010

The curse of the course

Beware of your metaphors, for they shall make you their slaves. We use metaphors so often in our everyday speech that we fail to recognise how they smuggle implications into our thinking. One example (discussed in a previous blog) is ‘stress’. Another, much used in education, is ‘course’. This word, presumably taken from horse-racing, has multiple smuggled implications. On a racecourse, all the horses start at the same line at the same time. They all jump over a pre-determined sequence of fences in the same order. They all complete the course at the same finishing line, being ranked according to the order in which they finish.

Applied to education in universities, the metaphor implies that groups of students on a course all start and finish their studies at the same time, progressing through their experience of learning in the same order and jumping over the same set of assessments. Students are ranked at the end of the course, with a mark taking the place of speed of completion. However, students who do not complete the course in the same time as the others are regarded as non-completers and fail. Let’s challenge each of these smuggled implications.

In the first place, there is no need for students to start and finish a programme of study at the same time. Many people wish to study part-time and combine university education with work. This usually means that their studies take at least twice as long as those of a full-time student. Many part-time students are mature and have families. They are therefore more likely to need a break in their studies because of childbirth, a change of employment, and so on. This is administratively inconvenient for universities, but part-time study may soon be the only way in which many people are able to pay for their studies. Part-time study has another advantage: academic education can be dovetailed with vocational training, enabling students to apply their new skills in the workplace and increase their productivity. Universities began as, and largely continue to be, places where people are educated in the knowledge, skills and values required for particular occupations. But the domination of full-time study in British universities has split vocational education from vocations. As a result, employers complain that graduates lack the basic skills needed to perform their work, while many graduates fail to find employment appropriate to the subjects they have studied for three or more years.

Secondly, not all students need to follow the same sequence of learning materials. In many areas of knowledge, some subjects do indeed have to be understood in sequence (the ‘building block’ approach). But this is not always true. In many of the programmes and modules I have taught, much of the material could be studied in any order. Other approaches to learning require students first to attain an overall (if rather simplified) perspective on the subject before studying a series of individual areas in more detail, leading them to develop a greater understanding of its complexity. In such cases, students need a shared introduction to the subject and in some cases a shared conclusion, but can explore the remainder of the curriculum in any order and at their own pace.

Thirdly, ability should not be confused with speed of completion. Some people simply take longer to learn, but are as capable at the end of their studies as the fast finishers. Why should they be penalised, or categorised as failures, because they take six months longer than the average student to acquire the necessary skills, knowledge and values? One reason is that universities do not have a defined threshold of skills and knowledge which students should attain. Instead, they rank students at the completion of their course by ‘first-class honours’, ‘upper second’ and so on. Yet this ranking system has become increasingly meaningless because of grade inflation. In the last ten years in England, the proportion of students awarded first-class honours has doubled, while another 60% now receive an upper second. This has happened at a time when the proportion of school-leavers entering universities has increased substantially and teaching hours per university student have decreased.

Why is the metaphor of the course still so dominant in British higher education? One reason is that it is convenient for universities and for the government agencies that fund them. Full-time students can all be processed efficiently on a three-year conveyor belt, exams can be set at the same time for everyone, and universities can be freed from the difficult business of co-ordinating academic and vocational education with employers. Governments can fund universities using a simple block grant based on the assumption that the great majority of students are full-time. Indeed, the current funding system in England disadvantages part-time students.

Yet there are types of university education which do not correspond to the course metaphor. Professional training degrees (such as those in medicine, nursing, social work and the professions allied to medicine) require students to complete part of their training in hospitals and other workplaces, where they acquire skills under the supervision of senior professionals in health and social services who have been co-opted into university education. Degrees of this kind also aim to produce students who meet a defined standard of professional competence, rather than ranking them by the marks they achieve on their assignments.

This model could be expanded to other degree programmes. Then, instead of students being given a general education with limited vocational training followed by employment in a lowly administrative post, they could get a job and study for a vocational qualification part-time. This would probably require more distance learning, but we have excellent institutions in this country which can provide it. It would also free up the large areas of our cities which have been given over to student rentals and are occupied for only 30 weeks each year. This would have the very useful side-effect of doing something practical to reduce homelessness.

See: My life as a steam engine


Friday, 22 October 2010

The ‘real me’ and the ‘actual me’

We sometimes explain a desire to break with the routines of life, to do new things, or even to go on holiday as a desire ‘to find ourselves’ or ‘find the real me’. Since, by definition, we have yet to find the real me, we are usually rather vague about what it looks like, which means in turn that we can never be sure whether we have found it or not. But we talk as if the real me is our authentic identity, more creative and sensitive than the way in which we are usually experienced by others and by ourselves. For some of us, the real me may be a more decisive and courageous version of ourselves. But the common element is that we use the term ‘the real me’ to designate the type of person we wish to be rather than our actual identity. Few if any of us seek a real me that is a stupider, more insensitive, or more boring version of ourselves.

It is not clear whether finding the real me is ever successful. When we are on holiday, we are often required, in a short space of time, to change our way of life drastically, encountering new foods, new places, and new ways of relaxing. But we soon re-assert our routines - after a few days of disorganisation, we end up going to the same harbour café to drink a glass of San Miguel at the same time each day, having a meal at the same time (although perhaps at a different time from at home), and looking at the same beautiful view each evening. There is a sense that the holiday is settling down as we experience the comfort of the familiar.

Our sense of the real me might therefore shape our behaviour and aspirations, but is ultimately kept close to our actual me by our need for routine. The reactions of others are also an important check on fantasy. Claiming we are a great singer when we are not will usually invite ridicule. We can only persist with this illusion if we categorise others as exceptionally misguided or hostile. Like Don Quixote, we thereby invite humiliation, which we can see when people are recruited for programmes like The X Factor or The Apprentice.

As we get older, however, a subtle change occurs in the relationship between our sense of the real me and the person we actually are. We look in the mirror and see a younger version of ourselves. Less biased sources of information, such as photographs, come as a shock and are rejected. We still believe we are capable, as we once were, of striding over hills and running down streets, when now we can only stumble. When people ask us if we need help, we refuse, because we believe our ‘real me’ is capable and independent. We hold dearly to our discrepant identity so that, like the rejected contestants on The X Factor, we become angry when faced with the evidence of our incompetence. So many old people reject the help they need, and decline into isolation as a means of keeping alive their real me.

Friday, 15 October 2010

Enlightenment and Authority


In 1784, the philosopher Immanuel Kant completed an essay on the nature of ‘enlightenment’ and its implications for society. He defined the term to mean the personal transformation of people’s way of thinking, not just by the accumulation of learning, but by individual people’s willingness to derive their own conclusions about life based on their reason, intellect and wisdom. He equated ‘enlightenment’ with intellectual maturity, which he contrasted with depending on others (such as religious authorities) for one’s beliefs and opinions.

Kant was aware of the problems generated by enlightenment. Most people (even those in authority) were not yet enlightened by his definition. But if enlightenment became common, then there would be problems in maintaining order - people might choose to disobey their rulers, women their husbands, and children their parents. The answer was for enlightenment to be coupled with obedience. People could be enlightened in their private life, but should adhere to convention and consequently express accepted views in their work and public duties. The exemplary society in this respect was the Prussia in which he lived, and Kant designated his time as ‘the Age of Frederick’, named after the authoritarian king of that land.

Kant’s conclusion that enlightenment is compatible with, and may even require, authoritarian government has been a common stance among public intellectuals since his time. Intellectuals may not support such governments with the same enthusiasm as Kant, but they have limited power in wider society, and therefore depend on the power of their rulers to ensure the application of their ideas. This has been particularly the case when intellectuals have sought to achieve major changes in the lives of the rest of us. Given the ‘unenlightened’ character of most people, and hence their probable resistance to such schemes to improve their lives, it has been especially tempting for intellectuals to support the use of authoritarian methods to create a new kind of person. Historically, this has often involved the mass slaughter of many of the older kinds of person. Popular resistance could be rationalised as ‘lack of enlightenment’, ‘superstition’, ‘false consciousness’ and so on.

Even in more democratic societies, intellectuals may find authoritarianism tempting as a means of achieving a better life for the rest of us. Jeremy Bentham proposed an extraordinarily dehumanising regime for prisoners called the ‘panopticon’. Later intellectual reformers proposed that mental illness and intellectual disability could best be managed in vast authoritarian institutions. In the 20th century, public intellectuals argued for massive slum clearance projects, fragmenting communities and re-housing people in poorly-maintained blocks of flats built miles away from their families, employment, entertainment, shops and so on. The Red Road flats in Glasgow (shown above) were one of the more extreme demonstrations of this kind of social engineering.

What these exercises have in common is the belief that the good life can be discerned by reason, and that it can be applied in an undifferentiated way to whole categories of people such as ‘the peasantry’ or ‘the working class’. This simplified view of the complexity and diversity of people may be a product of seeing the world through books and political debates rather than through observation and direct experience. In other words, many intellectuals are themselves unenlightened by Kant’s definition.

Wednesday, 11 August 2010

England’s great divide walk

Several years ago, I read Stephen Pern’s excellent book, The Great Divide. This described his walk along the watershed between the Pacific and Atlantic Oceans in the USA. He began in the Mexican desert, and followed the crest of the Rocky Mountains (or as close to it as he could manage) to the Canadian border. If he had continued North, passing West of Banff, he would have come to the Columbia Icefield. This glacier is the origin of rivers that flow into three oceans - not just to the West and the East, but also to the North. The main Northbound river is the Athabasca, and you can follow it along a road called the Icefields Parkway as it descends from a mountain torrent to a wide sweeping river between meadows near the town of Jasper. After Jasper, the river travels hundreds of kilometres North, eventually joining the Mackenzie river system and flowing through tundra into the Arctic Ocean.

The courses of rivers and their watersheds are easy to follow on the maps of a vast empty country like Canada, but much harder in a crowded one like England. So what would be the route of a great divide walk in England? Most people would guess that the Northern section would be close to the Pennine Way. This is generally true, although the watershed actually crosses the Scottish border several miles West of the Pennine Way, near Kielder. From there, it heads South along the boundary between Cumbria and Northumberland, towards Once Brewed near Hadrian’s Wall. The great divide walk would then follow the Pennine Way South to Edale, close to the crest line of the Pennines, across Saddleworth Moor and the High Peak. But what happens after you reach Edale?

Following the map, you can trace a strange circuitous route around the heads of lowland river systems. First, you would walk South around the Western edge of the Peak District, near the Roaches, and then across Staffordshire to the West of Stoke and Stafford. From then on, my imaginary long-distance path is difficult to trace, as it passes through the Black Country and over Frankley Beeches to the Lickey Hills South of Birmingham. From there, it would swing East and then South in a long arc around the catchment of the Warwickshire Avon, eventually reaching the Cotswold escarpment and the Cotswold Way. You would follow this lovely long-distance path for most of its length, until a few miles North of Bath. The great divide path would then need to head East around the watershed between the Gloucestershire Avon and the Thames until you reach an area called the North Down near Devizes. This is England’s equivalent of the Columbia Icefield, albeit a low-lying hill without ice. From it, rivers head West to the Atlantic, East to the North Sea, and South to the English Channel.

As human beings moved back into Britain after the end of the last Ice Age, they would have travelled along the dry watersheds to avoid the marshy and tree-clogged valleys. The North Down and the nearby Salisbury Plain would then have been the great junction of this Stone Age transport system. Early inhabitants marked this busy place with rings of standing stones, white horses carved into chalk hills, barrows and mounds. It was a kind of commercial, political and religious city, but one dispersed over several hillsides and occupied seasonally. It would be a superb end to my Great Divide Walk. While I walk it in my imagination, others may do so on foot.

Friday, 6 August 2010

Research without fear

A week ago, my computer’s connection to the Internet ceased. This was puzzling, because my son was still able to log on from his laptop via the home wireless network based on my computer. Eventually, I traced the problem to the Norton Security software I had installed. With remarkable success, this had prevented any viruses from infecting my computer by blocking access to all websites. Computers, of course, mimic the humans that create them. The Norton approach is found among experts on ‘security’ who argue that the only way to protect our liberties is to lock up without trial people who might possibly be terrorists, give the police free rein to assault and kill peaceful demonstrators, and subject all citizens to permanent CCTV surveillance. A similar destructive enthusiasm is found in the system used in the National Health Service for assessing the ethics of proposals for research.

Just as there are real computer viruses and real terrorists, so there is a true history of unethical research on patients. The most notorious case, described in every book on medical ethics, was the Tuskegee Experiment carried out by the US Public Health Service, in which some 400 poor black men infected with syphilis were monitored from 1932 onwards. Although penicillin had been identified as an effective treatment by the mid-1940s, none of the subjects of the research were informed or treated, leading to the infection of their spouses and other sexual partners, and of their children. The whole ghastly racist experiment only came to an end in 1972, when a whistleblower took the story to the press.

This case is a warning that medical scientists are no more ethical than other people, particularly when the subjects of their research are poor and from racial minorities. Bad ethical practice in research still exists, although usually in a less extreme form than the Tuskegee Experiment. There have been cases of people being included in clinical trials without their knowledge, of people being denied effective treatments, of pointless research being inflicted on people, and so on. To prevent these kinds of problems, the National Health Service has set up a complex network of ethical committees to assess all applications for research thoroughly. This is backed by a parallel system of ‘research governance’, which ensures that the recommendations of ethical committees are followed by researchers, that research is insured by its ‘sponsor’, and that the cost implications of the research for the NHS are taken into account.

As governments have attempted to extend the protection of human rights (the 2005 Mental Capacity Act, the 2004 Human Tissue Act, and others), so the work of the ethical committees has become more demanding. Nevertheless, the system has improved greatly in efficiency in the last few years, and the various local ethical committees strenuously seek to protect the public from unethical research. However, there is a problem - the same sort of problem encountered by all who seek to avoid risk and create a world free of fear.

All autonomous or creative human action, whether individual or collective, involves risk and therefore danger. The risk may be very small, but the resulting fear may lead to disproportionate and even harmful precautions. In my village, both the primary and secondary schools are within easy walking distance for most children. But many parents drive their children to school because they perceive the quiet roads of a peaceful village to be dangerous and the pavements to be crowded with paedophiles. As a result, the roads through the village become congested with large 4x4 vehicles at the times the schools open and close. This has the effect of making travel more dangerous at precisely these times, even for those children and parents who do walk to school. Disproportionate precautions of this kind are particularly common among those with a professional responsibility to protect the rest of us. After all, it seems common sense to believe that one cannot be too safe or too ethical.

In the case of ethical committees, this can result in extreme precautions being invoked for the simplest and least offensive of research projects. One of my master’s students (a qualified and very experienced child mental health nurse) proposed to interview a small number of experienced and qualified paediatric nurses about their experiences of managing children admitted to accident and emergency following overdoses. The ethical committee expressed concern about the impact of these interviews on the nurses’ state of mind, and eventually insisted that an independent counsellor be made available to alleviate any distress. Another student wished to send a questionnaire to fellow therapists about the impact of changes in NHS employment practices on their careers. This has required six months of applications, and permission from dozens of separate NHS trusts. I know of many similar cases.

Why do ethical committees need to be involved when members of staff pose a few questions to other members of staff? After all, if we applied the same procedures to any other walk of life, all collective human activity would cease. One reason may be that all these research projects concerned the care of either children or people with an intellectual disability. Both are included in the ever-expanding group deemed ‘vulnerable’, a term that flashes warning lights to some ethical committees.

I raised these problems with a team sent to carry out a routine quality assurance review of our master’s programme. I was advised that the solution is to discourage students from carrying out research of the kind that requires ethical committee approval. This is of course the Norton Security solution, and the ultimate triumph of risk avoidance - the danger of unethical research in healthcare is completely eliminated by preventing any research from taking place. The cost of doing so is that we continue to treat children and people with an intellectual disability with medications for which there is limited evidence of effectiveness, and that we fail to investigate the reality of the care they receive behind the bland brochures and policy statements. As a result, their real vulnerability to poor-quality health and social care is increased.

Wednesday, 28 July 2010

The wisdom of economics

Economics rarely gets a good press. This is probably because of the regrettable tendency among economists to pose as modern soothsayers: predicting (usually wrongly) the long-term direction of the stock markets, which countries will go bankrupt, and which economies will prosper. But there is great wisdom in economics, which should be known more widely. Here are three concepts from economics, which, if applied, would make British universities far more effective.

The first concept is marginal cost: the additional cost incurred in producing one extra item on a product line. A related concept is marginal income, the additional income received from selling that one extra item. As a general rule, firms should expand production as long as the marginal income from doing so exceeds the marginal cost. This seems obvious, but it is not how many organisations (let alone universities) operate. I have had very frustrating discussions in which decisions on whether to expand student numbers on a course have been based on the average cost per student rather than on the (usually very low) marginal cost of adding one extra student to a course that is already in operation. To make matters worse, the government financially penalises those universities which do expand student numbers beyond the ‘planned’ targets set centrally.
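
To make the arithmetic concrete, here is a minimal sketch in Python. Every figure in it is invented (the fixed costs, the variable cost per student and the fee are hypothetical, not how any university actually costs its courses); it simply shows why a decision based on average cost can point the wrong way.

    # Toy sketch with invented figures: average cost versus marginal cost
    # when deciding whether to admit one more student to a course already running.

    FIXED_COSTS = 200_000            # hypothetical: staff, rooms, administration
    VARIABLE_COST_PER_STUDENT = 300  # hypothetical: marking, materials, library use
    FEE_PER_STUDENT = 3_000          # hypothetical marginal income from one extra student

    def total_cost(students: int) -> int:
        return FIXED_COSTS + VARIABLE_COST_PER_STUDENT * students

    def average_cost(students: int) -> float:
        """Total cost divided by the number of students."""
        return total_cost(students) / students

    def marginal_cost(students: int) -> int:
        """Extra cost of admitting one more student to the running course."""
        return total_cost(students + 1) - total_cost(students)

    students = 50
    print(f"Average cost per student: {average_cost(students):,.0f}")             # about 4,300
    print(f"Marginal cost of one more: {marginal_cost(students):,}")              # 300
    print(f"Admit another student? {FEE_PER_STUDENT > marginal_cost(students)}")  # True

On these made-up numbers, the average cost (about 4,300) exceeds the fee of 3,000, so a decision based on average cost would refuse the extra student; yet the true marginal cost is only 300, so admitting them leaves the course better off.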

The second concept is comparative advantage. This was originally developed to explain how trade can produce additional wealth if each trading partner specialises in producing the goods it can produce at the lowest cost relative to its other possible products - that is, at the lowest opportunity cost - even when another partner could produce everything more cheaply. Comparative advantage can be adapted to analyse the kind of ‘trading’ of activities within an organisation. Suppose an academic department has two members of staff: Cain is an excellent researcher and a mediocre teacher; Abel is a mediocre researcher and a mediocre teacher. What usually happens is that both are required to do their share of research and teaching. As a result, all the teaching in the department is mediocre, while half the research is excellent. If, however, the university applies the wisdom of economics, it will let its staff specialise. In that case, all the teaching will remain mediocre, but all the research will now be excellent. Of course, it would be even better if Abel had been an excellent teacher. But who knows - the more he specialises, the better he might get.
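
The Cain-and-Abel example is really a small piece of arithmetic, so here is a toy sketch of it in Python. The quality scores are invented purely for illustration (read 9 as ‘excellent’ and 5 as ‘mediocre’); the point is only that specialisation raises the overall quality of research without lowering the quality of teaching.

    # Toy sketch with invented quality scores for the Cain-and-Abel example.
    quality = {
        "Cain": {"research": 9, "teaching": 5},  # excellent researcher, mediocre teacher
        "Abel": {"research": 5, "teaching": 5},  # mediocre at both
    }

    def shared_duties() -> dict:
        """Both staff split their time equally between research and teaching."""
        staff = list(quality.values())
        return {
            "research": sum(p["research"] for p in staff) / len(staff),
            "teaching": sum(p["teaching"] for p in staff) / len(staff),
        }

    def specialised() -> dict:
        """Cain does all the research; Abel does all the teaching."""
        return {
            "research": quality["Cain"]["research"],
            "teaching": quality["Abel"]["teaching"],
        }

    print("Shared duties:", shared_duties())  # {'research': 7.0, 'teaching': 5.0}
    print("Specialised:  ", specialised())    # {'research': 9, 'teaching': 5}

On these invented scores, requiring both to do everything gives an average research quality of 7 and teaching of 5; letting each specialise where his comparative advantage lies raises research to 9 while teaching stays at 5.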

The third economic concept is satisficing. This involves seeking a solution to a problem that is adequate, at the least cost in time and effort. Satisficing is therefore an alternative both to seeking the ‘rational’ solution (which would involve comparing all possible alternatives, whatever the cost) and to plumping for the ‘excellent’ solution (which involves selecting the best possible outcome irrespective of cost). Satisficing is what most people probably do most of the time when they make decisions about where to live, whom to marry, which university to attend, and so on. But universities are temples to rationality and excellence: the greatest contributions to knowledge have often come about because academics have taken infinite pains to collect data and have considered radically new ways in which the world and the cosmos can be explained. Such commitment, essential as it may be in scholarship, can have a damaging effect on how academic committees function. Every committee member thinks up ways in which the outcome of a decision could be even more excellent, and deliberates on whether there might be options no-one has yet thought of. It is hardly surprising that university leadership becomes exasperated and takes decisions without discussion, or simply hands things over to the administrators.