Saturday, March 26, 2005

Is academic knowledge really any use?

I frequently notice reports in the media about people who have achieved a lot despite lacking a university education. Successful millionaires, authors, media people, chefs, inventors, criminals, artists – all people who’ve managed to get where they are without the benefit of a formal grounding in academia.

Obviously, this should not be allowed, and in some fields it isn’t. The universities largely run the show in science and sociology, medicine and mathematics, so in fields like these you do need your degree.

To my way of thinking there are three possibilities. There are (or there may be?) fields – like perhaps rocket science – where you really do need a university education to do anything serious. The knowledge is cumulative in the sense that to design your rocket you need to be able to do A and B, and in order to do A you need to do C and D, and in order to do B you need to do E and F, and in order to do C you need to be able to do G and H … No short cuts: you really do need to understand everything down to Z. You can’t do it by browsing the web and finding instructions for A and B, because you won’t understand C and D and so on. You need the discipline of a structured course and the support of experts to succeed.
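
If that picture is right, the knowledge forms what a programmer would call a prerequisite graph, and the point can be made mechanically. A minimal sketch, with the skills and their dependencies entirely invented (they just echo the letters above):

```python
# A minimal sketch of the "cumulative knowledge" idea: skills form a
# prerequisite graph, and learning one skill transitively drags in
# everything beneath it. The graph here is hypothetical.

PREREQS = {
    "design a rocket": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E", "F"],
    "C": ["G", "H"],
}

def everything_needed(skill, prereqs=PREREQS):
    """Return the full set of skills transitively required for `skill`."""
    needed = set()
    for dep in prereqs.get(skill, []):
        needed.add(dep)
        needed |= everything_needed(dep, prereqs)
    return needed

print(sorted(everything_needed("design a rocket")))
# ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'] -- web instructions for
# A and B alone are no use if C..H are missing.
```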

Or do you? Einstein did his most famous work from a patent office, not a university post.

At the other extreme are fields which the universities have not colonised. You wouldn’t normally enrol at your local university if you wanted to be a pop star or a TV presenter.

Which leaves an enormous grey area in the middle. Running a business, writing a novel, teaching – the universities would like you to think that courses on business studies, creative writing or education are essential here, but experience and common sense often suggest the opposite. Does an academic background really help here, or does it just help you translate the obvious into jargon?

Tuesday, March 15, 2005

Why am I writing this?

When I was considering setting up this blog a colleague asked me why I wanted to do it. The implication was that it would not score in the RAE (the UK universities’ Research Assessment Exercise) and so was not worth doing. We only think and write stuff for brownie points which can be cashed in for more money.

I find this very depressing. I’ve just been to a meeting to try and get some research moving. It was all about how to find out which journals would score highly and how to organise co-authorships to everyone’s advantage. But almost nothing on the actual topics of our research.

Similarly, there’s no culture of reading and commenting on each other’s work. No real interest in the power of thought, and in how the world can be improved. Just scoring points for a meaningless competition. If we get lots of points we’ll be able to hire more people to do more research to score more points and get more exhausted. But what’s the point of these points?

This is where this blog comes in. It will score no points but I can say what I think.

Monday, March 07, 2005

Marking the exams

Marking now finished. What a relief. I find marking very difficult to stick at. I will mark one question, then get up for a coffee. Or enter the mark on a spreadsheet. Or count the number of questions yet to be marked, or the percentage of the job I’ve completed. Not sure why it is so difficult to force myself to mark exams. But the task is now 100% complete. Just the second marking to come – about which more later.

It isn’t as though I do the job particularly conscientiously. In fact it feels pretty arbitrary. One part of one question on one script gets 6 out of 20 marks. Why? Because they haven’t got the main point, but have scribbled down something vaguely relevant. The next script gets just 3 for the same question. The scribbling seems a little less relevant. Or is it? What do I mean? I should check back. In fact I should be cross-checking all the time. But I don’t, of course. The lure of the end of the pile is too strong.

Again, I am struck by how silly some of my questions are. One part asks candidates to explain how to do something. The next part asks about difficulties and how to get round them – but that should be part of a good answer to the first part, so a good candidate will have little to say here. I try to remember to fudge the marking scheme, but have a nagging worry that some candidates might have given such a good answer to the first part that they had nothing left to say for the second.

Another question is about a formal method of making decisions, applied to menu planning for a person allergic to carrots (they do turn some people red). One of the five menu options, according to the question, is carrot soup. The first part asks for recommendations which can be made on the basis of the information given. The answer I expected – and intended as the right one – was simply not to serve the carrot soup. Few candidates got this right!

Instead they came up with the recommendation to use the formal method, or they thought about their own tastes and the tastes of their friends and came up with a recommended menu (usually the chicken with fried chocolate ice cream) on that basis. Either answer got no marks. They almost certainly dismissed the right answer as too obvious to be worth writing down – as would any sensible person.
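
For what it’s worth, the first step of such a formal method is presumably simple constraint elimination: strike out every option that violates a hard constraint before preferences come into it. A minimal sketch, with invented menu options and ingredients (the paper’s five options aren’t reproduced here):

```python
# Constraint elimination: drop any menu option containing an allergen
# before any scoring of preferences. Menu and ingredients are invented.

MENU = {
    "carrot soup": {"carrot", "onion", "stock"},
    "chicken with fried chocolate ice cream": {"chicken", "chocolate"},
    "mushroom risotto": {"mushroom", "rice"},
}

def viable_options(menu, allergies):
    """Return the options left after removing any containing an allergen."""
    return [dish for dish, ingredients in menu.items()
            if not ingredients & allergies]

print(viable_options(MENU, {"carrot"}))
# ['chicken with fried chocolate ice cream', 'mushroom risotto']
# -- i.e. the one recommendation the data supports: don't serve
# the carrot soup.
```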

The next part asked candidates to describe the process for eliciting taste preferences and health data from the customer. Many thought they had answered this in the first part, and others made up data about typical customers instead of describing how to get it.

There are real confusions here. The obvious common-sense answer is right for the first part – don’t serve carrot soup – but not for the second, where the obvious answer that everyone is going to like chicken with fried chocolate ice cream is deemed unacceptable.

I really must set clearer questions next time – questions which make it clear what can be assumed, what can’t be, and what type of answer is expected. But then the questions may turn into philosophical treatises, and the difficulties may well increase. Help!

Wednesday, March 02, 2005

Oh what rubbish a lot of this research is

I’ve just had to dip into the research literature again. Always depressing.

It’s easy to mock a lot of the qualitative style of research. Talk to a few friends and write it up with a liberal sprinkling of long words like social constructionism, hermeneutics, ethnography, phenomenology and so on. Then it counts as more than the prejudices of your friends, or an excuse to visit gastro pubs (the focus of one colleague’s research): it’s empirical data, and every word is recorded and transcribed and taken very seriously.

But at least the reader (a rare beast in the research game, but we have to believe they exist) gets to know something about the topic. They can read about the beer and the food. The so-called positivist style of research is usually far worse, partly because positivists assume it’s so much better. I’ve just come across one article which is fairly typical. It’s on the newly discovered broccoli-carrot ratio as a predictor of gym membership.

According to the article the researchers looked at lots of possible variables to see if they could relate them to gym membership. Number of children, marital status, education level, and so on and so forth. The only significant relationship was with this ratio between the amount of broccoli consumed and the amount of carrot. The p-value cited was less than 0.01% – so they obviously think it’s pretty important.

It isn’t, of course. The significance level just tells you that the relationship is unlikely to be down to chance; the detailed statistics show it is actually a very small one. People who eat more broccoli than carrot are ever so slightly more likely to belong to a gym. That’s it.
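
The point is easy to demonstrate with a simulation: given a big enough sample, a vanishingly small relationship comes out as “highly significant”. The numbers below are all invented, since the article’s data isn’t to hand:

```python
# With a large sample, a tiny correlation is "highly significant".
# All numbers here are invented; the article's data isn't given.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000  # a big survey sample

# Hypothetical broccoli-carrot ratio, plus a gym-membership indicator
# that depends on it only very weakly.
ratio = rng.normal(loc=1.0, scale=0.3, size=n)
latent = 0.03 * (ratio - ratio.mean()) / ratio.std() + rng.normal(size=n)
gym = (latent > 1.0).astype(float)  # roughly 16% are members

r, p = stats.pointbiserialr(gym, ratio)
print(f"r = {r:.3f}, p = {p:.1g}")
# Typical output: r around 0.02 with p far below 0.0001 --
# "statistically significant", yet the ratio explains well under
# 0.1% of the variance in gym membership (r squared).
```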

They make no comment on why it might be so, on what use knowing about this relationship might be, or on whether there’s some causal mechanism driving those who eat more broccoli than carrot to the gym. And there’s no mention at all that this is a tiny correlation. They assume that because esoteric statistical methods show it’s statistically significant, it must be important.

Not so. And unfortunately this paper is typical of the stuff we’ve got to force our students to read in the name of academic rigour.