Pointless Punditry - Further Reading

Discussion 12_001d_1
29th February, 2012
The first "Pointless Punditry" discussion document was published on the website in February 2011. It detailed a range of problems B2yoR perceived with the type and overall quality of Analysis and Punditry that the dedicated Racing media serve up in Britain. That is all types of media including the two TV channels along with 'published' sources whether they be online or printed on paper. The document also tried to put the issue in a wider context of information processing and presentation in general. Given that 'Pundits' face the same issues in many fields. Also, those wishing to present, or make personal gain from, 'information peddling' have a wide range of opportunities and, increasingly, a large set of technological aids to assist them in reaching a global audience. It also touched on how humans think about information they receive and about the authority figures they get it from. The mix of the two, human cognition and available Punditry information, being an unfortunate cocktail which can produce a final result that is worse than either of the inputs.
Another year's experience has certainly not changed the basic views expressed in that first document. Given that the media had been offering the same things for many years it is no surprise to report that nothing has changed in the last 12 months. The same cast of people pushing exactly the same lines without thought or variation. No sign that any of them are trying anything different or doing some research, with the very rare exception. You could pretty easily write a computer program version of most of the punditry offered. Feed in the 500, or so, stock phrases and tag them with the right hooks so that they are picked out and daisy-chained together given the right stimulus. Like a race video replay to prattle about. Pretty depressing, there must be more to it than that.
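Just to labour the point, here is a tongue-in-cheek sketch of what such a program might look like. Every stock phrase and hook below is invented for illustration, though regular viewers may recognise the genre:

```python
import random

# Stock phrases tagged with 'hooks'; a stimulus (a bare race-replay event)
# picks phrases out and daisy-chains them together. All invented here.
STOCK_PHRASES = {
    "fast_early_pace": ["They have gone a right good gallop early.",
                        "No hiding place at this pace."],
    "photo_finish":    ["You simply could not split them.",
                        "That is what photo finishes are for."],
    "favourite_wins":  ["The market got it spot on there.",
                        "Connections will be delighted with that."],
}

def pundit(events):
    """Daisy-chain a stock phrase for each event that has a matching hook."""
    return " ".join(random.choice(STOCK_PHRASES[e])
                    for e in events if e in STOCK_PHRASES)

# Feed in a race replay's 'stimulus' and out comes the punditry.
print(pundit(["fast_early_pace", "photo_finish", "favourite_wins"]))
```

Scale the dictionary up to 500 entries and you have a booth occupant who never needs a tea break.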
One of the most dispiriting examples would be watching the Attheraces chimera - Chapman Boyce - sitting in a booth spending hours of each day pontificating about racing in their own limited way. Using a small set of shoddy ideas built on anecdote, personal bias and desperation to fill time. But, they have a key position in racing and unlimited time to misinform, or brainwash, the racing audience so their unchecked chatter becomes the orthodox view. They then spend a lot of time reading out e-mails sent to them by unthinking people who just regurgitate the same views back to them. In one long, grisly, love-in.
The fundamental question is how to change this situation? The first document suggested some ideas and a radical project which, unfortunately, probably only got read for the 'Word Pics' and had people worrying over whether they were 'Funny' or not. They were not meant to be funny, unless you like very black humour, they were part of a larger story which was the real point. Highlighting that what we have come to accept as the 'Standard' is actually institutionalised mediocrity. This document is not going to expound on how to change the situation but instead add some further context to the problems. One of the people who read the original document suggested a bit of 'Further Reading' back to B2yoR. Many thanks for that and reading that book has increased the overall insight into the issues.
This document covers a range of items, from online blogs to published books, which can be considered to be 'Further Reading' around the information presentation and Punditry issues. The hope being that, by understanding the issues further, making inroads into a solution may be more achievable. The document is structured so that the items covered become more detailed and 'harder' work as the sections progress. Anyone interested in Racing in general should be able to get through the first two sections comfortably. Going deeper than that will require some real interest in the problem of truly 'knowing' anything to a good standard. Racing or otherwise.
A good example would be the 'Testing Treatments' book covered in section 5. That deals with medical 'Punditry' in many forms and how we, as the audience, interact with the authority figures and information there. It makes a case that there are many problems with the current situation and that real change is needed. Further saying that the first step is that the audience, you and I as the patients, need to expect and demand better service. Which is a bit worrying: if people cannot be convinced to address information properly for the sake of their own health, and the health of those they care for, then in something peripheral like Racing it is going to be difficult. Or, hopefully, an easier test case.
An example in that book, and with a nod to the likely demographic spread of those reading this, concerns Prostate Cancer. Reading what it says about screening for this illness is a real test for your ability to 'let go' of what you intuitively believe. All health screening is beneficial, surely? Try reading the section in the book to test that view. Find out what you think when you understand the sensitivity of the test used in the Prostate screening and how the Doctor who discovered it is horrified by how widely it is used and in what context. A lot of the same issues as 'Pointless Punditry' mentioned show through in a more important context - ensure the tools used are based on thorough research, address the information properly, do not just trust authority figures and organisations, challenge the worth of your own intuitive thinking to get through to the real truth, and so on.
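A stripped-down version of the screening arithmetic makes the point without any medical detail. The numbers below are hypothetical, not the book's, but the shape of the result is the same: with a rare condition even a decent test produces mostly false alarms:

```python
# Hypothetical numbers: a test that is 90% sensitive and 90% specific,
# screening a population where only 2% actually have the disease.
prevalence  = 0.02   # fraction of those screened who have the condition
sensitivity = 0.90   # P(positive test | condition present)
specificity = 0.90   # P(negative test | condition absent)

true_positives  = prevalence * sensitivity              # 0.018
false_positives = (1 - prevalence) * (1 - specificity)  # 0.098

ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive result is real: {ppv:.0%}")   # ~16%
```

Roughly five out of six positives are false alarms in that scenario, which is the sort of fact your intuition about 'all screening is good' never volunteers.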
Perhaps by February 2013 the answer to how to move forward with some real improvements might be possible. Who knows, perhaps the third document on the subject will be able to be part of that.
A gentle start to the document with a couple of lighter items but with serious points to be made from both of them. Links related to the section are in grey boxes and will typically go to external sites and open new tabs or windows depending upon your browser type and settings.
A tale which highlights the need to keep thinking for yourself and not just accept what the majority believe. There were almost certainly people within the South African team and organisation who knew that the qualification rules were different to what most expected. But, when it came to the pressurised time they would doubt themselves and just go with the consensus view. Behaviour which has been shown to be prevalent in various psychology studies. People want to fit in and be part of the crowd.
But this is a salutary lesson in the problems that can bring. A consistent theme that runs through this whole document - Do not trust the Authority figures to have full knowledge and to be able to make correct decisions. Do not give in to 'Insider Pressure' and question the accepted ways of acting passed on through the generations. Check things for yourself wherever possible.
The background here is that an amateur meddler spent some time looking at maps of the locations of prehistoric sites in southern England. He decided he could see patterns in the locations and then compounded this error by extrapolating from his incorrect conclusion to assign causes for the patterns. Not ruling out extraterrestrial intervention. Rather than this bit of nonsense staying a matter of his own personal interest a number of organisations, including the 'Daily Mail', decided it was worth printing. Another example of not trusting 'Authorities' to act in a sensible manner when it comes to information presentation. The first link in the box goes to the 'Daily Mail' article.
The second link goes to a 'Press Release' which a mathematician called Matt Parker had put together. He had studied the locations of the, now defunct, Woolworths stores using the same bogus methods. He managed to come to similarly ludicrous conclusions to the amateur's, but knowingly. The piece is quite funny in a dry way and quietly makes the point of how anyone who had propagated the original article was very mistaken. The piece includes the following classic line :-
The original 'Pointless Punditry' document listed a number of problematic issues with the way people think and how this impacts on Racing analysis and betting. We cherry-pick data which suits the theory we want to be right, we see patterns in random data which mean nothing and then are desperate to assign causes to the pattern. In the example above the random data is geographical but it can be in any form. A cloud which looks like Mother Teresa is a random set of data interpreted in the same meaningless way. In looking at sets of numbers it is easy to see meaningless patterns using the same approach. This is where it matters in racing because there is a lot of data available and a lot of ways of looking at it to convince yourself of invalid conclusions. From stables which are 'in' or 'out of form' guff through to whatever your pet winner selecting theory is.
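If you want to see how cheaply 'significant' patterns appear in noise, the Woolworths trick can be reproduced in a few lines. Scatter points at random and count the near-perfect alignments; every number and tolerance here is an arbitrary choice for illustration:

```python
import random
from itertools import combinations

random.seed(2012)
# 60 'prehistoric sites' scattered completely at random on a 100x100 map.
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(60)]

def nearly_collinear(a, b, c, tol=20.0):
    """Twice the triangle's area; near zero means the three points align."""
    return abs((b[0]-a[0]) * (c[1]-a[1]) - (c[0]-a[0]) * (b[1]-a[1])) < tol

alignments = sum(nearly_collinear(a, b, c)
                 for a, b, c in combinations(points, 3))
print(f"{alignments} 'mysterious' three-point alignments in pure noise")
```

Pure noise delivers alignments by the hundred; the only mystery is why anyone prints them as news.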
A good question that occurs when thinking about the low quality of Racing Punditry in Britain, and how it might be improved, is "What do they do abroad?". Have the French already fixed the problem and punters there are listening to a proper split of media presenter and well informed pundit? Have the Americans reduced everything to numbers and systems, to add to the better information and horse training work videos they have, so that inane rabbiting to fill time is unnecessary? Does your typical no-nonsense Aussie accept nonsense from Pundits?
This section links to two pieces from the 'Daily Racing Form' (DRF) website in the US and a publication that presumably has a similar position to the 'Racing Post' in Britain. Both give interesting insights into Punditry in America and have clear comparisons to the Punditry approach in this country. Each sub-section pulls out some useful quotes from the articles and also suggests ways in which the American method compares to the typical standard in Britain.
The first article is written by Andy Beyer, of Speed Figure fame, and covers the impact made by a Pundit called Andy Serling. Mr. Serling was taken on by the New York Racing Association (NYRA) to do a TV show at their tracks after his Wall Street career as an Options Trader came to an "inglorious end". The article talks about Serling's approach and gives some examples of his methods. On the negative side the article indulges in clear cherry-picking of the type mentioned previously and draws unwarranted conclusions from the results of single races. It also has no information about his long-term results.
But, the article is more interesting to consider in terms of what a Pundit needs to do to stand out from the general crowd. Worth comparing the impact discussed with similar examples in Britain like Hugh Taylor & Tom Segal. The article uses the phrase 'Graduate School seminar' to describe some of the Serling approach and demonstrations and compares that to the normal level of 'Handicapping 101'. Translating that into 'British' is perhaps saying one is a University course and the other is at the A-B-C level.
Reading through what Serling actually does to stand out and find his edge does not really support that belief in B2yoR's view. As with Hugh Taylor it is not a University degree, or 'Rocket Science', but a few simple principles, diligently grafting, studying race videos, investigating some new data properly and synthesising that into your own view without reference to the accepted wisdom. Hence, the title of this sub-section. What is needed to become a stand out performer like Serling, Taylor & Segal is what ought to be the basic level all Punditry should aspire to. We can then move on to higher levels.
To start with, some Quotes from the Article which touch on the type and quality of Punditry in the US. The first below suggests that at the standard level Americans have to put up with a lot of the same drivel. The second raises an interesting point about educating people about racing and what to expect from Pundits. If the NYRA can employ Serling to solve this problem then can we reproduce that in Britain?
The following quotes cover the methods Serling uses and, again, very easy to see him as an American Hugh Taylor. A lot of the same tools and approach. Nothing that would surprise anyone, no Doctorate level mathematics and nothing to really make this feel like a University Degree, even of the type institutions produce nowadays.
The last quote is interesting because it identifies an area where America is clearly ahead of Britain in addressing 'Race Dynamics'. The history of racing development in the two countries has contributed greatly to that split. Racing on flat, left-hand, ovals on a Dirt surface has taught the Americans the importance of Race Dynamics and therefore they developed the tools to measure and investigate it. Racecards in the US have full details about sectional times and how individual horses ran their races. Developments there to improve this like the 'Moss Pace Figures' described on the DRF website are informative reading.
While we have some excuse in Britain given the variety of our tracks and racing surfaces the lack of similar information is a real problem. British racing followers are not educated in the importance of these factors from an early stage and they are just not part of the general information. When Turftrax tried to get them going the wider markets for the figures were just not there. In early winter 2010 Attheraces appeared to be doing something interesting and said they were going to use Sectional Times at Southwell through the Winter season. To B2yoR's knowledge that did not get past the first afternoon. Without a store of existing data to compare to and, crucially, an audience who understood what to make of the times they were just meaningless numbers.
B2yoR has tried a few related items in this area like Halfway Positions on turf & All-Weather (AW), and Efficiency handicapping with races on the AW and can attest to the usefulness of these ideas. They are simpler than jumping straight into Sectional Times or Pace Figures but help to build up the intuitive feel for Race Dynamics that will lead on to these other approaches. Another area where education by a different type of Pundit can definitely bring about improvements in the future.
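To make the sectional idea concrete, here is a minimal sketch of the simplest piece of arithmetic in this area, a 'finishing speed percentage'. The distances and times below are invented for illustration, not taken from any real race:

```python
# Invented times for a one mile race: is the horse finishing faster or
# slower than its average speed for the whole trip?
race_distance_f = 8.0    # furlongs
race_time_s     = 98.0   # overall time in seconds
closing_dist_f  = 2.0    # the final two furlongs
closing_time_s  = 23.5   # time taken for that closing sectional

average_speed = race_distance_f / race_time_s
closing_speed = closing_dist_f / closing_time_s

finishing_pct = 100 * closing_speed / average_speed
print(f"Finishing speed: {finishing_pct:.1f}% of race average")  # ~104%
```

A figure above 100% says the horse was still accelerating relative to its average, below 100% that the race fell apart in front of it or it emptied. Nothing harder than division, which makes the absence of the raw times from British racecards all the more telling.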
This article considers how the use of 'Speed Figures' has developed in the US over the last 30 years and more. From a time in the 1970s when just a few people produced them for their own use through to the central position they hold now. It covers how this change and the acceptance of the importance of the figures went from 'Outsiders' to 'Insiders'. The outsiders being the racing followers and punters who first started to use the figures. Then how insiders, such as horse trainers, gradually came to understand their importance. It also compares this development to similar changes in other sports with the classic example being the 'Sabermetrics' revolution which changed Baseball fundamentally during the same period.
The original 'Pointless Punditry' document had a whole section about how reactionary the racing world is in Britain and resistant to change. The example was used of the "my eyes knows best" approach of Champion Trainer Richard Hannon when faced with a number as uncontroversial as a horse's weight. The story in the article clearly has close matches to the problems and the changes needed within British Racing. Take this quote from the article about how even the 'old school' had to change their minds.
The next part pulls out a number of quotes and the top level point to note is how long these changes take. It might take 10 years, 20 years or longer to see real progress depending upon the change required. Time to develop the new approaches and prove their worth. Then a period for their use to be built up by enough people to have demonstrated their value. Even then it will take time to educate the next generation and just as long to convince even relatively receptive insiders. Some will never be convinced and be unable to change. The 'Pointless Punditry' project may well be a long campaign.
A subtly different point can be taken from the next quote. 'Pointless Punditry' spent a lot of time considering the "How many winners have you ridden?" bullying that takes place in British Racing. Where the 'insider' horsey type believes they have nothing to learn from anyone else and further thinks they can dismiss everyone else just by putting their 'experience' on the table. Truly intelligent and open people know there can always be something to be learnt from other sources and they always understand that their own knowledge is incomplete regardless of experience gained. A point that comes up again in the next section about Experts & Predictions. In general, always be very wary of anyone who thinks they know it all and have no doubts that they are totally right. The only real question will be just how wrong they will be, but they will be wrong to some extent.
The next quote comes from a Baseball insider who has managed to get beyond the influences of those around him. Like the Serling article above the point comes through that you have to look beyond the accepted scope of the 'insiders', and Pundits in British racing terms, views to make progress. The 'Groupthink' of the insiders is a powerful influence on the others in their sphere and you have to understand how it is affecting you adversely to be able to break out. Not an easy thing to achieve.
The final quote is from a Basketball Pundit and he makes a good point that this is not a case of one thing or another. 'Stats' versus 'Insider experience'. It is a two-way transfer of knowledge that if done non-confrontationally will benefit both sides. Further, 'Pointless Punditry' included a part about a racing follower needing to understand the 'Maths' of the racing problem but also being able to develop the 'Gut Feeling' to assist in being able to apply that knowledge. The quote below is a good one with some insight. Change 'games' to 'races', 'team' to 'trainer' and 'players' to 'horses' and you have a solid piece of advice about racing. Spending time watching racing & race videos being a particular point that came up with the 'Tall Poppy' Pundits in the previous sub-section.
As with the first DRF article above it does use examples to 'prove' points which are not valid. In particular it uses the filly Upperline to prove a piece of accepted 'Speed Figure wisdom' in an unconvincing manner. The wisdom says that horses which produce a rating which is a long way above their previous best will 'recoil' from that effort and will take a long time to get back to producing ratings at the higher level. This is accepted as true by Speed Figure users and believed to be proven by long experience. But, yesterday's whizzy new idea eventually becomes today's reactionary, insider, mental roadblock so always worth keeping on checking.
You would like to see the article link to some document that addressed the 'recoil' idea fully rather than using a single, suspect, example as 'proof'. Coming at the Upperline example as a non-US observer and looking at the evidence, as presented, shows that it does not prove anything. The horse had a previous best of 89 then apparently ran to a 102 figure, which was considered the sort of advance she would recoil from. The article claims that this is proven because she ran to figures of 87, 90 & 89 in her next three runs.
As an outsider, not committed to US Speed Figures nor to defending the 102 figure, how about proposing an alternative idea to test? This is a hugely consistent filly who always runs to the 88-90 range if the way the race unfolds allows her to. She never ran to a figure of 102 and you would be better served downgrading it by 10-12 points. Now perhaps the 102 figure is supported by the horses she ran against but that case is not made in the article. But, how much of a 'recoil' is it to go back to running your normal figures? Should not a recoil see a filly like this running to figures in the 60s or 70s? You probably get the point. The process of questioning 'known facts' and insisting upon proper validation doesn't sleep.
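For anyone who wants to see how little the cited figures support the 'recoil' story, here is a minimal sketch of the alternative test proposed above. Only the 89 best, the 102 spike and the 87-90-89 sequence come from the article; the rest of the baseline history is hypothetical filler:

```python
# Figures from the article: a best of 89, the 102 'spike', then 87, 90, 89.
# The pre-spike runs other than the 89 are hypothetical filler.
baseline_runs = [89, 88, 90, 89]   # hypothetical history around her best of 89
spike         = 102
after_spike   = [87, 90, 89]       # the three runs the article cites as 'recoil'

baseline   = sum(baseline_runs) / len(baseline_runs)
post_spike = sum(after_spike) / len(after_spike)

print(f"Baseline {baseline:.2f}, post-spike {post_spike:.2f}, "
      f"drop of {baseline - post_spike:.2f} points")
# A 'recoil' ought to show runs well below the baseline, figures in the
# 60s or 70s; a drop of a fraction of a point is just normal consistency.
```

Run on a proper sample of horses, rather than one filly, that comparison would actually test the 'wisdom' instead of decorating it.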
Part 3 of this document originally ended with sub-section 3.2 but in early April, 2012 a related example came up which was too good not to include at this point. The previous section gave examples of how 'Outsiders' can help the 'Insiders' within a sport to question what they do and improve their performance. The article linked to is a short piece in which former professional cyclist Chris Boardman announces he will be retiring from his position with "British Cycling". He had been working for them for 9 years and during a period of great progress and success with Britain the dominant force in track cycling during those years. Also, having developed a top level (World Tour) Road Cycling Professional team with many of the same personnel. With, for example, one of the former track riders developing to be the first British road World Champion since 1965 (Mark Cavendish) and another a realistic contender to win the Tour de France (Bradley Wiggins).
All of that done under the management of Dave Brailsford whose outlook on sport is the exact opposite of a 'Hannon' type my-eyes-knows-best, touch-and-feel method. No matter how many medals the team has won Brailsford is always talking about how to do things better and how to improve. Part of that approach is to give the people who work for him the licence and funding to research and develop new ideas in the hope that some will deliver real performance benefits.
Which is where Boardman came in after he retired from professional racing after the Athens Olympics in 2004. Boardman had a hugely successful career including winning Olympic track gold. When he raced on the road he won stages of the Tour de France and wore the leader's Yellow Jersey at a time when British road cyclists were oddities from a backwater of the sport in the view of those from other countries.
He also held the 'World One Hour' record which is a brutal activity where the rider goes around a track, on their own, as fast as they can for the full sixty minutes. The distance covered at the end then determines the result. One of the things being involved in that sort of test instils in anyone is a complete respect for the primary importance of total Efficiency in all areas to maximise your power output and expressed performance. If you let up in any area and just think ".. oh, that will be ok..", then you will fail. You will miss the Record by 20 metres and that 0.1% of improvement you could have gained by attending to the area would have made the difference.
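As a rough check on that '0.1% costs you the Record' intuition, the arithmetic is simple if you assume aerodynamic drag dominates, so that the power required scales with the cube of speed. A sketch with illustrative numbers:

```python
# Assume aerodynamic drag dominates, so required power scales with the
# cube of speed; then speed scales with the cube root of power.
hour_distance_m = 49_441   # Boardman's 2000 'athlete's hour' distance
power_gain      = 0.001    # a 0.1% improvement in usable power

speed_gain   = (1 + power_gain) ** (1 / 3) - 1
extra_metres = hour_distance_m * speed_gain
print(f"{extra_metres:.0f} m gained over the hour")   # ~16 m
# Boardman's 2000 ride beat the old mark by roughly 10 metres, so a 0.1%
# power improvement genuinely is the difference between Record and failure.
```

So the "20 metres" in the text is not rhetorical exaggeration; it is about what 0.1% is worth over an hour at record pace.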
After Athens Boardman was given the role, by Brailsford, to look at 'Marginal Gains' in the performance by the cyclists. This meant he had a free role to question how things were being done in a wide range of areas. Not looking for huge improvements but lots of smaller gains that would add up when you put a lot of them together. Once the areas and ideas were identified he was then given the funding to do the research to prove concepts and develop equipment, if needed. So, where did Boardman start looking for those areas where gains could be made? Did he stand around with his equivalent of racing's "How many winners have you ridden/trained?" bullying of outsiders? His list of achievements, his 'palmares' in cycling parlance, meant he could 'bully' pretty much anyone if he felt like it. Here is a quote from Boardman :-
Just perfect. An immensely refreshing attitude in this context. Reading what Boardman says in the piece the outlook he uses opens up a range of thoughts and ideas that might be applied to racing. How about proposing a project or perhaps a TV programme called "Asking Stupid Questions". Get an expert in from one area to question those in another. Boardman applied to Racing would be a good example to start with. Very experienced in 'Power Output Test' sports, which is all racing is at root, training for athletes and the use of the most efficient equipment and technology in races.
What racing lacks that cycling has is a central organisation tasked with maximising the overall performance, and medal returns. A body that also has the funding to develop the improvements at the top level that then filter down to the rest of the sport. Who would organise Boardman to come and consult within racing, for example? The BHA have neither the money nor the remit. Some big group like Godolphin might do it but presumably for their own benefit. Perhaps they already do things like that although it is not obvious looking at their horses on track, the equipment they use or the overall results.
Without this central body or some change forced by an outsider, Martin Pipe in the NH world the best example, the competition between participants is not as strong as it could be. To take the obvious example you might be thinking why is Richard Hannon being picked on in this document when he is the Champion Trainer? If everyone is doing similar things, and new ideas and approaches are not reflected, then 'someone' will be Champion but in a weaker test than it could be. Hannon is top class at recruiting the right horses at the level he shops at, which underpins his success.
To go back to cycling, both track & road disciplines, the professionalism there has improved a lot in the last decade and British teams have played a real part in that. Listen to veteran cyclists now and they will say that, as an example, in the year 2000 when they started you were given your bike by a team and you rode round on it a lot to get fit. Not much more than that in terms of support. All the other teams used a similar approach so there were two ways to win. Be the best rider who was fittest for the day or differentiate yourself by using drugs, hormones, blood transfusions and various other illegal and dangerous treatments.
The top level cycling teams operate at a different level now and an interesting area to think about is aerodynamic efficiency. Minimising wind resistance and drag which inefficiently wastes the power that the athlete can produce. Spending time riding their bikes, on static rollers, in a Wind Tunnel is a regular part of many cyclists' preparation these days. Not just in terms of testing the equipment they are using but for optimising the position they sit in while pedalling. Here is Boardman talking about what he considers to be one of his big successes :-
With which let us end by turning back to racing and ask a few 'Stupid Questions'. Why do we never hear anything about aerodynamic efficiency there? Do horses with people on board obey some different physics so it does not matter? Trainer Bill O'Gorman, an example of an Insider who did think differently, used to talk about it and insist his jockeys sat tight to the horse and minimised air resistance. These days you might occasionally hear pundits talking about 'cover' but usually in the context of getting a horse to settle down by racing close behind another. If there is a gale force headwind on the day it might get a passing mention. Why don't we look at some results from horses being ridden with different jockey postures in a Wind Tunnel?
The next time Jamie Spencer gets beaten in a photo finish and he has spent the entire race with his legs extended and with his body well above the horse's head and body why would you not send him a message to ask him to refund your losing bet? Why not sit tight to the horse instead of doing a 'Human Sail' impression which the horse is having to drag through the air resistance? And finally, why doesn't someone try fitting an aerodynamically designed air deflector to the big, square, block that the front of the horse's chest presents to the headwind? Reduce the resistance and it must make some real difference in a straight mile race into a headwind. Engineer it properly out of lightweight materials of course. Not steel and leather like most of the other 'extras' the horse has to carry. Feel free to add some Stupid Questions of your own.
[Update December 2013 = Tucked away in the Journal 'Biology Letters' is the following Study Paper, from 2011, written by people at the 'Royal Veterinary College' amongst others - "Speed, pacing strategy and aerodynamic drafting in Thoroughbred horse racing". Thanks go to the people at 'Performance Genetics', in the US, who circulated the link. The study uses data from 3,357 races in Britain from the 2005-7 seasons using position tracking data provided by Turftrax. To quote the Study's Abstract they -
"...determined the position and speed of 44 803 racehorses, once per second, in 3 357 races ranging in length from 1006 to 4225m (50.9–292.9 seconds duration) using a validated radio tracking system. We find that aerodynamic drafting has a marked effect on horse performance, and hence racing outcome."
A thoroughly recommended read and the full .PDF version of the study is available for free at the Link. A good question would be how many people involved professionally in British Racing, i.e. trainers, jockeys, agents, pundits, vets, etc., have read this paper and thought about the implications in terms of strategy and tactics within a race? Somehow, you would think that getting £500 for each one who had would be a long way from a life changing amount of money.
Another thought would be to think about this quote from the Study, "For a horse that drafts for 75 per cent of a race, this effect is worth three to four finish places". If someone came along to British Trainers and said they had an idea, or product, whose use could bring a 3-4 place improvement in Finishing Position in many races you would think it would have 'gone viral' and everyone would be talking about it. How many times have you heard it mentioned in British racing circles?]
Both items in this section are books so mark the point where the reader is going to need to be interested in the 'knowledge' issue to commit to following up the further reading. Both books in this section are comfortable, non-technical, reads written by authors who make a living out of writing overviews of research work as accessible 'stories'. The text in each section gives a summary of what the book contains and also some context around how that relates to the issue at hand. Both in racing and general terms.
'Future Babble', by Dan Gardner, is the book suggested as further reading by a B2yoR website user and kicked off the idea for this document. Many thanks to Lawrence (Loz) for that suggestion. Given that the book runs to nearly 300 pages, including notes and a vast bibliography, this sub-section is clearly going to be a brief overview of the main themes and reading the whole book is recommended to those interested.
The book considers the issue of 'Expert Predictions' of how the future will develop in areas such as politics, technology, population growth, financial markets, wars and so forth. Longer term predictions made by Experts in their fields drawing on their knowledge base and the current position when the predictions were made. This activity is a huge industry with many International bodies, Governments, Organisations and Corporate bodies paying for this type of input. There are also a wide range of academics involved in the same predictions as well as general authors producing articles and books for what must be a wide audience for such material.
Mr. Gardner draws on three main sources for his Predictions and his analysis of how useful they have proven to be :-
Cutting to the chase, the story is that the predictions are mostly dreadful. As you would expect with trying to predict non-linear systems over long periods. The most interesting part of the book is how people react to this failure and how it links to how people think and what really worries them.
Mr Gardner draws on a split of Experts into two types - Hedgehogs & Foxes. The conceit behind this being that 'Hedgehogs know one big thing' whilst 'Foxes know lots of little things'. Characterising Experts into two broad groups where Hedgehogs use simple, preferred, approaches again and again in many circumstances. They are never uncertain, usually very confident about their approach and never believe they have been proven wrong. Foxes tend to be a bit more cautious, try to factor in uncertainty and complexity more and tend to be less sure of themselves. They can, at times, admit mistakes and face up to the limits of certainty in predictions.
But, surveying the books and articles, and the research work of Mr Tetlock, the results come back that both types do appallingly badly with predictions. Foxes do better than Hedgehogs but still poorly. Hedgehogs often produce worse results in the constricted research (using clearly defined choices & not vague phrases) than a person tossing a coin would achieve. You and I could do better than many of the Hedgehogs while knowing nothing about the subject.
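To see how a forecaster can do worse than a coin, consider the standard yardstick used in this sort of research, where each prediction is a probability and you take the squared error against what actually happened (a Brier score). A minimal sketch with made-up outcomes:

```python
def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and what happened."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Ten made-up events, three of which actually happened.
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]

hedgehog = [1.0] * 10   # always certain the big event is coming
coin     = [0.5] * 10   # the know-nothing baseline

print(brier(hedgehog, outcomes))   # 0.70 - confidently wrong most of the time
print(brier(coin, outcomes))       # 0.25 - lower is better; the coin wins
```

Total confidence is punished brutally whenever the world declines to cooperate, which is exactly the trap the always-certain Hedgehog sets for himself.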
The Experts, though, tend to believe they have really been proved right even when a dispassionate appraisal of the actual outcomes would make it clear they had been wrong. With Books and articles they might admit to odd errors of timing or limit the scope of their prediction but they have never been proved wrong. They indulge in various forms of self delusion that would be recognisable in many racing punters. The first 'Pointless Punditry' article covered many of these but this book uses a different set of jargon to cover them. For example :-
The book is probably most interesting, in B2yoR's view, when it goes on to consider how the audience colludes with the Experts in various ways. Humans like certainty and the thing that worries us most is uncertainty. That is a large part of the reason why there is a market and appetite for all these future predictions. We want a story from someone of how things are going to develop even if that comes as a prediction of dire problems. We want to believe that the people who feed us these stories know something and are Experts. You can see this to some extent as a form of superstition and wanting to get some control over the future. We can laugh at voodoo and people doing 'rain dances' but they are doing the same thing. Trying to exert some control to make themselves feel better about an uncertain future. As people lapping up predictions from Experts that do worse than a tossed coin we are not in a position to mock.
We then go further in aiding the Experts when the predictions fail. We help them sweep the failures under the carpet and only remember the odd 'successes' they had (which might have been just flukes). Because we are mainly worried about the uncertain future we do not go back and analyse the past much nor hold people to account. We just move on because the past, however badly it went, is not uncertain and is not going to hurt us.
B2yoR finds a lot of this useful when applied to racing. People like confident Pundits and tipsters who offer certainty. They also offer stories about there being an unseen cast of people who 'control' racing results to soothe the worries about uncertainty, complexity and help to ease the lack of control and superstition people have. B2yoR has long known that the best way to increase the readership of the website is to go down the 'Tipping' approach couched in terms of "..nailed on certainty for the 2:30,.... the trainer assures me this is the Bet of the CENTURY" and similar baloney.
Rather than offering lots of data and complexity and an invitation to graft away at the subject to try to move things enough in your favour to make money. While trying to get at the core of what is going on along the way. Most people do not want that. But at least this book helps to explain why and put that in the context of much bigger issues than racing.
Given B2yoR's aversion to the way the Betting Market is covered by Pundits and the media it also feels helpful to be able to see much of that in these terms. People want to be sold stories, by the media, of an unseen 'They' plotting everything and then prosecuting 'Monster Gambles'. It is more of the superstition thing. It inserts some control over things as a 'Story' (which humans are attuned to) which they want to believe in. The real story of uncertainty, complexity and graft is not just unappealing to many but wrong if not plain frightening. Seeing Chapman Boyce as a Shaman offering Snake-Oil and Myths to soothe the masses helps, you find.
We also help the Pundits in racing by cherry-picking results and remembering the successes and forgetting the failures. And we do not go back to look at the full history of results. There is another race, move on.
For those interested in bigger examples of non-accountability of experts and predictions then the 2008 Financial Meltdown provides a terrific source. As part of the 'Further Viewing' B2yoR would advise people to watch the documentary 'Inside Job' by Charles Ferguson which charts how the financial issues developed and has interviews with many of the 'Experts' involved. There is a hugely chilling moment in the documentary when Barack Obama's 'new' Financial Team and advisors are introduced to the press after his election. Part of his promise was to address the financial problem and resolve the mess. Who walks down the podium to be presented as his financial gurus? Exactly the same bunch of people move forward who have been in place for 15-20 years and presided over the decisions, strategy, short-termism, poor management and lack of regulations that nearly brought the 'Capital Markets' system to complete collapse.
Having captained the whole ship over the waterfall where are they now? Rather than being in jail or too ashamed to go outside here they are, the Experts, never admitting they got anything wrong, and still in charge. You cannot kill the bogeyman, or woman. The points above help to explain how this can occur but how do you solve this? Camping out while beefing about 'Capitalism' is not going to get the job done, you feel. The issues are much too embedded in all of the people involved, including 'us' as the audience, and a more subtle understanding is required.
Since we are here we should also consider the 'Ratings Agencies' and accountability which is also covered in the documentary. These 'Blue Chip' companies like Moody's, Fitch and Standard & Poor's seem like the absolute pinnacle of 'power without responsibility'. During the build-up to the 2008 collapse they were totally involved with causing the problems. When the unchecked financial system had developed 'toxic' products, through suspect devices like securitisation, what were the Ratings Agencies doing? Giving the Toxic Products that contributed massively to the crash top ratings and encouraging people to buy them on their recommendation - Pension Funds, Iceland, whole States in the US, etc. who are now in debt, forever.
What happened to those Agencies? Have they gone out of business, or been massively fined or lost their credibility? Of course not, we are not dealing with reality here but a fantasy financial world littered with worthless experts getting away with it. The documentary shows their senior management when hauled before the US Federal Committee investigating the 2008 collapse. The Agencies' wriggle-out excuse was "our statements are just opinions". So the Ratings Agencies are predictions merchants who just offer opinions and we know that their predictions will most likely be wrong. They contributed hugely to the biggest financial disaster the world has ever seen while demonstrating very visibly that their 'seals of approval' were worse than useless.
Where are they now? At the time of writing this Britain is fretting over losing its AAA rating from these outfits. Italy is in big trouble because they have already been downgraded by them. Why do we do this to ourselves? How do we break the cycle with Experts and their worthless Predictions?
The previous book touches on the split, in humans, between the information processing that goes on beneath the surface and how that interacts with the relatively small amount of information we consciously attend to. This is a readable book which draws on psychological research work but presents it as stories and examples to make it easier to digest. The author defines the 'Adaptive Unconscious' (AU) as the giant computer that is always working beneath our limited conscious outlook. The AU then decides what important information and influences to feed to our 'Conscious selves'.
It does not do this in a direct way and is usually the source of the 'intuition' or 'gut feeling' we ascribe to such sensations. Our conscious self is also not able to interrogate the AU in a direct way, which heightens the feeling of not knowing where the thoughts came from but also means we cannot learn more directly about the process. The AU, as the giant computer assessing all input and comparing it with its model of the world and required action, is what snaps you out of a daydreaming walk down the road to avoid being run over. You are always processing a lot more input than you are consciously aware of and the AU stops the conscious 'you' being swamped with data and input that needs no attention now.
The AU is very good at making snap decisions on limited information ('Thin Slices') and picking out the 'essence' of a situation from just a quick glance. The book's title - 'Blink' - being a nod towards how the AU sums things up in the blink of an eye. But, it is also continuously processing all of the inputs you have received, whether registered with your conscious or not, to build a constantly updated view of the world and how to react. It does this without informing your conscious self what it is doing which means 'you' have a whole 'world view' your conscious has no direct access to. Various psychology tests can demonstrate how this can show through in ways you have limited control over. Consider all the 'biases' covered in relation to the previous book, as an example. The people involved would almost certainly not recognise what biases they were displaying.
The book makes the point that the decision making a person undertakes should be an interplay between the 'Thin Slicing' approach of the AU and the more considered, and lengthy, analysis you consciously make. Both are good when used at the correct times. But, it makes a strong case that in many situations what you divine out of the first 2 seconds, say in assessing a person, never changes. In interviewing someone for a job you may think you have followed a process lasting an hour, or more, but your AU already knew what it was expecting before the interview. It then summed the person up very quickly as they came through the door and said some first words. You think you formed your conscious view over a period but it almost certainly changed little from those 'First Impressions'.
On the positive side understanding the AU process and how it learns and forms its internal view means that it can be 'trained' to work for you. Some racing examples are spread through the later paragraphs but this is a good point to go back to a quote used in the first 'Pointless Punditry' document. The quote there was from the head of a financial trading firm ('betting' in another form) who said of his best traders "They have to be comfortable with the mathematics because this demonstrates you understand the problem, but the successful traders, in my opinion, have the best intuition, a gut feeling for when to take the risk, not necessarily the best mathematicians...".
If we overlay this onto the AU and 'Conscious Self' model what could we suggest? The best traders have to do enough of the, right sort and proven, 'Conscious' longer term analysis to enable the AU to build up enough knowledge to react quickly, but in an informed manner, at 'Thin Slice' time. But, they are able to go with the AU gut feeling at the time when snap decisions about buying stocks, futures, or whatever presents itself. Try mapping that onto how you analyse races and then make your bets.
The bullet points below are not from the book and go back to a much older source but seem to have some relevance here and are worth considering for how they play against the AU model. The suggestion in the points is that when you are learning a new activity you go through these four stages :-
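- Unconsciously Incompetent - you do not yet know what it is you cannot do.
- Consciously Incompetent - you know what you cannot do and what needs to be learnt.
- Consciously Competent - you can do it, but only with deliberate thought and effort.
- Unconsciously Competent - you do it well without needing to think about how.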
We could think about examples of how this model might work, say in Sportspeople. Where does being 'in the Zone' reside in this set-up? When we talk about a footballer having 'more time' than other players we can probably be sure they are able to use their AU more fully. A player like Spaniard David Silva would be a good example and presumably has a full picture of the pitch and the players' positions as the ball is travelling towards him and already knows what to do. He is already turning his body to receive the ball most efficiently to make the best onward pass without knowing he is doing it. A lesser player probably includes the conscious more. Receiving the ball, controlling it, sizing up the options and making a pass might be a single AU action for a 'more time' player and anything up to 4-5 separate actions, involving the conscious at points, for others. Lots of other examples of phrases that map to this and the old-timer, immobile, footballer who gets by because 'the first two yards are in your head' is another one.
To move onto racing examples the book starts with a story about experts asked to give an opinion on whether a supposed ancient Greek Statue was a fake or not. Because this was a well done forgery the experts all 'intuitively' knew that the statue did not 'look right' the moment they first saw it but they were unable to describe properly what it was that was wrong. A clear case of the AU drawing on all the experience the person has had in looking at such finds and in a 'blink' knowing that something is wrong with this example.
That feeling got passed back to the experts' conscious selves in a variety of ways - they felt sick, disoriented and so on. The 'Gut Feeling' ways that the AU has to alert you to a problem. Because you cannot go and interrogate the AU the experts then seem flimsy when asked to articulate what they see as the issue. At that stage they just 'know' it is wrong and any articulation beyond saying how they feel is likely to be just noise to say something.
B2yoR would link this story directly to people who are judging horses as physical specimens or for wellbeing issues in Classical Paddock Review. As with the Greek Statue experts the reviewers go through the four bullet points above and someone with long experience will be doing a lot of the sizing up of the horse unconsciously. If you ask them what they see in a particular horse you will be surprised how difficult, inarticulate and superficial the response will seem. Many Bloodstock Agents are savvy enough to 'solve' this presentational issue by having a shortlist of physical details to waffle about if asked the difficult 'what appeals to you physically about this horse' question. What they say is rehearsed and will not match the 'whole' view their AU has actually taken of the horse.
Once, when going around to review how a group of Classical Paddock Reviewers were performing, B2yoR would ask them about what they were seeing. Not having the AU model view at the time, the responses seemed dreadful. The Reviewer would usually seem flustered and struggle to come up with a response, and the phrases they eventually offered were lightweight and clearly just 'words' to try to say something back to the interrogator. The initial response on listening to them was - 'Why are these people doing this job? They know nothing.' But, some were getting good results so despite the concerns something else was going on.
Once you apply the AU model things make more sense and if you get someone who can describe fully what they are doing they are probably at the 'Consciously Competent' stage of learning. Or are savvy or perhaps have been taught in a way that respects the AU model with the problem to be learnt broken down into smaller pieces. The structure allowing more thoughts to be offered about how the person is processing the information now.
On another occasion B2yoR was sat with a guy who made his living out of Betting in Running. Imagine you have watched thousands of races yourself and think you can spot what is going on pretty well. During NH races this man would point to each horse a little before it was going to expire and drop out of contention and say it was gone. He was nearly always right and spotting things that were otherwise 'invisible' even to someone who thought race reading was within their competencies. But, ask him what he was 'seeing' to identify the horses next to expire and the usual uncertainty and blustering were presented.
A couple more ideas follow which relate to how you might go about analysis of races and betting if you take the AU model into account. How much information do you need to analyse? Is more information always better? If you think you can get through to the essential 'story' of the race in 5 minutes because of your experience then why go further?
The book gives an example of an 'Emergency Room' (ER), or Accident & Emergency in Britain, in Chicago where they had the licence to try novel approaches to assessing patients coming in with chest pains. As a Public Hospital on low funding they had to find ways to make treatment cheaper. The standard way to deal with people with chest pains was to engage in laborious processes that were expensive and then be over-cautious, which led to admitting a lot of people for observation, including many who in retrospect did not need to be admitted. Because the consequences could be dire and they did not want to get sued the doctors would try every test and gather every piece of information they could.
When a formal study was done to see how reliable the experts were in their diagnosis the answer came back as 'very poor'. What was more, different doctors faced with the same data would do entirely different things. The conclusion being that all the data collection and testing was largely for 'show' to make the expert, a doctor in this case, feel better about the decision they came to. They were not listening to their experience enough. The hospital implemented a much simpler, and much more successful, process in getting the right diagnoses - an algorithm with a simple set of tests and limited collateral data inclusion.
They did not bother with items like the patients' ages, gender, race or whether they were diabetics, for example. This was counter-intuitive to the doctors but research showed they were over-analysing the problem and getting worse at diagnosis because of it. In racing a lot of races only have so many 'Stories' that might unfold from them. Trying to factor in every bit of data in huge detail will just be over-analysis in many cases. You need to find out what data gives really solid insights into a race by checking and trust whatever procedure, however brief, produces that insight.
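As a flavour of what that sort of stripped-down rule looks like, here is a toy sketch in the spirit of the algorithm the book describes. The three risk factors are as the book gives them, but the combination logic below is a simplification for illustration, not the real medical protocol:

```python
# The book describes an ECG reading combined with three risk factors
# (unstable angina, fluid in the lungs, systolic blood pressure under 100).
# The combination logic below is simplified for illustration only.
def chest_pain_triage(ecg_suggests_mi, unstable_angina,
                      fluid_in_lungs, bp_under_100):
    """Collapse one test and three risk factors into a coarse urgency band."""
    risk_factors = sum([unstable_angina, fluid_in_lungs, bp_under_100])
    if ecg_suggests_mi or risk_factors >= 2:
        return "admit to coronary care"
    if risk_factors == 1:
        return "admit for short observation"
    return "outpatient follow-up"

# Note what does NOT appear anywhere above: age, gender, race, diabetes.
print(chest_pain_triage(False, True, False, False))  # "admit for short observation"
```

A handful of lines, no batteries of tests, and it out-performed the experts with all their collateral data. That is the 'less can be more' point made runnable.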
The book presents a further example of 'Paralysis by Analysis' with an American war-gaming exercise that cost hundreds of millions of dollars. The US military developed hugely complicated databases and information processing structures prior to the exercise which they thought would tell them how to prosecute the battle including knowing how the enemy would behave. But, the retired US Marine General they put in charge of being the enemy commander used his initiative, freed up his forces to make their own decisions without lots of debate and had the US military side in such trouble they called a halt to the exercise. Their response being to wind the clock back, start again, and tell the enemy commander he had to behave in the way the US Military software systems said he would. Oh, dear.
The view of the General playing the enemy, a man with a lot of real battle experience, was that you can be too structured and organised. Try to factor in too much at the wrong time. He used the comparison that you can do as much strategic planning and analysis as you like before the war but you have to think faster, have simpler decision structures and be more creative when the battle is in progress. You cannot stop at that point to discuss things and feed your endless data into your decision tree. Neither way of thinking is wrong - unconscious versus conscious - but using them in the wrong situation is faulty. And assuming 'More is Better' without checking that formally is wrong. We are back to doing your 'Maths' homework before the betting starts, checking it works, then letting your 'Gut Feeling' guide you through the pre-race part.
The previous section about the book - Future Babble - noted a range of biases that people show and the first 'Pointless Punditry' document covered similar areas. The Unconscious and its effects are the underpinning of many of the biases. Your unconscious has a rich structure built up of what to expect and is constantly working through what it 'knows', adding new thoughts and mixing this into a world view. We do not have direct control over this but it will show through in biased behaviour that we will be unaware we are showing.
A good example from the book of how this can work can be related to the current issues in English football over racism. At the time of writing one player has not long come back from a much publicised ban of 8 games for use of a particular word. The former England captain (John Terry), current until the incident, is awaiting a court case over an on-pitch exchange deemed racist. This has had the knock-on effect of the England Manager leaving. The caretaker manager (Stuart Pearce) has had to answer, again, for a racist incident from 17 years prior. So, a topical issue as well as an important one.
The book includes a good section on how the Unconscious view affects you even when you think it does not. Most people will believe they are not racist. At the 'conscious' level that is perhaps true. But your unconscious is not rational in that sense and has synthesised a world view out of all the influences you have ever had. You will have been exposed to more negative input about some races than others and your AU will have built a view of what it expects from that.
The 'Implicit Association Test' (IAT) is a psychology test which demonstrates we do not 'know our own mind'. The Race IAT might, for example, show you positive or negative words and ask you to associate them, by pressing a key, with a grouping like "European American or Good" or "African American or Bad". This can then be reversed and the positive or negative words asked to be associated with "European American or Bad" & "African American or Good".
What comes out of the test is that the large majority of people take much longer to assign positive words to the African American group, as one example. Their conscious tells them they are not racist and that they treat everyone the same. But the AU is sitting below with all its negative input and messes your thinking up. You take longer as your conscious works through the conflicting input it is getting from various parts of.. 'You'. There are many versions of this but, for example, people can lessen the time it takes to overcome the conflict by spending some time looking at pictures of Nelson Mandela & Martin Luther King, which reinforce a positive view of African Americans for your AU to factor in, before taking the test. One of the most telling findings is that even when African American people take the test they show the same bias. Their AU has the same input to synthesise. If you want to try the tests then go to - Harvard University IAT Tests.
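The mechanics of the test are simple enough to sketch. The score is essentially the gap in average reaction time between the pairings that match the AU's learnt associations and the pairings that conflict with them. Real IAT scoring is more elaborate, and the numbers below are made up for illustration:

```python
# Made-up reaction times in milliseconds for the two blocks of the test.
congruent_ms   = [612, 587, 634, 598, 605]   # pairings matching the AU's learnt associations
incongruent_ms = [721, 695, 740, 688, 710]   # pairings that conflict with them

def mean(xs):
    return sum(xs) / len(xs)

effect_ms = mean(incongruent_ms) - mean(congruent_ms)
print(f"IAT-style effect: {effect_ms:.0f} ms slower on the incongruent block")
# Real IAT scoring standardises this gap (a 'D score'), but the raw
# latency difference is the heart of what the test measures.
```

A tenth of a second of hesitation your conscious self never notices, but which the stopwatch does. That is the AU showing its hand.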
None of which should be taken as an excuse for the racist issues that have occurred in football. Rather to be used as a way of understanding that you do not really know 'yourself'. To be really not racist you have to expose yourself to people from other races and be able to experience the positive side of that. So that your AU can re-work its world view. Deciding at the conscious level is not enough. What you might suggest is that when put under pressure and when we get agitated we stop thinking rationally and the AU takes over. It almost certainly has a view which will be taken as racist if not filtered through your conscious.
Further than that the book points out that while the AU is a terrific operator its effectiveness also starts to break down when you are under pressure. If your heart-rate goes too high and the amount of adrenaline in your system is too much even the AU stops blinking properly. You wonder about these points with jockeys and whips, another hot topic in British racing when this article was written. With the fifth amendment to the new rules over the whip's use having just been proposed after major changes in October, 2011.
How about the whip and the same unconscious thoughts and processing? "I do not beat horses", says the jockey, but how do you know? At the conscious level you do not, but your AU has been fed with endless guff about 'strong jockeys' and what that entails. The AU will do what it feels the need to when given the chance. A jockey is put under pressure, the unconscious bubbles near the top, and then the adrenaline of a high profile race clouds the conscious view. Then what happens? Jockeys almost certainly stop 'counting' in those moments, so what else shows through?
Perhaps that was the sort of pressure that saw Dettori go bonkers when riding Swain in the Breeders' Cup. A jockey riding for the Grand National win or a Cheltenham success will just stop 'thinking' at the conscious level. They need the training to handle these moments. It strikes B2yoR that a lot of the current issues over whips stem from trying to implement changes on jockeys trained, in many cases, years ago in a different world, without understanding what is required to alter the AU.
[Aside = Having managed to avoid individual Pundit references so far in this document, an example came up while writing this that was too good not to include. Mike Cattermole is much less than a logical thinker in B2yoR's view, as you may have gathered. On TV, after the latest changes to the 'new' BHA whip rules had been presented (under Paul Bittar, who has mastered the 'First Impressions' required very well), there was Mike, ad-libbing to camera. Bouncing around on his high horse about how good it is that Bittar has done the 'right' thing with the whip rules; in Mike's case a view based on the fact that he wants to be mates with everyone in his sphere rather than on pitiless, forensic, logical analysis. Then he repeats the same blast at the RSPCA as had become the fashion over the previous week. All standard repeating of the 'Party Line' within racing. Hey-Ho.
But, then the lack of logic kicks in and he says - "It is not necessarily about how many times the horse is hit but how. One hit by some jockeys makes me wince..". Which way is it, Mike? Is the Whip Debate a bit of nonsense got up by the animal rights types and the RSPCA, or is there a real presentational challenge for Racing here? You cannot have it both ways at the same time. Cognitive Dissonance seems to be his regular, comfortable, state-of-mind rather than something that troubles him. Then he started looking at the flowers and saying how "Awesome" they were.]
This last set of three items steps the level up further from the more 'popular psychology' style stories of the previous two books. Some interest in science generally would be useful to get the best out of them, as would a basic understanding of the scientific approach and the main statistical 'proof' method that is used. But, none of the three are beyond the scope of an interested reader, lacking some of that background, who is willing to put a bit of effort in.
Having, hopefully, put a lot of doubt in your mind about what and who to trust so far, the three pieces here should remove any final unwarranted beliefs you have in the efficacy of pundits and experts. The first piece takes apart the statistical test which is the core of much scientific research. Stats is a tough, difficult and counter-intuitive subject. Most scientists will only be taught the basics and will almost certainly have little understanding of the subject beyond knowing how to do the calculations when told, by a qualified Stats person, what data to record and how. If you read the piece through and realise that much of the scientific research is based on a suspect understanding of the stats methods used, then what do we really know?
The second book takes this further and considers how medical treatments are tested and evaluated for usefulness. Never mind the 'Stats' involved, the book reveals other problems. A lot of research is never done; when it is done many parties will try to bias the design to suit their ends, or they will design it incorrectly and make the output worthless. Many pieces of research never get published, which introduces further biases into what is known. Even if research is done and published, people will still try to bypass it and go direct to doctors and patients to get their drug, or whatever, sold anyway. The book documents millions of lives that have been needlessly lost over the years because of a full range of errors and plain malpractice in research.
So, you cannot trust what doctors and academics say without checking. A good example came up while writing this piece, having just re-read the 'Testing Treatments' book. One of the ruses they document is that a pharmaceutical company may produce a new drug and hope it does something useful. They do a simple test on some cells in a petri dish and get a vague 'positive' result which needs checking and still needs to go through full testing in animals and then humans. In the meantime, though, why not just try to get the NHS to prescribe it anyway on the basis of your basic research? But, that gets knocked back by the body that evaluates and approves drugs for use in Britain (NICE). Go and do some proper research at this point? Well, perhaps have a go at whipping up a public outcry first that the NHS is putting "money before lives..".
People may not want to believe the pharmaceutical spokesperson here, so better if they get one of their tame academics to stand up and say that the NHS is denying a proven treatment to ill people. With that knowledge you can then approach this sort of spin story with better preparation. While writing this, the news on the radio includes an interview with an academic who says that the NHS is denying use of a 'proven' valuable drug in fighting Prostate Cancer. The body which evaluates research says there is no evidence of any value in its use. No-one asks the academic to cite the research that 'proves' usefulness. The news organisation does not want to engage with the real issues so just lets him get away with a hype/scare story. Only one way to understand this further and that is to go and read it up for yourself, using the knowledge that reading 'Testing Treatments' would give you.
The final book - "The Trouble with Physics" - is the most arcane of them all but is useful to include because of the experts it covers. Most people would consider Theoretical Physics to be a high level activity full of geniuses working on the most fundamental knowledge the human race can hope to understand. Where did we come from? Why are we here?, and so on. Einstein was the most famous of the Theoreticians and is the name used in general conversation for the remarkable 'Genius' who is able to perceive 'Truth' directly. Which he could not; he needed help with his Maths and spent the last 20 years of his life working in increasing obscurity on ideas which people thought were worthless then and are considered 'junk' now. But, that is another story. He did manage to graft away for 15-20 years, when at his best and most productive, and come up with two or three remarkable insights. Although, probably, all would have occurred to other people in due course given the areas that were being investigated at that time.
The book is written by a Theoretical Physics practitioner with 30 years' experience and documents the mess the discipline has got into during that time. There has been very little progress for that period, and the 'Theory' which was long considered to be the answer to all the problems has delivered nothing worthwhile but has eaten up the careers of thousands of the most able minds available, as well as huge quantities of research funding.
The book is in five parts, split between an overview of the theory and work involved in the first four sections, then a realistic description of how academic research is structured and undertaken in the last one. If you have no interest in the theoretical side, the final part, where he considers how his part of the academic world got themselves into this state, is still very interesting. Another insight into how experts know less than we think and can get themselves into some terrible tangles by being human.
|
[Update February 2014:- Link to original 'Science News' article fixed after their Website upgrade invalidated the original one. As a further example of how the problem with 'P-Numbers' & incorrect use of 'Statistical Significance' is so ingrained, and therefore hard to remove, here is a similar article from 'Nature' in Feb. 2014 = Scientific method: Statistical errors. ]
The first piece here comes from the US magazine 'Science News' from November, 2011. Its target is to highlight the misuse of 'Stats' by well-intentioned scientists. The vast majority are not experts in 'Stats' and will have had varying levels of training in that discipline. They will often be guided by statisticians when designing their experiments but these specialists are in short supply, so may not be available, and therefore the experiment and data design may be sub-optimal, if not plain flawed. Here is a sample quote from the article :-
Whether the 'Stats' for an experiment have been designed correctly or not, scientists will still have to be very careful in stating what they feel their results have 'proved'. Many will come to wrong conclusions or, at least, go too far in asserting what they have found out. Consider the following quotes from early in the article :-
The article takes this further by suggesting that the statistical tests and methods used are suspect, to various degrees, even if applied correctly. Many statistical tests are elderly and depend upon work from many years ago. As you would expect, statistics itself is in a process of constantly checking and refining its approaches and methods. These improvements may well uncover limitations, and perhaps flaws, in existing tests and methods. As this quote alludes to :-
The most widespread use of 'Stats' is to test for 'Statistical Significance' of a finding. A phrase that would be known to many in the wider public and an approach used across all scientific disciplines. In an individual experiment a test will be run on the outcomes to see how likely it is that the result achieved would have happened by chance, i.e. through the random variations you will get in individual results from factors outside of the experimenters' control.
For example, the originator of this test worked with crops and wanted to test whether he got increased yields using various fertilisers. At the start of the experiment he would state that the application of fertiliser would have no effect on the yields obtained. This is his 'Null Hypothesis', meaning he is hypothesising finding no effect (the 'Null' part) from fertiliser use. This is then the statement he is trying to assess the worth of with his 'Statistical Significance' test at the end. What variation in yield would you get normally, and is the effect of fertiliser use, increasing or decreasing yield (positive and negative correlations), producing results which are well outside the normal variation range?
But, the test is never absolute proof and certainly not from a single experiment. It should be repeated many times to increase confidence in a 'significant' result. Sample Size also comes in here because the larger the sample the more likely it should be that any significant finding is real, and the size of the sample is part of the stats algorithm applied in the significance calculations.
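As a minimal sketch of the mechanics, assuming invented yield figures, here is how the fertiliser test above might look in Python using a standard two-sample t-test. The p-value that comes out is the probability of seeing a difference at least this large if the Null Hypothesis were true.

from scipy import stats

# Hypothetical yields in tonnes per hectare.
untreated  = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.1, 3.7]
fertilised = [4.6, 4.9, 4.5, 5.0, 4.7, 4.4, 4.8, 4.6]

# Two-sample t-test against the Null Hypothesis of 'no effect'.
t_stat, p_value = stats.ttest_ind(fertilised, untreated)
print("t = %.2f, p = %.4f" % (t_stat, p_value))

Convention rejects the Null when p falls below 0.05, but note what the number actually is: a statement about the data assuming no effect, not the probability that the effect is real. That distinction is where much of the misuse the article documents creeps in.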
To put this into racing terms, say your pet theory is that trainers using blinkers first time on a horse brings about improved performance and profitable betting. To test this 'properly' you need to design your study with care, and your Null Hypothesis might be that 'blinkers first time' make no difference to expressed performance or to profitability. You need to know things like what strike rate a trainer usually gets, and how profitable he is to follow, to be able to compare your results against 'business as usual'. Getting a big enough sample to hint at some significance, even at a low level, will be difficult and you will need to be able to repeat the results consistently. How likely is this to happen and then be analysed properly? How many people who write about their 'Pet Theories' for the general racing audience have actually done any of this?
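For illustration only, and with every number invented, a check like that could be sketched as a simple binomial test: how surprising is the observed number of wins with first-time blinkers if the trainer's usual strike rate still applied?

from scipy.stats import binomtest   # available in SciPy 1.7+

baseline_strike_rate = 0.12           # hypothetical: trainer's usual win rate
runs_with_first_time_blinkers = 40    # hypothetical sample size
wins = 9

# Null Hypothesis: first-time blinkers leave the strike rate unchanged.
result = binomtest(wins, runs_with_first_time_blinkers, baseline_strike_rate)
print("p = %.3f" % result.pvalue)

Even with a hopeful-looking win tally, samples of this size rarely produce a convincing p-value, and a single 'significant' run would still need repeating before it meant anything. Which is rather the point being made above.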
But, then it gets worse because the article casts doubt on the use of such 'Statistical Significance' tests at all. These are at the core of much of the scientific research already 'in the book', remember. Here are a couple of quotes about the use of these tests (based on the 'P' value, for Probability) :-
The quotes and notes so far cover just the start of the article and it goes on to deal with various illogicalities, misinterpretations and so on. It also suggests how the approach might be improved by incorporating an even older approach ('Bayes Method') from the 18th Century, and understanding the logic of that method is a good example of how counter-intuitive, plain slippery, stats can be. But, useful reading to get a feel for the issues that need addressing and the effects we, as the general public, are exposed to when scientific research is presented to us by information peddlers. Through the whole range from well intentioned scientists to all-out Charlatans and liars.
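A small worked sketch, with made-up numbers throughout, shows the Bayesian point and why it is so slippery: the p-value, P(data given no effect), is not the same thing as P(effect given the data), because the prior plausibility of the theory matters.

# All three inputs are invented for illustration.
prior = 0.05            # suppose only 1 in 20 theories tested are actually true
power = 0.80            # chance a real effect passes the significance test
false_positive = 0.05   # the conventional significance threshold

# Bayes' rule: probability the effect is real given a 'significant' result.
posterior = (power * prior) / (power * prior + false_positive * (1 - prior))
print("P(effect real | significant result) = %.2f" % posterior)

With these inputs a 'statistically significant' finding is real less than half the time (about 0.46), which is exactly the counter-intuitive gap between what the tests say and what people take them to mean.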
One last quote of warning to ponder :-
|
This book is closely related to the 'Bad Science' book that was recommended in the first 'Pointless Punditry' document. The author of that book writes the foreword for 'Testing Treatments', for example, and extensively promotes it. A couple of quotes from his foreword :-
We are moving up a level from the statistics of the first article to see how the full scale trials that will use those stats are designed and used. But, told as a story about the medical service we have and the problems that have been caused by these trials being done ineffectively, or not at all, or failing to be written up and published, and so on. The book begins with some chapters looking at common misconceptions about treatments that trials can address. For example, 'New', 'More' and 'Earlier' would usually be seen as beneficial when applied to medical treatments. The book explains, with examples, how this is often not the case and the problems, which include preventable deaths, this can cause. Education, of both the medical professionals and the patients, is required to produce a better system.
The book then covers how treatments can be fairly tested and compared to ensure the best available, and safest, options are used as early as possible. It also deals with the current problems in this process and the dangers they cause. You should be able to recognise the usual list of human biases, vested interests and frailties that appear throughout this document. It concludes by suggesting how the current situation can be improved to produce a better healthcare system delivering the right treatments, whilst including and informing the patients in that overall process.
One point to note is that although the book is written by 'Good Guys' who want to produce data properly, do proper trials, and so forth, they have no special abilities which mean they do not fall into some of the traps people are prone to. The previous article on statistical usage picks holes in a number of items that the 'Testing Treatments' approach uses. For example it touches on just how well 'Randomised' a 'Randomised Clinical Trial' will actually be, and what that really means now we are getting a better understanding of the underlying Genetics. It also warns about the usefulness of 'Meta-analysis', where results from different trials are combined to give an average final figure. Both randomisation and Meta-analysis are cited as highly useful tools in the 'Testing Treatments' book without the concerns being addressed fully.
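To show what that 'average final figure' involves, here is a minimal sketch of the inverse-variance weighting a basic Meta-analysis uses, with all the trial results invented. Each trial's effect estimate is weighted by its precision and the weighted mean is the pooled figure.

# Hypothetical effect sizes and variances from three published trials.
effects   = [0.30, 0.10, 0.45]
variances = [0.04, 0.01, 0.09]   # smaller variance = more precise trial

# Inverse-variance weighting: precise trials count for more.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print("Pooled effect estimate: %.3f" % pooled)

The arithmetic is sound, but if the trials that never got published skew one way then the pooled number inherits that bias no matter how carefully it is calculated. Which is the concern raised above.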
Ultimately, an experiment or trial is trying to model the very complex real world in a bounded and measurable way. Which means that a wound up Stats person can find fault with anything, if they want to (see the note on 'kind' & closed systems in the 'Wrap' section). Which means there will always be some discussion over what constitutes a 'Fair Trial' and what level of certainty is required in the results. Another area the book touches on only briefly, but which is likely to become increasingly important, is tailoring treatments more to individuals. At a basic level we are learning that genetics plays a part in how individuals react to specific treatments. How does this affect the types of trials you do when they are currently targeted at trying to find, by averaging, the best One-Size-Fits-All treatments?
But, overall the book should shake your belief in how much the Medical practitioners you have to deal with know but also empower you to ask the right questions of them.
|
This book was published in 2007 and documented the problems that the author saw with the state of Theoretical Physics at the time. In the years since little has changed in terms of the discipline still being unable to make progress on a lot of difficult topics. For the record, the 'Five Great Problems in Theoretical Physics' he saw as outstanding were :-
The author's starting point is that there was rapid progress in theoretical physics over a period of 300 years or more, going back to Galileo & Newton. There had never been a period of 10-20 years in that stretch of time when major theoretical advances had not been made and then soon backed up by experimental data or observations. But, this ground to a halt in the mid-1970s once the Standard Model had been put in place and experimental verification for it started coming in. The only substantial change to the list of points above since that time is that the fifth, Dark Energy and Dark Matter, has been added to try to explain unexpected results that have presented themselves in the intervening years. But, this point means that around 95% of what the Universe is made of is in this 'Dark' category and the Theoreticians have little idea what it is.
The question the book poses is how the Theoretical Physics community has got into this position, where little progress is being made and the community seems ill-equipped to come up with the bold new thinking that might break the impasse. The author tells the story of how a single approach - String Theory - went from being a backwater in 1984 to become the massively dominant force in Theoretical Physics. How people working on this approach now make up most of the community working on the 'Theory of Everything' and enjoy huge funding despite the lack of success. String Theory had delivered little by 2007 despite the huge input, and had got into a dead-end which its supporters were desperately trying to rationalise into a 'success'. This quote from the author :-
This is not an academic debate; it is about the hundreds of millions of pounds all of us have invested, through our taxes, in funding worthwhile research because we want to know some fundamental truths. Although a lot of the book is about the technical story of how String Theory has developed, the author goes on to consider, at length, how the problems it has run into are bound up with all the human and informational problems covered in the earlier pieces above. Flawed experts with biases, post-rationalising their lack of success to preserve their own worth and investment. Insider groups protecting themselves and stopping questioning and progress in other areas. And so it goes on, but with what most would consider to be some of the most intelligent people in the world working on the most fundamental problems. Not a backwater like racing, staffed with many who are mediocrities.
If the rest of the book passes you by then the chapters in Part 5 are still useful to further understand how the way humans organise themselves promotes these problems. Further, how once those set-ups are in place it seems very difficult to shift them. One last example from the book to link it back to the previous sections. Yet another example of where being a 'Confident' expert in predicting the future wins you the 'prize', even when it is undeserved. A final quote related to how the swaggering and bombast of String Theorists has enabled them to win over the funding bodies and outsiders based on their unfounded optimism and self-belief :-
If you read everything here you could find it depressing. A lot of things seem messy around information and its presentation. Who, and What, can you trust?
A more upbeat view is that this is the situation we should expect, and understanding the issues involved means we can hope to address them more fully. In every other field of human endeavour we expect large abstract nouns like 'Progress', so why not in 'Information'? Of course we do not know all the answers and struggle to deal with information properly; we are still learning and improving. Why should we expect to have all-seeing Experts?
For example, we have Power Generating Stations of various types. Some send out pollutants into our atmosphere with effects that we are still trying to put a scope on, although we are pretty sure Climate Changes, of some sort, will ensue. Some Power Stations go critical and release harmful radiation at times. But, the power they produce enables the human race to make real 'Progress' in lots of areas, despite the collateral damage. We should probably see information processing and presentation the same way. We are using a lot of imperfect tools and are led by a lot of sub-optimal Experts, but we still manage to get a lot of useful things done.
With the human 'Optimism Bias' showing through, we believe that we can make better Power Stations as we work at it and find ways to clean up the collateral mess we have caused along the way. We should see the same process with information. A bright future is out there where we will do so much better and look back to what we do now and shake our heads at how primitive it all seems. But the friction & drag caused by shoddy Pundits and Charlatans along the way should not be underestimated. Although frustrating, directing some time and resources to reduce their impact as we move forward would almost certainly prove to be efficient overall.
We are helped by the fact that the world is 'Kind' in many ways, in the same manner as horseracing. Although there are a huge number of possible outcomes in any situation, the connections between the 'particles' interacting mean that only a relatively small set actually occur. In experiments that means we can still make progress even with our flawed experiment designs and shaky grasp of statistical significance. Similarly in racing, there are only so many 'Stories' that any race (a set of horse particles interacting) can actually produce. Which means that even applying rough tools in a shoddy manner can give you the feeling of being able to do some worthwhile predicting.
But, as with all the other areas, we should be looking to improve and Progress and not, as Pundits prefer, go endlessly around using the same information and approaches. The question remains the same: how to shake up the existing group of Pundits? How to map out, and achieve, some real Progress rather than letting them drag the system down to their level of mediocrity?