Thursday, June 24, 2010
I was given a link to the updated guidelines on the Environment Agency website.
Although I don't do a lot of purely environmental work, it is always a consideration, especially in relation to major accidents. These guidelines will be a useful reference when checking company emergency plans to make sure all the environmental aspects have been properly covered.
Monday, June 21, 2010
HSE Human Factors Roadmap
Recently released by the Health and Safety Executive and available on their website.
The document presents a graphical framework explaining how companies can address human factors. The introductory paragraph reads "The following framework is intended to guide the reader through a practical approach for linking major accident hazards (MAH) to the assured performance of humans engaged on safety critical tasks associated with those hazards. The framework is presented as a human factors journey with key milestones. For each of the milestones there is a link to human factors topics which may be investigated by Seveso inspectors. Most of these topics are described in more detail in the UK Human Factors Inspectors Toolkit."
The framework works through the following stages:
* Major accident hazard scenarios
* Safety critical tasks
* Task analysis
* Human error analysis
* Procedures
* Training
* Consolidation
* Competence assurance
It shows that if the above are monitored and reviewed, the outcome is assured human performance.
The framework includes a side-stream covering maintenance and inspection, branching off from human error analysis. Its stages are:
* Engineering/automation
* Maintenance and inspection
* Task analysis
* Human error analysis.
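To see the shape of the framework at a glance, the two streams can be written down as plain data. The sketch below is purely my own illustration of the milestones listed above; the data structure and helper function are assumptions, not anything defined by HSE.

```python
# Illustrative only: the milestone names are taken from the HSE
# framework as summarised above, but the structure and helper function
# are my own assumptions, not part of the HSE publication.

MAIN_STREAM = [
    "Major accident hazard scenarios",
    "Safety critical tasks",
    "Task analysis",
    "Human error analysis",
    "Procedures",
    "Training",
    "Consolidation",
    "Competence assurance",
]

# Side-stream covering maintenance and inspection, branching off from
# human error analysis.
SIDE_STREAM = [
    "Engineering/automation",
    "Maintenance and inspection",
    "Task analysis",
    "Human error analysis",
]

def performance_assured(completed: set[str]) -> bool:
    """Human performance is treated as 'assured' only when every
    milestone in both streams has been completed (and, per the
    framework, is kept under monitoring and review)."""
    return all(stage in completed for stage in MAIN_STREAM + SIDE_STREAM)
```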
I think this framework is simple and practical, and it is really useful that HSE have set out what they expect. Equally, I am very pleased that it is very close to what I do with my clients!
Monday, June 14, 2010
Group Challenges Proposed Limits on Vial Labeling
Article by Erik Greb on PharmTech.com on 10 June 2010
US Pharmacopeia’s (USP’s) Nomenclature Expert Committee has proposed that printing on ferrules and cap overseals should be restricted, on the basis that healthcare professionals should rely exclusively on package inserts and vial labels for information about drug products. The committee proposed limiting cap messages to a small set of drugs that pose a risk of imminent harm or death in the event of medication errors.
But this has been challenged by the Consortium for the Advancement of Patient Safety (CAPS), which described the proposal as "ambiguous" and warned that it "could unintentionally reduce patient safety."
CAPS hired Anthony Andre of Interface Analysis Associates, an adjunct professor of human factors and ergonomics at San Jose State University, to study the relationship between patient safety and messages on ferrules and cap overseals. A literature review did not find any reported medication errors associated with cap messages, and it was felt that the human-factors principles in the scientific literature did not support the premises of USP’s proposal. In an online survey of healthcare practitioners, about 80% of respondents predicted that medication errors would increase if many of the currently allowed cap messages were prohibited, and roughly 69% disagreed with USP’s approach to making warnings more prominent for healthcare professionals.
An empirical human-factors study was also carried out. The 20 participants were nurses, physicians, and pharmacists who normally handle drug vials and check drugs against prescriptions. Each had to select the correct drug from a group of drug vials, some of which had cap labels that would be prohibited by the USP proposal and some of which did not. According to the report, participants selected drugs with cap labels more accurately and more quickly than unlabelled drugs, and rated the labelled drugs as easy to use more often.
I can't comment on whether USP or CAPS is right on the subject. But I am concerned that the CAPS study has only looked at the likelihood of error and not the risk. Some drug administration errors can be fatal and irreversible, whilst others are far less final. It could be that reserving this labelling for where it really matters may, as suggested, lead to more errors overall but actually reduce risk.
Just How Risky Are Risky Businesses?
A post on Carl Bialik's Numbers Guy blog on 11 June 2010 considers the role of quantified risk assessment in light of the Transocean disaster.
Apparently "BP didn’t make a quantitative estimate of risk, instead seeing the chance of a spill as 'low likelihood' based on prior events in the Gulf." Other industries, such as aviation and nuclear, tend to use more quantitative assessments, and clearly the question is whether BP should have done the same.
Jon Pack, described as a 'spokesman', is quoted as saying: "If you look at the history of drilling in the Gulf, and elsewhere, blowouts are very low-likelihood, but obviously it’s a high impact, and that’s what you plan for." He added that "industry will need to take a second look at measures put in place to prevent hazards," but said this would likely focus on changing processes rather than on calculating risk.
Barry Franklin, a director in Towers Watson’s corporate risk management practice is quoted as saying "My recommendation to companies faced with low-probability and high-severity events would be to worry less about quantifying the probability of those events and focus on developing business continuity and disaster recovery plans that can minimize or contain the damage."
The post includes quite a bit about human error. Some sections are summarised below.
By observing people at work, Scott Shappell, professor of industrial engineering at Clemson University, has estimated that 60% of problems caused by human error involve skill failures, such as lapses of attention or memory, while 35% involve decision errors: poor choices based on bad information, incomplete knowledge or insufficient experience.
NASA has used similar techniques for decades. Among the biggest components of shuttle risk, according to Robert Doremus, manager of the NASA shuttle program’s safety and mission assurance office, are orbital debris — which has a one-in-300 chance of leading to disaster — main-engine problems (one in 650) and debris on ascent (one in 840), which felled Columbia. Human error is also a factor: There’s a 1 in 770 chance that human error in general will cause a disaster, and a 1 in 1,200 chance of crew error on entry.
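Purely as my own back-of-the-envelope arithmetic (not something the article does), these per-mission risks can be combined, under the strong assumption that they are independent, using P(any) = 1 - Π(1 - p):

```python
# My own illustration: combining the quoted shuttle risks under the
# (strong and simplifying) assumption that they are independent.
risks = {
    "orbital debris": 1 / 300,
    "main-engine problems": 1 / 650,
    "debris on ascent": 1 / 840,
    "human error (general)": 1 / 770,
    "crew error on entry": 1 / 1200,
}

p_no_disaster = 1.0
for p in risks.values():
    p_no_disaster *= 1 - p

combined = 1 - p_no_disaster
print(f"Combined chance from these causes alone: about 1 in {1 / combined:.0f}")
# Prints roughly "1 in 122" - several individually small risks add up
# to a much larger overall chance of disaster.
```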
Human error adds to the imprecision. “Human reliability analysis is a challenge, because you could have widespread variability,” said Donald Dube, a senior technical advisor who works on risk assessment for NRC. “But it is founded on real data.”
In nuclear power plants that have been operating for some time, human errors are the most common ones, said Paul Barringer, a consulting engineer and president of Barringer & Associates Inc. “People are at the root” of many risks, Barringer said.
Doug Wiegmann, associate professor of industrial and systems engineering at the University of Wisconsin, Madison, has studied human error in cockpits, operating rooms and other contexts. "The general human-factors issues are the same whether you’re in a cockpit or anywhere else," with communications, technology design and checklists chief among them.
ITV HD goal miss with adverts blamed on human error
According to Kate McMahon in The Mirror on 14 June 2010
Steven Gerrard scored a goal for England after about four minutes in the World Cup game against the USA. Unfortunately, people watching the match on ITV HD missed it because an advert was being shown.
Clearly, showing the advert at that time was not planned. ITV have blamed "human error" at a French supplier after more than 1.5 million viewers missed out on the first England goal of the World Cup. The Daily Mirror understands it was caused by an operator hitting a switch at the wrong time in the French company's London office.
Angry executives organised a crisis meeting yesterday morning to ensure the gaffe wouldn't happen again.
Windscreen water infection risk
Article on the BBC website by Emma Wilkinson on 13 June 2010
The Health Protection Agency has been studying Legionnaires' disease and concluded that 20% of cases may have their source in windscreen wiper water.
Adding screenwash kills the bacteria and could save lives, the Agency advised.
Legionnaires' disease is fairly rare, with 345 cases reported in 2009. Early symptoms feel similar to flu with muscle aches, tiredness, headaches, dry cough and fever. It mainly affects the over 50s, is generally more common in men and is fatal in around 10-15% of patients.
Most cases are sporadic and a source of the infection is not found. But it was noticed that people who spend a long time driving were at higher risk of infection.
A pilot study found traces of Legionella in 20% of cars that did not have screenwash, but none in cars that did.
The advice is clear: add screenwash to your windscreen washer water.
Sunday, June 13, 2010
Organising events with risks
There is a concern across companies and the wider community that health and safety is stopping people from doing things that are worthwhile because there may be a risk. A short article in Tips and Advice Health and Safety provides a useful summary of the issues.
The article points out that the tabloid press gives the impression that every type of outdoor event has been banned. However, the HSE and other safety bodies are trying to say this is not the case. The problem is that people are not sure what would happen if someone did get hurt and the organisers ended up in court.
A case was heard in the High Court earlier in 2010 concerning Robert Uren, who was paralysed when he hit his head on the bottom of a pool whilst taking part in an "it's a knock-out" type of event organised for the RAF. The judge concluded that the organisers were not at fault, recognising that the fun of the game included a degree of physical challenge. He said "a balance has to be struck between the level of risk involved and the benefits the activity confers on the participants."
The article suggests that participation in any potentially dangerous event should be voluntary, that participants should be well informed of the hazards, and that they need to take responsibility for deciding whether they are suitably fit and prepared to take part. Using experienced organisers is probably a good idea, but it is still important to make sure they have a good understanding of the risks and hold the appropriate insurance.
Thursday, June 10, 2010
Oil rig culture can breed mistakes
Article by John Hofmeister in the Calgary Herald on 9 June 2010
Reflecting on the Deepwater Horizon blowout and oil spill, Hofmeister points out that whilst a number of possible technical failures may have led to the accident, evidence from other major accidents shows that human factors are likely to have made a major contribution.
He describes a deep water drilling rig as "many people, highly skilled, brilliant on the job, with decades of knowledge and comprehension of what they are doing, motivated by high pay and great benefits, working for two weeks on and two weeks off. A deepwater rig is also a village dedicated to a single task, yet organized by small neighbourhoods of specialty skills and independent businesses." It is a good example of an oil industry that has fragmented itself through outsourcing, because of economic drivers stemming from oil-price volatility and the anti-competitive requirements of most governments.
Hofmeister identifies chain of command and communications as the two human factors he expects to have had the greatest influence.
"Chain of command in high-risk endeavours is the most important human success factor. It must be clearly understood and must work under all circumstances." But on a drilling rig, individuals do not necessarily know who is in charge. There are multiple chains of command from several different subcontractors, and staff working alongside each other may barely know one another.
"In the worst cases, decision-making can lead to buck passing until no one knows where it stops. Legal contracts set the ground rules for who is responsible for what. When disputes arise, companies disagree, battle or reconcile at higher levels on or even off the platform." Efforts can be made to formalise the various chains, but "People are still people" who work for their own boss, and may have relatively little understanding of the overall operation.
Person-to-person communications is the other factor. "People communicating can be respectful and polite; they can also be demeaning, abrupt or abusive."
Monday, June 07, 2010
IBM distributes virus-laden USB keys at security conference
Article by Asher Moses in the Sydney Morning Herald on 21 May 2010
IBM distributed virus-laden USB keys to attendees at Australia's biggest computer security conference. The incident is ironic because conference attendees included the who's who of the computer security world, and IBM was there to show off its security credentials.
Crisis response time measurement
Blog post at Houppermans on 28 April 2010
Here is a simple guide to measure your response time to a crisis:
1. Take a copy of your business continuity plan or guide.
2. Carry it to a safe place.
3. Set fire to it and measure how long it burns.
Speed is essential to deal with a crisis. Reacting appropriately in a timely manner minimises the risk of further escalation, be it a fire, toxic substance release, kidnapping or other grave situation.
Many organisations provide guides that are just too big - one contained almost 200 separate recovery processes, each extensively documented.
The problems with this include:
* Exercise is critical to facilitate smooth, low risk execution. To ensure so many processes are sufficiently practised presents major challenges.
* Recovery processes must be flexible - changing circumstances will endanger any format that is too prescriptive.
* Processes must be light and focused: extraneous information distracts and costs time to read, delaying appropriate reaction.
* Volume causes critical delay. Starting crisis management with a choice of almost two hundred separate processes loses precious time, with an added risk of choosing the wrong starting scenario. After all, this initial selection is made under stress.
Seven to nine crisis handling processes should cover every need. This number is based on practical experience and on client feedback where recovery processes were exercised or used for real. Good processes are slim, efficient, focused and flexible, stripped from anything that can distract from the actual crisis at hand.
The worst time to discover the process problems is during a crisis…
Video of an early ergonomics study
This is an excerpt from a half-hour documentary on the life and work of Frank Gilbreth. Gilbreth lived at the turn of the last century and was a student of Frederick Taylor. He studied work to make it more efficient. This excerpt is about his work to improve bricklaying and find the "one best way" to lay bricks. In doing so he made bricklaying more efficient but also safer. More on his life can be found at the Gilbreth Network website.
Human error at meat plants gives UK beef farmers an annual £14m bonus
Article by Andrew Forgrave in the Daily Post on 18 May 2010
Trials of Video Image Analysis (VIA) show the machines are more accurate than human operators, who are instructed to give farmers the benefit of the doubt in an estimated 6% of cases.
The National Beef Association is calling for compensation for affected farmers to cover the £14 million that may be lost if VIA is introduced.
How You Work Can Affect How You Feel
Article by Dr. Jennifer Yang on Health News Digest on 18 May 2010.
It provides a good summary of typical health and medical problems caused by office work.
Computer work may appear to be a low-effort activity when viewed from a total body perspective, but maintaining postures or performing highly repetitive tasks for extended periods can lead to problems in specific areas of the body. These include:
* Cervical myofascial pain syndrome, neck and shoulder pain that can be caused by poor posture and muscle overuse when sitting at a computer workstation for prolonged periods of time.
* Rotator cuff disease, affecting the muscles and tendons that hold the shoulder joint in place (the “rotator cuff”). Shoulder pain and weakness limit movement and are typically caused by frequent performance of overhead activities and reaching.
* DeQuervain’s tenosynovitis, an inflammation of the tendons of the muscles moving the thumb, caused by repetitive pinching motions of the thumb and fingers (such as from using joysticks or scissors).
* Ulnar neuropathy at the elbow, which manifests as numbness in the pinkie and ring fingers, hand clumsiness and weakness, and pain from the elbow down the forearm. Symptoms are due to damage to the ulnar nerve that stretches across the elbow joint, and are associated with repetitive elbow movements or prolonged and frequent placement of the elbows on a desk or armrests.
* Carpal tunnel syndrome, the most widely recognized of all cumulative trauma disorders (CTDs), resulting in pain, tingling and numbness from the heel of the hand through the middle finger, sometimes including the wrist; in severe cases, hand grip weakness and clumsiness are also common. Repetitive strain and overuse of the wrist joint causes inflammation of the tendons, which in turn crowd around the median nerve that runs alongside the tendons. Any repetitive motions involving the wrist, such as excessive keyboard typing and computer mouse use, are common causes of carpal tunnel syndrome.
BA - 'stalinist' bosses and safety concerns
Simon Calder writing in The Independent on 29 May 2010
Safety has been one of the issues raised during the long running dispute at British Airways. One union demand was assurances about cabin-crew rosters on new aircraft, to avoid existing staff being obliged to work aboard "an ageing fleet of old, broken, ill-maintained aircraft". Apparently BA flies an older fleet than most carriers, including its low-cost rivals and even Aeroflot.
The article says "Older aircraft are in no sense unsafe, since they are impeccably maintained by BA's engineers." But Professor Martin Upchurch of Middlesex University Business School believes "an embedded culture of bullying and authoritarianism" by the airline's top management could jeopardise safety.
In a report commissioned by Unite and sent to BA's investors, the Professor of International Employment Relations warns:
"The reporting of 'errors' may diminish if staff feel vulnerable and insecure."
"Employing newer, younger staff on lower terms and conditions may not only affect employee commitment (and customer satisfaction) but also have implications for safety when evaluated through 'critical incidents' or 'human error' reporting."
A spokesman for BA said:
"Safety of our customers and crew are our highest priority and we make no compromises. All of our cabin crew are trained to the highest standards and meet all regulatory requirements."
Professor Upchurch also describes the use of disciplinary action against cabin crew as "being reminiscent of the worse [sic] aspects of methods used by Stalinist secret police".
Writing a good checklist
My last post from The Checklist Manifesto by Atul Gawande
Bad checklists are vague, imprecise, too long, hard to use and impractical. They are typically written by people sitting in offices and treat the user as "dumb." They "turn people's brains off rather than turn them on."
Good checklists are the opposite. They provide reminders for the most critical and important steps, which even a highly skilled person could miss. Most importantly, they are practical, helping people manage complex situations by making priorities clear. They have their limitations, and need to be perfected through use.
According to Dan Boorman of Boeing you have to decide what is going to prompt the use of a checklist and what type of checklist is required. The two main types are:
1. Do then confirm - people do the steps from memory, then stop and go through the checklist to make sure they have not forgotten anything.
2. Read then do - people follow through the checklist like a recipe, ticking steps off as they do them.
The rule of thumb is to have 6 to 9 items on a checklist (but this can depend on circumstances). Ideally it should fit on one page and be free of clutter and unnecessary colour. Use familiar language. Overall, you have to make sure your checklist achieves the balance between providing help whilst not becoming a distraction from other things.
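These rules are easy to capture in code. The sketch below is my own illustration of the two checklist types and the 6-9 item rule of thumb; the names and validation are assumptions, not anything published by Boeing or taken from the book:

```python
# Illustrative sketch only: models the two checklist types and the
# 6-9 item rule of thumb described above. All names are assumptions.
from dataclasses import dataclass, field
from enum import Enum

class ChecklistType(Enum):
    DO_THEN_CONFIRM = "do-confirm"  # work from memory, then pause and verify
    READ_THEN_DO = "read-do"        # follow like a recipe, ticking items off

@dataclass
class Checklist:
    title: str
    kind: ChecklistType
    items: list[str] = field(default_factory=list)

    def __post_init__(self):
        # Rule of thumb from the text; real checklists can vary with
        # circumstances, so a warning may be more appropriate in practice.
        if not 6 <= len(self.items) <= 9:
            raise ValueError("aim for 6 to 9 items on a checklist")
```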
Gawande uses an example of a checklist for an engine failure on a single-engined aircraft. It has only six steps, but the number one step is "fly the plane." It has been found that pilots can be so desperate to restart the engine that they become fixated and forget to do what they can to survive without an engine.
Sunday, June 06, 2010
More checklists
More from The Checklist Manifesto by Atul Gawande
Gawande uses a number of non-medical examples to illustrate the role of checklists.
Hurricane Katrina, which devastated New Orleans, provides examples of what can go well and what can go wrong. The main problem was that there were too many decisions to be made with too little information. However, authorities continued to work as if the normal way of doing things applied. This meant the federal government wouldn't yield power to the state, the state wouldn't yield to local government, and no one would involve the private sector. As a result, trucks with vital supplies of water and food were not allowed entry because the authorities did not have them on their plan, and bus requisitions required for evacuation were held up for days. The root of the problem was that people assumed the normal command and control structure would work for any situation and that there would be a big plan providing the solution. This case was far too complex for that.
Gawande uses Wal-Mart as an example of an organisation that did things much better. Apparently Lee Scott, the chief executive, said in a meeting with upper management: "a lot of you are going to have to make decisions above your level. Make the best decision you can with the information that's available to you at the time, and, above all, do the right thing." This was passed down to store managers and set the tone for how people reacted. The initial focus was on the 20,000 employees and their families, but once stores were able to function, local managers acted on their own authority to distribute nappies, baby formula, food, toiletries, sleeping bags and so on. They even broke into the store pharmacy to supply the hospitals. Senior managers at Wal-Mart did not issue instructions but instead supported the people who were in a position to assist. They found that, given common goals, everyone was able to coordinate with others and come up with "extraordinary solutions."
Gawande sees the key message as being that under conditions of true complexity, efforts to exert central control will fail. People need to be able to act and adapt. There need to be expectations, co-ordination and common goals. Checklists have a place here, to make sure stupid things are not missed, but they cannot tell people what to do.
This is something I can associate with. When suggesting the need for emergency procedures to cover specific types of event, I am often given the response that 'you cannot write a procedure to cover everything.' I totally agree, but I cannot agree that the answer is to provide nothing. Instead, people need brief prompt cards or checklists (of sorts) to help them make the right decisions. Reading these may not be the first thing someone does when confronted with a situation, but they are very useful in training and assessment, and others coming to assist can be pointed to the prompt card to make sure nothing has been forgotten.
Gawande uses the example of US Airways Flight 1549, the plane that landed in the Hudson River in 2009 after it flew into a flock of geese, which caused both engines to fail. Captain Chesley Sullenberger was held up as a hero for carrying out the "most successful ditching in aviation history," but he was very quick to point out that the success was down to teamwork and adherence to procedure. Sullenberger's first officer, Jeffrey Skiles, had nearly as many flying hours under his belt, although fewer on the Airbus A320. Gawande makes the point that this could have been a problem in an incident, because both may have been inclined to take control, especially as the two men had never flown together. But before starting engines the two men had gone through the various checklists, which included requiring the team to introduce themselves, a discussion of the flight plan and how they would handle any problems. By having the discipline to go through this right at the start of the flight, "they not only made sure the plane was fit to travel but also transferred themselves from individuals into a team, one systematically prepared to handle whatever came their way." This was a crew with over 150 total years of flight experience, yet they still went through the routine checklists, even though none of those involved had ever been in an air accident before.
The aviation industry has learnt from experience. The need for much better teamwork was identified following the 1977 Tenerife plane collision, where the Captain of the KLM plane had total command and the second officer was not able to intervene successfully. But it has also been learnt that checklists have to avoid rigidity or creating a situation where people follow them blindly. In the Hudson River incident the main checklist in use was for engine failure. Sullenberger took control of the plane while Skiles concentrated on trying to restart the engines, whilst also doing the key steps in the ditching procedure, including sending a distress signal and making sure the plane was configured correctly. Sullenberger was greatly helped by systems on the plane that assisted in accomplishing a perfect glide, eliminating drift and wobble, to the point of displaying a green dot on his screen to give a target for optimal descent. All this freed him to focus on finding a suitable landing point. At the same time, flight attendants were following their protocols to prepare passengers for a crash landing and be ready to open the doors. Gawande summarises this by saying the crew "showed an ability to adhere to vital procedures when it mattered most, to remain calm under pressure, to recognise where one needed to improvise and where one needed not to improvise. They understood how to function in a complex and dire situation. They recognised that it required teamwork and preparation and that it required them long before the situation became complex and dire. This is what it means to be a hero in the modern era."
The origin of checklists
As promised some time ago, I have summarised parts of The Checklist Manifesto by Atul Gawande
Gawande traces the use of checklists back to 1935, when the US Army Air Corps was looking for a long-range bomber. Boeing developed the Model 299, which was significantly faster and had a greater capacity than anything offered by other companies, and was nicknamed the 'flying fortress'. However, on a demonstration flight it crashed shortly after take-off. No technical failure was identified, and it was concluded that the crash was caused by the pilot forgetting to release a locking mechanism on the elevator and rudder controls. Some people concluded that the plane was too complicated and would never be flyable by humans. Douglas won the contract to supply its less able, but less complex, plane.
Some in the Army were still keen to use the Boeing 299. They realised that the pilots on the plane that crashed were some of the most experienced pilots in the business, so more training could not be the solution. Instead they came up with the idea of checklists for take-off, flight and landing. These lists were simple, brief and to the point. They were short enough to fit on an index card, but they worked. The Army went on to order nearly 30,000 of the planes, which were dubbed the 'B-17.'
Gawande explains that in complex environments, experts are up against two main difficulties:
1. Fallibility of human memory and attention - especially for mundane, routine matters that are easily overlooked when there appear to be more important things to attend to;
2. The tendency to skip checks, even when remembered, because they don't always matter - things that could cause a problem but have never done so in the past.
Checklists help to overcome these difficulties because they provide a reminder of what needs to be done and they instil a "kind of discipline" that makes people do things they may otherwise skip over.
In 2001 a critical care specialist at Johns Hopkins Hospital named Peter Pronovost developed a checklist with the aim of reducing infections by making sure key steps were carried out when inserting a central line (a tube inserted into a vein). At first, nurses were asked to observe doctors and record whether the steps were performed. The result was that in a third of cases, at least one step was missed. Nurses were then authorised to stop doctors if a step was being missed. This was seen as 'revolutionary' because it gave nurses some power over doctors.
After a year of using the checklist, the results were so spectacular that Pronovost was not sure whether to believe them. So they kept monitoring. It was calculated that after a little over two years the checklist had prevented 43 infections and eight deaths, and saved £2 million in costs.
Pronovost observed that checklists helped memory recall and clearly set out the minimum steps in a process. He was surprised by the results because he had not realised how often even very experienced people did not grasp the importance of certain precautions. His results were impressive, and he put a lot of effort into spreading the word across the country (speaking in an average of seven cities per month). But others were very reluctant to take up the idea, either because they did not believe the results or because they simply thought they would not need a checklist.
A checklist can be useful, not just if there are lots of steps that can be forgotten or intentionally skipped, but if there are lots of people involved in the task. Taking construction as an example, Gawande explains that in the past the 'Master Builder' designed, engineered and oversaw all aspects of construction for a building. By the middle of the twentieth century this did not work any more, and instead a team of experts and specialists are required. This is where checklists start to become necessary.
The trouble is that some professions have been slow to realise that the nature of the job has changed and become more complex. In medicine, doctors don't seem to realise that most patients receive attention from many different specialists. If a checklist or equivalent is not used, the result is duplicated, flawed and sometimes completely uncoordinated care.