Friday, November 22, 2013

How would you use the safety maturity index?

Earlier this week news about a safety maturity index (SMI) appeared on ControlGlobal. The SMI is calculated by a self-assessment tool (currently in beta) provided by Rockwell Automation. The SMI divides companies into four maturity levels:

  1. Companies for whom the driver for safety is minimizing investments.
  2. Companies for whom the driver for safety is attaining compliance.
  3. Companies for whom the driver for safety is cost avoidance.
  4. Companies for whom the driver for safety is operational excellence.
My first reaction was: Do things actually improve when going from level 1 to 2 to 3 to 4? Then: what is the difference between minimizing investments and cost avoidance? And finally: what does operational excellence in safety mean?

Then I started speculating: who would calculate this index, and why? How would they use it? Which industries would use it? Discrete manufacturing? Oil and gas? Refining and chemicals? Fine chemicals and pharmaceuticals?

Generally I am quite skeptical about indices, and I wonder what this new index brings to the table that could not be achieved with existing sustainability indices, or with safety goal setting such as the Dow Chemical Company did in 1985 and 1995, before the safety goals were changed to sustainability goals in 2005 (I hope they continue with those in 2015).

Thursday, November 14, 2013

Is automation the solution to all problems?

I don't think so! Although from B. Lipták's recent blog "Automation can prevent the next BP spill" one is likely to get that impression. The blog correctly identifies the switch from high density cement to low density cement as a cause in the development of this disaster. It also correctly identifies that there was a rush to complete the cementing and sealing of the well. However, the description of the last minutes before the explosion and fire leaves something to be desired.

According to B. Lipták's blog the sequence of events was:

  1. Around 9:40 PM a jolt was felt on the bridge.
  2. Rig shaking followed.
  3. Alarms activated when the most dangerous level of combustible gas intrusion was detected.
  4. Electricity not turned off.
  5. Gas exploded.
  6. Oil and concrete blown out of the well and ignited on deck.
Now according to BP's Accident Investigation Report the sequence of events was:

  1. At 9:40 PM mud overflowed from the flow-line onto the rig floor.
  2. Approximately 1 minute later mud shot up through the derrick.
  3. Diverter closed and mud routed to mud gas separator and BOP activated.
  4. At 9:42 PM a support ship was advised to move away from the platform.
  5. Over a 5 minute period drill pipe pressure increased from 338 psi to 1200 psi.
  6. At 9:44 PM mud and water escaped through the mud gas separator vents.
  7. At 9:46 PM gas hissing noise heard and high pressure gas discharged towards deck.
  8. At 9:47 PM first gas alarm sounded, vibration felt, drill pipe pressure rapidly increased from 1200 psi to 5730 psi.
  9. At 9:48 PM the main power generation engines went into overspeed.
  10. At 9:49 PM the rig lost power, and transmission of real time data was lost.
  11. Approximately 5 seconds after the power loss the first explosion occurred.
  12. Approximately 10 seconds after the first explosion a second explosion occurred.
  13. At 9:52 PM Mayday issued by Deepwater Horizon.

There are several differences between these two timelines. Firstly, no jolt or vibration was felt around 9:40 PM; according to BP, vibration was first felt at 9:47 PM. Secondly, the BOP was activated, and thirdly, power was lost prior to the first explosion. The BOP is designed exactly to deal with a situation like the one experienced by the Macondo well. It is a very large - about 3 floors tall - and complex valve, which is placed on the sea bottom to shut off the flow of anything from the well. The BOP failed to do what it was designed to do, and this has subsequently led to improvements in BOP design.

B. Lipták advocates automating the control of the density of the cement used to seal the well. He further advocates the use of redundant reliable sensors and smart annunciators to respond automatically. This is exactly what the BOP is designed to do. But the BOP failed. And how many sensors exist which work reliably at the temperatures and pressures at the end of a drill pipe?

I am not an expert on BOPs, but I believe that they are tested extensively before they are placed on the sea bottom, and that they also undergo sub-system testing after installation. However, since the BOP is designed to cut across the drill pipe, testing it in place is not something which is desirable to do. And by its design intent the BOP also disengages the rig from the well. At least to me it is not clear exactly what additional automation can do to prevent something from not working as designed.

Saturday, November 09, 2013

More openness about injuries - and hopefully incidents

Yesterday ABC News reported that US OSHA plans to make workplace safety reports from large companies public. I think that is good news. Workers should be able to investigate the safety performance of a potential new employer prior to seeking employment. I actually believe that this will help reduce the number and severity of workplace safety events, since the company's image will now be impacted.

However, given that many events actually happen in companies with just a few employees, I would lower the electronic reporting threshold from the suggested 250 to 50 or 25. Currently the Danish government is implementing electronic communication with more than 600,000 companies and other organisations here. This will significantly reduce the government's communication costs. Here the electronic communication and reporting is rolled out to even the smallest of companies. When I had an active consulting company a few years ago, I was required to submit quarterly reports electronically for my one-person company.

I would also like to see electronic reporting of all process safety events, so the public gets access to the actual event information and not just derived statistics from the American Chemistry Council.

What do you think about these ideas?

Thursday, October 03, 2013

Would you implement adaptive control on your process?

Today I attended a half-day seminar at the Technical University of Denmark on L1 Adaptive Control Theory and Applications. The featured keynote speaker was professor Naira Hovakimyan from the University of Illinois at Urbana-Champaign. (In my younger days I was actually accepted into the Ph.D. program at that institution, but a week earlier I had already accepted an invitation to study at the University of Alberta in Edmonton, Canada. However, I do recall that the application required me to describe the content of each course I had taken at the Technical University of Denmark.)

Dr. Hovakimyan's keynote was titled "L1 Adaptive Control and Its Transition to Practice", and she started by mentioning the crash of the X-15 in 1967, which according to Dr. Hovakimyan was found to be caused by a fault in the plane's adaptive controller. Then she mentioned the successful flight of the X-36 exactly 30 years later, also using an adaptive controller. Dr. Hovakimyan's group has successfully used L1 adaptive controllers on 5.5% scaled airplane models, and plans are in place to go to 15% scaled models next year. From the Q&A after the talk it became very clear that the success was to a large degree due to the practical talent of the pilots and students working on the system. There is a large difference between the theory of adaptive control and the practice of adaptive control. This is something I also experienced when implementing a Kalman filter on an industrial polyethylene reactor in the early eighties, in my first industry job.

After the keynote Jussi Hermansen gave a practical demonstration of the capabilities of the L1 adaptive flight controller on a small quad-copter. The L1 controller was able to stabilize the quad-copter during rapid (full thrust) ascent and descent, even under the windy conditions at DTU this afternoon. Jussi even balanced one arm of the quad-copter on a bench while the other three were in the air, and showed the quad-copter hovering just a centimeter above the ground. The wind conditions today could be considered equivalent to large disturbances to process plants, so I feel sure we will see L1 adaptive control applications in the process industries. Apparently collaborations with both Statoil and Schlumberger are already ongoing.

The seminar finished with work the Automation and Control group at DTU has been doing with the Danish Navy on control of an unmanned power boat for towing shooting targets. Dr. Hovakimyan recommends the book she has written for learning L1 adaptive control, but my feeling is that there is more to practice than is in the book. So to be successful with L1 adaptive control the best path is probably to hire one of Dr. Hovakimyan's students.

Tuesday, August 27, 2013

Them versus Us?

In the old days - in the early 80's - I was a control engineer at a large complex manufacturing site in Canada, where staff functions such as process control or minor process changes sat in a separate engineering department, not in operations, and there was a bit of us versus them. However, during a major shake-up in the mid-eighties the organisation was re-aligned, so process control people, instrumentation people and process design people became part of operations groups. The them versus us talk disappeared instantly - almost!

However, in discussions on LinkedIn the question of whether process safety should be a staff function or part of operations appears again and again. Of course in large corporations there will be a need for experts, for example in process safety, who serve the worldwide business. But even they are probably best located at a plant site, close to operations and other resources, and not in a corporate head office. Even in the largest of today's corporations the need for such corporate-wide specialists - or problem solvers - is probably limited to less than a handful of people. This makes it a challenge to pick those persons and ensure they get the experience to become the worldwide process safety expert one day.

Such experts could in my view not function without being close to operations. If they tried to, they would rely too much on theory and too little on practice. Experts are experts because they know how and when to combine theory and practice. Therefore it was a good thing when our site engineering department disappeared in the mid-eighties and all engineers became part of operations. After the re-organisation the handful of control engineers on the site still had a need to talk with each other about process control problems and to socialize together. We did that by introducing mentors and by having lunch together each Friday.

Unfortunately, since then another area with a them versus us attitude has been allowed to appear and grow in many corporations. I am thinking about the IT group. Lately I have been attending a number of one-day events or conferences about particular aspects of IT, e.g. big data, cloud computing, cyber security etc.

It has been a striking experience how much time is spent talking about IT and the LOB (line of business) - i.e. us versus them! I simply don't understand why IT doesn't see itself as part of the business, just like accounting or logistics. It is my impression that IT people talk too much to each other and too little to others. Where this is less so, innovation appears to thrive.

Maybe it would be good for business if IT people were closer to their customers within the organization and a bit further from other IT-people. But what do you think?

By the way, today I attended a seminar which reminded me how costly it can be not to pay attention to cyber security. In April 2011 Sony's PlayStation customer database was hacked using a known SQL vulnerability. The cost to the company was half a billion dollars - equivalent to the cost of some of the largest process safety events. A few months later in 2011 the Dutch company DigiNotar lost its most valuable asset: customer trust! Three months later the company was bankrupt. But I believe it is easier to develop a security culture by having IT people integrated with their users than by having an IT group or department attempt to communicate the culture message to other departments.

Sunday, August 11, 2013

Do you have housekeeping issues at your site?

Everyone knows that bad housekeeping can lead to incidents or near miss events. But it is often difficult to communicate this fact so it is understood. Here is a video made by Mafell for Holzfachmarkt Gerschwitz. The company apparently sells power-tools. If you are discussing near misses at a safety meeting, consider showing this video - it is less than a minute long. Here it is:

Sunday, July 28, 2013

Improved safety through cross learning from other fields?

This past week WebMD featured a story headlined "Tech Mishaps Behind 1 in 4 Operating Room Errors". The surprising conclusion of the researchers is that a "surgical equipment checklist could cut the errors in half". Maybe there should be operating manuals for operating theaters, just like there are for process plants or airplanes. Maybe even an operating engineer to look after the equipment, while the doctors take care of the patient?

In airplanes and process plants checklists are commonly used before a major event such as a take-off or a start-up. It is not surprising, then, that this tool could also be of value in the highly technical operating room environment. I actually believe that such cross-field learning is the best road to improved process safety. What do you think?

Saturday, July 27, 2013

Profits before safety and image

Yesterday the Financial Times reported "Halliburton to plead guilty over Gulf spill", and that the company had to pay the maximum fine permitted under US law: US$ 200,000. That is indeed an insignificant sum for a company the size of Halliburton. Probably the amount is less than the minimum rent for a platform such as Deepwater Horizon.
Image of Deepwater Horizon fire from Wikimedia Commons.

This in my view is yet another example of a company which puts profits above safety and image. The Exxon Valdez event in Prince William Sound near Alaska almost 25 years ago gave Exxon Corporation a lesson or two, which changed the company forever. Few people recall that Exxon took a line of credit of $4.8 billion from J.P. Morgan to protect itself. Later the company introduced the Operations Integrity Management System (OIMS) to help ensure that the company would not again have to deal with a disaster like the one in Prince William Sound. Maybe this, and the ability to quickly decide to write off the cost of a deepwater well such as Blackbeard, is why ExxonMobil today is still involved in deepwater drilling in different parts of the world.

I don't know if the events in 1989 near Alaska led to any change on the Exxon board, but at least we know that they led to action on the part of the company across all its divisions and affiliates. I just hope that Halliburton's board also takes the small fine from the US Department of Justice as a wake-up call, and ensures the company takes steps so the world will not experience another Deepwater Horizon, even as drilling progresses to even more challenging areas than the Gulf of Mexico, e.g. the areas off the east coast of Greenland. But that will just save a single company.

However, changing the public image of an industry like drilling and exploration cannot be done by a single company. Industry-wide action similar to the chemical industry's Responsible Care program is in my opinion needed. In my view the priority should be safety, image, profits - in that order.

Thursday, June 27, 2013

How many different problems can you handle in a workday?

I currently have some problems with my spine. A month ago it was a pain just to sit down or to stand up. Then I went to a chiropractor. Now, after half a dozen visits, I am much better. Each visit is rather short, and it appears he schedules patients at 15 or 20 minute intervals. This means that during a normal workday he sees between 20 and 30 patients, or 3-4 patients per hour. So each hour the chiropractor deals with 3-4 unique problems. Why am I concerned with this? And what is the relation to process safety?

In recent years the process safety community has seen a new version of the EEMUA guide on alarm design (EEMUA 191) and also a new ISA standard on alarms in the process industries (ISA 18.2). Both of these state that the maximum number of alarms an operator should handle should be less than 300 alarms per day, and that an acceptable number would be 150 alarms per day. That is equivalent to a maximum of about 12 alarms per hour and an acceptable number of about 6 alarms per hour. This means each hour the operator must solve 6 unique plant problems.

Now that we have the numbers, let us compare. The workload of the plant operator, in terms of unique problems to solve, is 50-100% higher than that of the chiropractor. Do you think that is sustainable over a whole shift, or over several shifts? I don't!

The EEMUA guidelines and the ISA 18.2 standard ask operators to handle one new problem every 10 minutes. I don't think process engineers can cope with this type of workload. So why do we believe that operators can?
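
To make the arithmetic explicit, here is a minimal sketch in Python; the daily alarm budgets are the ones quoted above, the rest is simple division:

```python
# Convert the daily alarm budgets from EEMUA 191 / ISA 18.2 into hourly
# rates and compare with the chiropractor's 3-4 unique problems per hour.

HOURS_PER_DAY = 24

def alarms_per_hour(alarms_per_day):
    """Average rate, assuming alarms are spread evenly over the day."""
    return alarms_per_day / HOURS_PER_DAY

for label, per_day in [("maximum", 300), ("acceptable", 150)]:
    rate = alarms_per_hour(per_day)
    print(f"{label}: {per_day}/day = {rate:.1f} alarms/hour, "
          f"one every {60 / rate:.0f} minutes")

# Output: maximum: 300/day = 12.5 alarms/hour, one every 5 minutes
#         acceptable: 150/day = 6.2 alarms/hour, one every 10 minutes
# Even the 'acceptable' budget is 50-100% above the chiropractor's load.
```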

My first process plant experience was with an integrated oil company in North America. The control room was designed to be a quiet place for easy communication between the board operator, instrument technicians and field operators. The shift supervisor had a separate office. The logging printers were placed in a separate room. The only sound in the control room which was not communication to or from the board operator was the sound of the coffee machine. In this environment the operator, as far as I recall, handled less than two alarms per hour!

At this plant a Honeywell PMX II process control computer was used together with TDC 2000. This made for very easy implementation of alarms, both on computer control application points and on TDC 2000 image points. So the low number of alarms was not due to difficulty of implementation. The low number was due to management! No alarm on a TDC point was implemented without the process engineer specifying the required operator action. That turned out to be a very effective filter.

Now, that was 30 years ago or so. Since then there have been control room consolidation projects, control computer modernization projects, and many other projects. I believe the control room also went from one board operator to two board operators during a normal shift. However, I don't believe the filter on alarm implementation has changed. - Oh, yes. These days a younger person has taken over.

Key to achieving the low number of alarms is in my view a well-functioning system of primary control loops (flow, temperature, level) and secondary or supervisory control loops (quality, production) using the best available technology.
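
To illustrate the loop structure I have in mind, here is a minimal sketch in Python of a slow supervisory (quality) loop trimming the setpoint of a fast primary (flow) loop. The names, gains, sample times and stubbed measurements are all made up for illustration, not taken from any real system:

```python
# Cascade sketch: a slow supervisory (quality) loop computes the setpoint
# for a fast primary (flow) loop, which drives the valve.

class PI:
    """Textbook proportional-integral controller."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# Stub I/O - in a real system these would talk to the DCS.
def read_quality():
    return 94.2   # e.g. product purity in percent

def read_flow():
    return 9.8    # e.g. reflux flow in t/h

def write_valve(output):
    pass          # here the output would be sent to the valve via the DCS

quality_loop = PI(kp=2.0, ki=0.002)   # slow supervisory loop, runs once a minute
flow_loop = PI(kp=0.8, ki=0.2)        # fast primary loop, runs once a second

QUALITY_SP, BASE_FLOW_SP = 95.0, 10.0
flow_sp = BASE_FLOW_SP
for t in range(180):                  # three minutes in 1 s steps
    if t % 60 == 0:                   # supervisory loop trims the flow setpoint
        flow_sp = BASE_FLOW_SP + quality_loop.update(QUALITY_SP, read_quality(), dt=60.0)
    write_valve(flow_loop.update(flow_sp, read_flow(), dt=1.0))

print(f"flow setpoint after supervisory trim: {flow_sp:.2f} t/h")
```

The point of the cascade is that most disturbances are caught and corrected by the fast primary loop long before they grow into something alarm-worthy, so the operator only sees the few upsets the control system genuinely cannot handle.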

Tuesday, June 11, 2013

Would you design a storage terminal using a checklist?

In the March issue of Hydrocarbon Processing Vinod Ramnath, who works for Aker Solutions in India, published an article titled "Key aspects of design and operation safety in offsite storage terminals". My first reaction was: why distinguish between offsite storage terminals and other storage terminals? The design and operational safety issues should be the same - except possibly for the presence of in-house emergency responders at an onsite facility.

The article is 2½ pages long and starts with an introduction which mentions two major terminal accidents from the last 10 years: Buncefield in the United Kingdom on December 11th, 2005 and Jaipur in India on October 29th, 2009. The fire at Buncefield injured 40 people and burned for several days, but fortunately killed no one. It was headline news on TV stations across Europe, and the results of the investigation are still available on the Buncefield Investigation homepage thanks to the UK government. The fire at Jaipur killed 12 people, injured more than 200 and burned for more than a week, and unfortunately all we have is a Wikipedia page. It was not mentioned on the TV stations in my country of residence.

After the accident descriptions the rest of the article consists of should-do lists for
  • terminal design - prevention layers (that is a new term to me - what does it mean?)
  • basic control systems
  • alarms
  • safety instrumented systems
  • embankment
  • emergency responses
However, as far as I can see these lists are very general and are pertinent to any storage facility for hydrocarbons. They are a good starting point for thinking about the design of a storage facility, but I don't think they are key aspects of such a design, nor are they key aspects of offsite design. So I really don't know the purpose of an article such as this one.

However, if each item on each one of the lists were supplemented with references to the standard or standards relevant for that particular item, then indeed it would be a very valuable tool for the design of any storage facility.
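
As a sketch of what I have in mind - the structure is trivial, and the standards attached to each item below are my own illustrative suggestions, not taken from the article:

```python
# A design checklist where every should-do item carries references to the
# standards relevant for it. The items mirror the article's lists; the
# mapped standards are my own illustrative suggestions.

checklist = {
    "terminal design - prevention layers": ["API 2610", "NFPA 30"],
    "basic control systems":               ["API 2350 (overfill protection)"],
    "alarms":                              ["ISA 18.2", "EEMUA 191"],
    "safety instrumented systems":         ["IEC 61511 / ISA 84"],
    "embankment":                          ["NFPA 30 (diking)", "local regulations"],
    "emergency responses":                 ["NFPA 600", "local regulations"],
}

for item, standards in checklist.items():
    print(f"{item}: see {', '.join(standards)}")
```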

Sunday, April 21, 2013

Real time control squared!

Most engineers know what real time control is. For those few who have forgotten, click here to get Wikipedia's definition. But what is real time control squared? That is the term I use to describe the integration of new data sources into the real time control system. This could for example be weather data used to optimize plant performance. In the IT world such applications go under labels such as 'big data', 'business analytics' or 'business intelligence'. Recently Vestas - the wind turbine manufacturer - described an application they use to schedule maintenance of wind turbines so that the lost power production is minimized. This application considered performance data from the wind turbines, weather data for the area, and the expected duration of the work.

From running our process plants we know that a summer thunderstorm has a significant impact on e.g. distillation towers. Suddenly the heat loss to the environment changes from metal-to-air heat transfer to metal-to-liquid - i.e. rain water - heat transfer. This can be a significant disturbance. Usually weather forecasters can alert us to such a thunderstorm some time before it impacts the facility. This information could be used to adjust the real time control system ahead of the storm. With the pressure sensors in modern smartphones, one could even envision locally run weather models based on onsite temperature measurements and pressure readings from co-workers' smartphones.
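
As a sketch of the idea - the forecast interface, the column data and the heat transfer figure are all assumptions for illustration, not real design values:

```python
# Feedforward sketch: raise the reboiler duty setpoint ahead of forecast
# rain, instead of waiting for the temperature controller to react after
# the column shell is already wet.

def extra_heat_loss_kw(rain_mm_per_h, shell_area_m2):
    """Rough extra heat loss when the shell goes from air- to rain-cooled."""
    LOSS_PER_MM_RAIN = 0.05  # kW per m2 per (mm/h) of rain - a made-up figure
    return LOSS_PER_MM_RAIN * shell_area_m2 * rain_mm_per_h

def reboiler_duty_setpoint(base_duty_kw, forecast, shell_area_m2=250.0):
    """Add feedforward when rain is expected within the next half hour."""
    if forecast["minutes_to_rain"] <= 30:
        return base_duty_kw + extra_heat_loss_kw(forecast["rain_mm_per_h"],
                                                 shell_area_m2)
    return base_duty_kw

# A forecast record as it might arrive from a weather data feed.
forecast = {"minutes_to_rain": 20, "rain_mm_per_h": 10.0}
print(reboiler_duty_setpoint(base_duty_kw=5000.0, forecast=forecast))
# -> 5125.0 kW: the column is pre-heated before the storm hits.
```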

This is just one example. Other more complex examples easily come to mind. One could be run length optimization for cracking furnaces to avoid calling in maintenance personnel during the night shift - although some former colleagues of mine consider furnace run length unpredictable. As control engineers we need to think about the missing information in our control applications, and how this information could be provided.

Saturday, April 20, 2013

Two tragic events within a week! - and a similarity

This week the world experienced two tragic events. One was clearly terrorism, and the other at the moment appears to be what most of us would call an accident.
The first was of course the Boston Marathon bombings last Monday, and the other was the fire and explosion at the West fertilizer plant in Texas. Both events killed innocent people and injured many more.
In Boston 3 people were killed in the initial explosions and one more later during the hunt for the terrorists. More than one hundred suffered injuries - and many of these lost one or more limbs. Strangely enough, thanks to medical developments resulting from the war against terrorism in Afghanistan, these people are better off today than they would have been without those developments.
In the town of West the fatality count is currently at 14, but many are still unaccounted for. More than 200 suffered injuries from the explosion. The numbers of fatalities and injuries are what you will read in Wikipedia and on other sites describing current events.
Although the two events are very different, in one aspect they are similar. Hidden behind the persons killed or injured is a much larger number: the number of people suffering the loss of a loved one - a father, a mother, a son, a daughter, another relative or a good friend. That number is 10, 25 or maybe 100 times the number of fatalities and injuries. These people are also injured, but their injuries are not visible.
Next time you spend time considering the security of a major public event, or participate in a HAZOP study of an existing facility, an expansion to an existing facility or a completely new facility, then think about all the invisible injuries that will not occur if you do a good job!

Tuesday, April 16, 2013

Integrated Operations and Google Glass

If you are already into Integrated Operations (IO), or if you are considering implementing IO, then you definitely should get your hands on a pair of Google Glasses. Google has just made the specs public, and one of the features is real time HD (720p) video of whatever you are looking at.

IO is the idea of integrating experts on a site, e.g. a drilling platform or a remote plant, with experts at another location, e.g. corporate headquarters or a vendor's office. IO simply cuts out the time needed to get the expert to the site, and hence significantly reduces problem solving time.
Photo of Google Glass. The round dot is the camera.

One of the elements of most IO implementations is real time video from the remote site to the relevant experts somewhere else. This video could of course be recorded with a standard portable video camera. But that requires at least one hand to operate the camera. With Google Glass the camera has moved into the glasses, freeing up both hands for other tasks.

To get an idea of what you can do with Google Glass, take a look at the video of the opening keynote at Google I/O 2012. About 1 hour and 30 minutes into the video you will see what skydivers see when they jump from a blimp above the Moscone Center in San Francisco. Currently Google Glass is not generally available, and it is rather expensive - about US$ 1,500. Nonetheless I think you should take a look at what you can do with it here. Then start thinking about how this technology could improve your IO.

Tuesday, January 22, 2013

Ethics in oil companies and in drug companies

Image is important to both oil companies and drug companies. The image of a company can be quickly damaged by the reaction of the public, as in the story about the News of the World, which was developing as I started writing this; by investigative journalism, such as the story in a Danish tabloid about GE Healthcare's Omniscan, which can cause the rare but deadly disease NSF, or nephrogenic systemic fibrosis, in patients with weakened kidneys (use Google Translate if you want to read the Danish article); or by unfortunate bad luck, as in the case of BP's deepwater well in the Gulf of Mexico, where the shears hammered into a solid connection between pipe sections instead of cutting the pipe.

Image is preserved by doing, and being seen to do, the ethically right things in the eyes of the public, which includes shareholders and customers as well as regulators in government or local authorities.

In the case of the drug company it is alleged that GE Healthcare withheld information from doctors testing their drug for use on patients with weakened kidneys. The withheld information is alleged to include experiments in 1989, published in Investigative Radiology, in which rats became ill from the drug; unpublished experiments in 1992 with rabbits with similar results; and experiments by the Belgian chemistry professor Robert Müller in 1994 using the active ingredient in Omniscan, gadolinium. Maybe the harm to GE Healthcare from the ongoing debate about this issue in Denmark will be minor. Only the future will tell.

However, in 1989 Exxon experienced what is probably the worst environmental disaster in our lifetime. The tanker Exxon Valdez ran aground on Bligh Reef in Prince William Sound shortly after leaving Alaska for Long Beach in California. It has taken years for nature in the cold Alaskan waters to recover from the huge spill. Exxon's board reacted. OIMS - the Operations Integrity Management System - was introduced. And more than 20 years later OIMS is still hard at work in Exxon. Maybe the built-in problem escalation procedures in OIMS are the reason Exxon did not experience, years earlier, an event like the blow-out of the deepwater well BP experienced in 2010!

I actually begin to think the difference between the leaders in process safety and the followers lies entirely in the ethical approach to running the company.

Sunday, January 20, 2013

Eliminate producing to storage!

In the June 2012 issue of Hydrocarbon Processing was an excellent article titled "The Bhopal disaster", providing another view of this tragic event, which cost the lives of thousands of innocent people around the chemical plant. Ever since reading the article I have wanted to write a note to the editors of Hydrocarbon Processing, but apparently this proud magazine doesn't engage with its readers. At least I have not found a letters-to-the-editor section in the magazine, or even an e-mail address for the editors. Instead I have decided to publish my thoughts here. At the time of the event I was a relatively young process control engineer at a petrochemical facility in Sarnia, Ontario, and even though the disaster happened on the other side of the world we were very much influenced by it.

Can it happen in our backyard?

One of the questions my colleagues and I discussed in the days and weeks after the event was: Can this happen in our plant? It can! All it takes is one single management decision to reduce process safety. At Bhopal that management decision was to continue the production of MIC while the MIC-consuming process was not running. Why was this decision made? There were of course many other management decisions that contributed to the scale of the disaster, such as the decision to shut down the cooling system for the MIC storage tanks and the maintenance work initiated on other pieces of equipment meant to mitigate a release.

Trevor Kletz has many times stated that what you don't have can't hurt you - or anyone. I interpret this to mean that you should not produce hazardous intermediates such as MIC to storage just because the storage is there. At Bhopal the purpose of the two large horizontal storage cylinders was originally not to store locally produced MIC. It was to store MIC imported from the UCC plant at Institute, West Virginia. The huge storage capacity was designed with disruption of supply in mind. The lesson to be learned from this is: if a facility is no longer needed, then shut it down and remove it. And when you design flexibility into a plant by providing storage of hazardous intermediate materials, then weigh that flexibility against the risk of a major process safety event involving the hazardous material, compared with simply restarting the producing unit whenever the consuming unit stops consuming, for whatever reason.

A close call in Europe

Europe had a little-known close call with a similar disaster during the flooding of major parts of central Europe in the summer of 2002. During the flooding a prior decision to produce chlorine to storage was a contributing factor in a release of about 80 tons of chlorine. The partly filled chlorine tanks were not designed for the buoyancy created when more than 10 meters of water covered them. One of the tanks was torn off its foundation, and the buoyancy ripped apart the piping to and from the tank. The heavy chlorine gas spread along the ground, but lifted due to dilution a short distance from a population center, as evidenced by pictures of bushes that were red on one side due to chlorine exposure and normal green on the other. The chlorine-consuming part of the plant had been shut down earlier that year after an explosion, but management decided to continue producing chlorine to storage.

Time for a new strategy

Maybe the chemical engineering community should start adopting the strategy used by Dow Chemical at their toluene diisocyanate plant at Freeport, which uses phosgene as an intermediate: eliminate intermediate storage of hazardous materials consumed in the same facility. DuPont took the same path already in 1984, when management decided not to start up a new facility which involved intermediate storage of MIC. The MIC storage was eliminated, and the facility later started up successfully. In contrast, the former UCC facility at Institute produced MIC until a few years ago.

So in my view, until the decisions at the management level are corrected, the decisions made by the engineers in the plant will have only limited impact on the process safety performance of chemical plants worldwide.

Tuesday, January 01, 2013

Can complex fire fighting be reduced to just 3 things?

You know how, with time, you come to trust certain publications more than others. Then it is almost unbearable when that trust is broken. For a long time I have considered Hydrocarbon Processing a good and reliable source of information about anything relating to the hydrocarbon processing industries, and of course I am particularly interested in articles about plant safety. However, when reading an article in the November issue I became quite disappointed by the effort of the editor and author of the HP Special Report "Keep it simple: Three key elements to fighting complex flammable liquid fires".

The author of the article goes to great lengths not to reveal details about the event, although he does state that the fire occurred at a refinery in Baton Rouge, Louisiana in December 1989, and that an 8-inch product line was involved. A quick googling with the search terms "fire baton rouge december 1989" gave a results page on which the first six results indicated that the event behind the HP article is the December 24th, 1989 explosion and fire at Exxon's (now ExxonMobil's) Baton Rouge refinery. Even though the HP article contains numerous details, such as the amount of hydrocarbons released, the over-pressure from the initial explosion, and what the fire chief of the refinery was doing at the time of the explosion, no references are given. Also the author's relation to the event is unclear, but I guess he works for a company selling foam concentrate.
For a good description of what happened in Baton Rouge on December 24th, 1989 I would recommend the article "Bad Santa" in Industrial Fire World, based on a presentation at a conference in March 2006 by Jerry Craft, who was fire chief at Exxon's Baton Rouge refinery at the time of the event; the article "Tale of Two Cities: Baton Rouge, 1989 & Buncefield, 2005" from the website of Williams Fire; the New York Times news story from December 25th, 1989; and the description in Roy E. Sanders' book "Chemical Process Safety: Learning From Case Histories".

Now back to the HP article. The first sub-headline is "Power loss leads to fire". This appears not to be the case based on the 2006 conference presentation published in IFW. Neither does the New York Times article from Christmas Day 1989 mention anything about a power outage prior to the explosion and fire, nor does Roy E. Sanders in his short description of the event. However, according to the 2006 conference presentation, the explosion of the vapor cloud did knock out power at the refinery and several other process plants in the area. The author continues: "Due to abnormal freezing temperatures, this power outage caused the facility systems to go into fail-safe mode". The abnormal freezing temperatures had nothing to do with going into fail-safe mode; the power failure AFTER the explosion caused the refinery to go into fail-safe mode. It is unfortunate when such erroneous cause-effect relations are stated in an article without any source references.
Only two places in the article refer to three key elements. The first is the introduction, where according to the author the elements are 1) high quality foam concentrate, 2) simplistic equipment, and 3) deploying the right fire fighting method. I expected to read more about these elements in the article, but I have failed to find that.
In the last subsection, "Post-incident analysis", I am presented with some rather general statements about such analysis and its virtue, before the three key elements - simple equipment, high-quality foam concentrate and a good understanding of the hazardous situation - are repeated. However, fighting the Baton Rouge refinery fire successfully required complex resources: aerial images from helicopters to create an overview of the many fires, creativity in connecting a 20-inch water supply to a 2-inch hose, creativity in laying out hoses, creativity in keeping the fires at bay until a foam attack was possible, and several high-powered pumps to apply water and foam. Simple it was not!
The second to last subsection is "Training is key", and I can only say "Yes, of course!". Having been associated with the petrochemical industry in Sarnia's Chemical Valley I learned firsthand how training together on major simulated events, such as a fully involved tank fire, was key to working together in a real event. However, also here the author seems to attempt to hide or obscure information. For example "the refinery fire department" has become "the team permanently situated at the refinery", and "emergency situation" has become "dynamic emergency situation". Is there such a thing as a "non-dynamic emergency situation"? Similarly "The team that responded", when it is well known from publicly available sources that Williams Fire and Hazard Control was CALLED IN to assist with the fire fighting and especially the foam attack. Or "The potential of a fire on the magnitude of the 1989 Louisiana refinery blaze", when "The likelihood of a fire on the magnitude of the 1989 Louisiana refinery blaze" is probably what was meant.

Since the initial release in the Christmas Eve 1989 event was caused by the rupture of an 8-inch product line, I am surprised that none of the sources mention anything about liquid relief valves. Could it be that the pipeline designers had not expected temperatures as low as those experienced in December 1989 for extended periods of time, and therefore assumed the content of the pipeline would always be gas?

The problem with this HP Special Report in the area of Plant Safety and Environment is that it attempts to simplify the fighting of a huge, complex fire such as the one at Exxon's Baton Rouge refinery on Christmas Eve 1989 to just three elements. The idea of reducing fighting complex flammable liquid fires to just three elements is a good one. However, if I were to select the three elements, they would be: 1) full scale exercises involving all potentially involved teams at least twice a year - once at the area level involving a whole-site event and the whole community, and once at the plant level involving a unit event and the whole site with mutual aid assistance; 2) table top exercises three to four times a year involving all fire chiefs at area plants, focusing on strategies of cooperation and attack; 3) ensuring the area - together - has the necessary equipment and other resources, e.g. foam concentrate in sufficient amounts, real time area views of emergencies, or movement of rail cars; and 4) a system to keep track of what each member of each team is doing, to facilitate personnel exchange during the event. Oops! That was four!

If one wants to sell more foam concentrate, then maybe it would be a good idea to write an article about the properties of different types of foam concentrate and the equipment needed to mix and apply it, together with video clips of how efficient foam is at quenching a hydrocarbon fire. If your company handles hydrocarbons, then you must make certain you have the right foam concentrate and equipment. For example, not all types of foam are equally useful on fires involving alcohols.

I hope everyone will have a safe 2013, and that some of us will meet at the 2013 Loss Prevention Symposium in Florence, Italy in May.