Monday, November 21, 2016

What is the problem with much of the engineering literature?

With the increasing pressure on academics to publish and the many journals available for such publication, I have noticed that many academic publications show clear evidence of a lack of industrial experience among the writers. The result, in my view, is that many academic articles don't have a clear focus on the potential reader.

The following quote is from an Elsevier peer reviewed publication:
"In a large process plant, there may be as many as 1500 process variables observed every few seconds leading to information overload."
You may ask, what is wrong here? The writers appear to lack a) an understanding of how a DCS works, and b) an appreciation of the difference between logging data and observing them. It is true that a modern DCS or SCADA system may have 1500 process variables which are logged every few seconds. However, even at very complex facilities the operator usually has fewer than a dozen key process variables which he or she monitors continuously. So, in my view, the statement that the number of variables entering the DCS or SCADA leads to information overload is incorrect and misleading. I think the reviewer should have asked the authors to at least modify this sentence in the introduction. Unfortunately most reviewers don't go into such detail. I think this is a major quality issue with the current system of academic publication.

This illustrates very well a problem with the current engineering literature in the sciences: the readers and the authors see things from different perspectives. By Chylld - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=13397795
I have, however, noticed that the introductions to articles in the medical literature are often much more precise and to the point. In the engineering literature I find that most introductions are loose, essay-like motivations for the work, written after the work was performed.

Here is a second quote from the same publication:

"Using MFM to model the plant all that is needed is a basic understanding of chemical unit operations, their purposes and the fundamentals on which these purposes are built, i.e. transport phenomena, thermodynamics and kinetics. This means that functional HAZOP study may be performed by less experienced personnel."
Unfortunately one of my own publications is quoted to support this statement. The major problem in this quote is the assumption that if you understand chemical unit operations, then you also understand how they may fail, which is what is needed for a HAZOP study. But the operational principles of chemical unit operations are much easier to grasp than the failure modes of even a single unit operation, e.g. a distillation tower. In my article we only claimed that a less experienced engineer could help with the pre-meeting tasks if the plant was divided along functional lines. A distillation column would, for example, be divided into a reflux loop, a reboiler loop, a feed section and two separation sections.

Another area where much engineering literature fails is in comparing the presented approach or methodology to other approaches or methodologies for handling the same problem. This naturally leads to weak conclusions, such as the following:
"The results show the strength of this approach and can be considered as a useful strategy for dealing with complex chemical processes."
However, the article contains no quantification of "the strength of this approach" or of how "useful" the strategy is compared to what is already being done in practice. The result is that practicing engineers, whether in design, operations or process control, are very reluctant to adopt new methodologies. This means that progress from academic research in engineering to engineering practice is slow.

The question is whether we can ever get the readers and the authors on the same level. By Original image by Algr. Recreated, fixed isometric projection and vectorised by Icey. - Own work. This vector image was created with Inkscape., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1061744

So how do we get academic writers to focus more on their potential readers - their audience? The current system gives authors no credit for their number of readers. However, with the increasing number of open access online journals, it should be relatively easy for publishers to monitor which articles are read more than others, and this information could then be fed back into the academic merit system. Currently such information about readers is only available through independent portals such as ResearchGate and others.

Friday, October 21, 2016

Why do you have too many alarms?

These days it happens several times every month that I receive emails from ABB or GE about their offerings in the area of SCADA systems. I have started wondering if the decision about which SCADA system to buy has moved from the engineers and operators, who will use it in their day-to-day work, to the supply department - just like buying paper for the photocopier/printer or coffee for the office coffee machine. The latest email from GE had a link to their so-called blog shown below:
This made me think: why is it that we have too many alarms in our chemical plants and refineries today?

When I worked at a major facility in Sarnia many years ago, the process engineers in charge of the plant had an alarm policy which, among other things, stated that if you implemented an alarm, you needed to specify what action you wanted the operator to take when that alarm came in. The result was that our Honeywell process control system generated only 2-3 alarms per hour on most days, and that it took the process engineer just a few minutes to scan the alarm log from the night shift.

To be fair to today's plant and control engineers, we were also two control engineers whose job it was to ensure that the process control system actually controlled the plant most of the time. Great efforts were made to maintain the basic flow, level and temperature control loops as well as the supervisory loops on cracking furnaces and distillation towers. My guess is that today this manpower has been significantly reduced, and hence the basic and supervisory control loops are not as well tuned as 30 years ago. That is a shame, because without a well tuned set of basic control loops, any attempt to implement unit-wide model based control, such as MPC, to operate the facility as close to a constraint as possible will most likely fail.
Because of our well tuned supervisory control loops, the facility did not implement MPC until the mid-nineties - many years after I left. However, when those MPCs were finally implemented, they were equipped with operator displays based on our experience from the supervisory control loops. This meant that the operator decided which constraints the MPC should take into account, and that the operator could see which constraints were active. That transparency made the MPCs a huge success - even with the mentioned manpower reductions.
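The kind of constraint transparency I have in mind can be sketched in a few lines. This is only an illustration - the tag names, limits and tolerance below are invented, not the actual Sarnia or Pembroke configuration:

```python
# Sketch: flag which operator-enabled MPC constraints are currently active,
# i.e. within a small fractional tolerance of their high limit.
# Tag names and limits are hypothetical, for illustration only.

def active_constraints(constraints, measurements, tol=0.01):
    """constraints: {tag: (high_limit, enabled_by_operator)}.
    Returns the enabled tags whose measurement is within tol of the limit."""
    active = []
    for tag, (high_limit, enabled) in constraints.items():
        if not enabled:
            continue  # operator has excluded this constraint from the MPC
        if measurements[tag] >= high_limit * (1.0 - tol):
            active.append(tag)
    return active

constraints = {
    "furnace_tube_skin_T": (1100.0, True),   # degF, operator-enabled
    "column_dP":           (0.80,  True),    # bar
    "reboiler_duty":       (12.5,  False),   # MW, operator-disabled
}
measurements = {"furnace_tube_skin_T": 1095.0, "column_dP": 0.55,
                "reboiler_duty": 12.4}

print(active_constraints(constraints, measurements))
# ['furnace_tube_skin_T']  (within 1% of its limit, so shown as active)
```

A display built on such a function lets the operator see at a glance which constraint the MPC is pushing against, which is what made the approach a success.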

I am certain Alicia Bowers, who wrote the blog for GE, is an intelligent writer, but I don't buy her suggestion that the many alarms can be handled by:
  • Using analysis tools to reduce the number of alarms that occur.
  • Drive response on the alarms that matter.
  • Leverage HMI/SCADA design best practices.
I think a more fundamental look at who and what decides when and where an alarm is implemented is needed, together with a look at the tuning of the basic control loops. After all, most alarms are probably implemented to notify the operator about a deviation which the basic control loop is unable to cope with. So let us go back to basics:
  • Tune your basic control loops well. This can take time. I recall one of our instrument engineers spending several weeks tuning our polyethylene reactor temperature control loop.
  • Implement only alarms for which a specific operator action can be identified. This probably requires input from experienced operators.
When that is done, consider providing the operator with displays that are consistently designed, either according to best practices such as those defined by the Abnormal Situation Management Consortium, or according to a company display design guideline.

And now to the question in the title of this blog post. I think the many alarms that many operators have to cope with are a result of it being too easy to implement alarms in a modern SCADA system. For example, it is not uncommon for the supplier of a pump to request, and get implemented, several dozen alarms on a simple large pump. Most if not all of these alarms are irrelevant to the operator during day-to-day operations, and in my view they should never be implemented as alarms. These pump-related events are relevant for maintenance of the pump, and hence should just be logged to an event file, which the maintenance engineer can then review and act on.
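The two rules - an operator alarm exists only if a specific operator action is documented, and maintenance-only events go to an event file - can be expressed as a simple configuration check. The field names and tags below are invented, not from any real SCADA system:

```python
# Sketch of an alarm rationalization rule: a configured point becomes an
# operator alarm only if a specific operator action is documented; events
# relevant only to maintenance are routed to the maintenance event log;
# anything else is rejected. Field names and tags are hypothetical.

def classify_point(point):
    if point.get("audience") == "maintenance":
        return "maintenance_event_log"   # e.g. vendor's pump condition events
    if point.get("operator_action"):     # a documented action -> real alarm
        return "operator_alarm"
    return "reject"                      # no action defined: do not implement

points = [
    {"tag": "FI-101-LO", "operator_action": "Start spare pump P-101B",
     "audience": "operator"},
    {"tag": "P-101-BRG-VIB", "operator_action": None,
     "audience": "maintenance"},         # bearing vibration, maintenance only
    {"tag": "TI-205-HI", "operator_action": None, "audience": "operator"},
]

for p in points:
    print(p["tag"], "->", classify_point(p))
# FI-101-LO -> operator_alarm
# P-101-BRG-VIB -> maintenance_event_log
# TI-205-HI -> reject
```

A check like this could be the first gate of the MOC review proposed below: no alarm enters the SCADA configuration without passing it.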

One approach to limiting the number of alarms implemented on a SCADA system would be to simply require that all new alarms are subject to an MOC review - even those implemented during a project. After all, most SCADA systems don't come with any alarms preconfigured, so the implementation of an alarm is a change to the SCADA system, which should be subject to MOC review.

Friday, October 14, 2016

Do Electrical Area Classifications Require Detailed Release Calculations? No!

The August issue of Hydrocarbon Processing contains an article titled "Consider post-design changes to confine a hazardous area". From the title it is unclear whether we are looking at toxicity hazards or flammability hazards. However, in the introductory paragraph it becomes clear that the subject is electrical area classification, since it is stated that the objective is to avoid having both an ignition source and a flammable mixture in the same area. The article is written by Sanjay Bapat from Petrokon Utama Sdn Bhd in Brunei, and it contains three sections: Introduction, Analysis of HAC classification, and Recommendations.

Mr. Bapat states that a hazardous area represents the volume of the plant which contains significant quantities of flammable mixture during normal operations, startup or shutdown. This statement is not directly wrong, but in electrical area classification the classification depends on the likelihood of hydrocarbons being present in the area. For example, an area in which hydrocarbons are present all or most of the time during normal operation is classified as Zone 0. Zone 1 areas are those where hydrocarbons could be present during normal operations, and Zone 2 areas are those where hydrocarbons may be present only during abnormal events or operations.

Mr. Bapat proposes to perform numerous release calculations from potential sources, such as flanges, and states that areas are designated Zone 0, 1 or 2 depending on release duration and type of ventilation. He further states that areas are classified to minimize the likelihood of flammable mixtures spreading over ignition sources. According to Mr. Bapat the key steps in area classification are:
  1. Identify the release sources and establish the size.
  2. Identify the fluid category that could be released through each source, along with its operating temperature and pressure.
  3. Estimate the hazard radius from the standard, or by performing dispersion calculations.
  4. Establish duration of release and nature of ventilation, and determine the zone type.
  5. Identify the cloud limits and build the hazardous area boundaries.
  6. Perform analysis.
  7. Recommend the gas group (allowable energy) and the temperature class (allowable maximum temperature) for electrical equipment.
To me these seven steps look like what one does when using the Dow Chemical Exposure Index and/or the Dow Fire & Explosion Index. Recalling the ATEX presentations I have attended, electrical area classification does not involve release calculations at all.
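The likelihood-based logic described earlier - Zone 0, 1 or 2 according to how likely a flammable atmosphere is, not according to release-rate calculations - fits in a few lines. This is a deliberate simplification of the grade-of-release concept used in standards such as IEC 60079-10-1; real classification also adjusts zones for ventilation effectiveness:

```python
# Sketch: electrical area classification by likelihood of a flammable
# atmosphere (grade of release), with no dispersion calculations involved.
# Simplified from the grade/zone relationship in IEC 60079-10-1.

GRADE_TO_ZONE = {
    "continuous": 0,  # flammable mixture present continuously or long periods
    "primary": 1,     # expected periodically during normal operation
    "secondary": 2,   # only during abnormal events, and then briefly
}

def zone_for(grade):
    """Map a grade of release to its electrical area classification zone."""
    return GRADE_TO_ZONE[grade.lower()]

print(zone_for("continuous"))  # 0
print(zone_for("secondary"))   # 2
```

The point is that the classification question is "how often is a flammable mixture present here?", which is answered from process knowledge, not from flange-by-flange release calculations.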

The figure above is titled "Example of a hazardous area classification". Some of the elements are structures one encounters at onshore facilities, such as the control room, internal and external roads (which probably should have been listed separately), and the admin building, and some are mostly encountered at offshore facilities, such as the helideck and boat landing (which probably also should have been listed separately). Others, such as "Unrestricted vehicle movement" and "Unrestricted public movement", seem to be properties of some of the already mentioned items, such as the internal and external roads or the admin building.

The section "Analysis of HAC classification" lists five situations in which changes to area classifications may occur:
  1. During the engineering phase, or while performing the detailed design.
  2. During capacity revamp, or rejuvenation phase (brownfield projects).
  3. During temporary operations phase that may determine the HAC.
  4. During the reassessment phase, or during the "legacy-as-building" phase.
  5. During the drawing preparation phase, due to human error.
Some of the statements which make me uncertain about Mr. Bapat's purpose are "..the detail of each source of release is available.", "sometimes a few flanges are intentionally introduced..", "Sometimes... the flare load is increased.", and "..the increased quantity of released gas increases the zone area.". Again, these look more like issues in connection with Dow Index calculations than issues in electrical area classification.

As far as I can see, the purposes of the five subsections listed above are not clear. The final section lists nine recommendations relating to confining hazardous areas and zone sizes:

  1. Involve competent personnel.
  2. Plan the piping routing study early.
  3. Consider procedural controls.
  4. Select the right fluid category.
  5. Select normal operating conditions. 
  6. Size of release source should be precise.
  7. Consider performing dispersion calculations.
  8. Consider operation of pressurization unit.
  9. Isolate all ignition sources.
As far as I can tell, none of these so-called recommendations has anything to do with electrical area classification in a plant or laboratory. Also, some provide advice which is not in line with proper safety practices, such as selecting process conditions which limit the area influenced by a release.

The article provides two references, to a website about US electrical area classification and a website about UK electrical area classification, respectively. Neither of these contains any information about performing release calculations in connection with electrical area classification.

I hope that Hydrocarbon Processing will soon revisit the topic of area classification at facilities processing hydrocarbons, both from the point of view of electrical areas and from the point of view of fires and explosions. But writer competence is as important here as maintenance competence is for keeping our production facilities safe.

Friday, September 30, 2016

50 Year Old Technology Comes South

In the June issue of Hydrocarbon Processing one finds an article, written by three people from Valero Energy Corp, titled "Utilize an optimizer to blend gasoline directly to ships". Valero's Pembroke refinery, built in the 1960s, was due for a DCS upgrade, and the organisation decided to simultaneously upgrade its gasoline blending system. A simplified graphic of the system is shown in the figure below.

The idea of blending directly to shipment is, however, not new. Already in the 1970s, Imperial Oil blended product directly to the pipeline for shipment east from Edmonton to customers in Ontario. There was no laboratory check prior to the blend entering the pipeline, and in those days graphics displays were still in the research labs. So in a way, 50 year old technology is coming to Pembroke.

However, it appears the optimizer at Pembroke does take things one step further. First, it can blend to 20 different parameters - although most of the time only 5 are used. The article also indicates that the operators have a single display on which to interface with the blending system and start or stop transfers to ship.

As a former control engineer I would have liked to see a copy of the actual operator DCS display of the blending optimizer. I find that such displays are key to acceptance. Especially when there are up to 20 parameters, you need to make it transparent to the operator which of these are on spec and which still have some giveaway.
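A minimal sketch of the kind of transparency I have in mind - for each blend parameter, show whether it is at spec or how much giveaway remains. The property names, spec limits and blend values below are invented for illustration, not Pembroke's actual parameters:

```python
# Sketch: per-parameter giveaway report for a gasoline blend.
# Property names, spec limits and blend qualities are hypothetical.

def giveaway_report(specs, blend, tol=1e-6):
    """specs: {name: (limit, sense)} with sense 'max' or 'min'.
    Returns {name: giveaway}; 0.0 means the parameter is exactly at spec."""
    report = {}
    for name, (limit, sense) in specs.items():
        value = blend[name]
        slack = (limit - value) if sense == "max" else (value - limit)
        report[name] = 0.0 if abs(slack) < tol else round(slack, 3)
    return report

specs = {"RVP_psi": (9.0, "max"),     # vapor pressure, blend up to the limit
         "RON": (95.0, "min"),        # octane, blend down to the limit
         "sulfur_ppm": (10.0, "max")}
blend = {"RVP_psi": 9.0, "RON": 95.8, "sulfur_ppm": 7.2}

print(giveaway_report(specs, blend))
# {'RVP_psi': 0.0, 'RON': 0.8, 'sulfur_ppm': 2.8}
```

On a well-designed display the operator would see at a glance that RVP is at spec while 0.8 octane numbers of giveaway remain on RON.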

Transparency is especially important if you want the operator to take action during a partial system failure, such as the failure of a single pump in the blending system. The key is to maintain operator awareness during such events. It also requires the optimizer to be able to run with any subset of pumps to cope with such a situation.

Some refineries have gone all the way, completely eliminated the QC lab, and switched to online analyzers. This gives significant savings and also more reliable data for the process operators. Are you moving in that direction?

If not, I think you should!

Wednesday, August 17, 2016

The reporting about a flash fire at an oil terminal

Yesterday my Hydrocarbon Processing Daily News reported on a flash fire at the Sunoco Logistics Terminal in Nederland, Texas last Friday. This is what the story looked like in the email I received from Hydrocarbon Processing:

A Sunoco spokesperson stated to Hydrocarbon Processing that the workers were either preparing to weld or welding a pipe to a new crude storage tank.

But what do we know?
We know that a spark ignited some hydrocarbon vapors in and around the workers, resulting in a flash fire which critically injured four of the seven contract workers on the particular job. We also know that the other workers on the job suffered minor injuries, and that the critically injured were transported to different Texas burn centers.

Unfortunately, Hydrocarbon Processing was not sufficiently concerned about the status of the critically injured workers to provide an update on their condition in the story distributed yesterday - four days after the fire. However, local officials talking to the media on Saturday morning didn't have updates on the status of the critically injured either.

And what do we not know?
We know that the local sheriff's office issued the following statement: "We would like to reassure the public that there was no danger to residents who live near the plant." I would say that statement is wrong, because when the fire started it could have spread to nearby crude storage tanks and resulted in a crude storage tank fire. It was only because the fire was quickly controlled that there was no danger to the public.

Could we have avoided this event?
Probably, and easily: by monitoring the perimeter of areas with workers using hydrocarbon detectors, or just by having the workers wear hydrocarbon detectors. Such detectors could probably have warned the workers early enough to result in lesser injuries.

This flash fire is an example of not paying attention to the process around you when working in a refinery or chemical plant, and it is in my view the responsibility of facility management to ensure that all workers, both employees and contractor employees, focus on process safety - and not just personal safety.

As always, it comes down to cost and who pays. Who pays for hydrocarbon detectors for contractor employees? The contractor, or the company hiring the contractor? Who has the most at risk? Clearly the company hiring the contractor. Who pays for the treatment of the injured workers at the burn centers? Texas taxpayers, or the company hosting the incident?

Saturday, August 13, 2016

CSB Safety Alert Could Be Improved to Increase Its Impact

During the past week the CSB issued a Safety Alert about high temperature hydrogen attack. You can read it here. In this safety alert the CSB uses the following structure:

  • Background and Investigation Findings
  • CSB Recommendation No. 2010-8-I-WA-R10
  • Catastrophic HTHA Equipment Failure Can Still Occur Using the New API Nelson Curve
  • CSB Safety Guidance to Prevent HTHA Equipment Failure
I think the CSB would have had a much higher impact with the following structure:
  • CSB Safety Guidance to Prevent HTHA Equipment Failure
  • Background and Investigation Findings
  • Catastrophic HTHA Equipment Failure Can Still Occur Using the New API Nelson Curve
And just for inspiration, here is what this could look like (please send comments!):


CSB Safety Alert:


Preventing High Temperature Hydrogen Attack (HTHA)
Based on the findings in its latest Investigation Report (CSB Report 2010-08-I-WA, May 2014) on the catastrophic rupture of a heat exchanger at the Tesoro Anacortes Refinery on April 2nd, 2010, which resulted in seven fatalities, and on the insufficient guidance from the API in their revised API RP 941 (2016), the CSB issues the following recommendation to industry on preventing high temperature hydrogen attack (HTHA):
CSB Safety Guidance to Prevent HTHA Equipment Failure
  1. Identify all carbon steel equipment in hydrogen service that has the potential to harm workers or communities due to catastrophic failure;
  2. Verify actual operating conditions (hydrogen partial pressure and temperature) for the identified carbon steel equipment;
  3. Replace carbon steel process equipment that operates above 400 °F and greater than 50 psia hydrogen partial pressure;
  4. Use inherently safer materials, such as steels with higher chromium and molybdenum content.

Background & Investigation Findings

The U.S. Chemical Safety and Hazard Investigation Board (CSB) has found that inadequate mechanical integrity programs, including preventive maintenance to control damage mechanisms and aging equipment at chemical facilities, have been causal to incidents investigated by the CSB. The CSB’s investigation into the catastrophic failure of a forty-year-old heat exchanger at the Tesoro Refinery in Anacortes, Washington, determined that the fatal explosion and fire was caused by a damage mechanism known as high temperature hydrogen attack (HTHA), which severely cracked and weakened the carbon steel heat exchanger over time, leading to a rupture.1
Industry uses a standard for determining vulnerability of equipment to HTHA, known as American Petroleum Institute (API) Recommended Practice (RP) 941, Steels for Hydrogen Service at Elevated Temperatures and Pressures in Petroleum Refineries and the Petrochemical Plants. The standard uses “Nelson Curves” to predict the operating conditions where HTHA can occur in different types of steels. The curves are based on process data voluntarily reported to API, and are drawn beneath reported occurrences of HTHA to indicate the “safe” and “unsafe” operating regions. The CSB investigation identified that Tesoro, like others in the industry, used API RP 941 to predict susceptibility of equipment to HTHA damage. The CSB found that HTHA occurred in the Tesoro heat exchanger in the “safe” operating region – where API RP 941 did not predict HTHA to occur.
Predicting and identifying equipment damage due to HTHA is complex. The CSB concluded in its investigation of the Tesoro Anacortes incident that using inherently safer materials of construction is the best approach to prevent HTHA. The carbon steel Nelson curve has repeatedly proven to be unreliable to predict HTHA. For example, the 2016 edition of API RP 941 reports 13 new failures below the carbon steel Nelson curve. In addition, inspecting for HTHA is difficult because the microscopic cracks can be hard to localize and hard to identify. The CSB concluded that inspections should not be relied on to identify and control HTHA, as successful identification of HTHA is highly dependent on the specific techniques employed and the skill of the inspector, and few inspectors were found to have this level of expertise.
As a result of its findings, the CSB made a recommendation to API to further prevent the occurrence of HTHA by revising RP 941 as follows (CSB Recommendation No. 2010-8-WA-R101):
Revise American Petroleum Institute API RP 941: Steels for Hydrogen Service at Elevated Temperatures and Pressures in Petroleum Refineries and Petrochemical Plants to:
  1. Clearly establish the minimum necessary “shall” requirements to prevent HTHA equipment failures using a format such as that used in ANSI/AIHA Z10-2012, Occupational Health and Safety Management Systems;
  2. Require the use of inherently safer materials to the greatest extent feasible;
  3. Require verification of actual operating conditions to confirm that material of construction selection prevents HTHA equipment failure;
  4. Prohibit the use of carbon steel in processes that operate above 400 oF and greater than 50 psia hydrogen partial pressure.

Catastrophic HTHA Equipment Failure Can Still Occur Using the New API Nelson Curve

In February 2016, API published the 8th edition of RP 941. Though this updated guidance does provide incremental improvements, it does not address important elements of the CSB’s recommendation. In the 2016 version, there are now two carbon steel Nelson curves, distinguished by whether the equipment has been post-weld heat treated (PWHT). API’s curve for non-PWHT carbon steel is drawn below the 13 newly reported failures. This Nelson curve does not, however, take into account all of the estimated process conditions where catastrophic failure occurred due to HTHA at the Tesoro Anacortes Refinery. As a result, the new curve allows refinery equipment to operate at conditions where HTHA severely damaged the Tesoro heat exchanger. The use of a curve not incorporating significant failure data could result in future catastrophic equipment ruptures.
In addition, the updated standard does not establish minimum requirements to prevent equipment failure due to HTHA or require the use of inherently safer materials. API already identifies materials that are not susceptible to HTHA failure in API 571.2 The CSB ultimately believes that the stronger option for industry to protect against HTHA is to focus on upgrading equipment susceptible to HTHA with inherently safer materials of construction rather than simply relying on administrative controls. Not only is HTHA very difficult to detect but equipment inspections and post-weld heat-treating rely on procedures and human implementation, which are low on the hierarchy of controls. These options are weaker safeguards to prevent HTHA failures than the use of materials that are less susceptible to HTHA damage. As a result of these noted deficiencies, the Board voted on July 13, 2016, to designate its Recommendation 2010-08-I-WA-R10 with the status of Closed – Unacceptable Action.
Inadequate mechanical integrity programs were causal to several recent incidents investigated by the CSB. In its “Most Wanted Safety Improvements,” the CSB identifies Preventive Maintenance—which includes actions to effectively control damage mechanisms—as a critical industry-wide improvement to prevent catastrophic incidents. The CSB also calls on regulators to modernize U.S. Process Safety Management regulations, including requiring inherently safer systems analyses, as a way to prevent catastrophic equipment failures. More information about these safety topics is available at: http://www.csb.gov/mostwanted/.

References

  1. Further information on the CSB’s investigation of the Tesoro Anacortes Refinery Explosion and Fire can be found at: http://www.csb.gov/tesoro- refinery-fatal-explosion-and-fire/.
  2. American Petroleum Institute (API) Recommended Practice (RP) No. 571 (2003): “Damage Mechanisms Affecting Fixed Equipment in the Refinery Industry” states on page 5-83: “300 Series SS, as well as 5Cr, 9Cr and 12Cr alloys, are not susceptible to HTHA at conditions normally seen in refinery units.”
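Steps 1-3 of the safety guidance in the draft above amount to a simple screening rule over an equipment register. As a sketch of how a refiner might apply it - the tags, materials and operating data below are invented for illustration:

```python
# Sketch of the CSB screening rule from the guidance above: carbon steel
# equipment in hydrogen service operating above 400 degF AND above 50 psia
# hydrogen partial pressure should be flagged for replacement with
# inherently safer materials. Equipment records are hypothetical.

T_LIMIT_DEGF = 400.0
PH2_LIMIT_PSIA = 50.0

def needs_replacement(item):
    """True if the item falls in the operating region the CSB guidance
    says carbon steel should not be used in."""
    return (item["material"] == "carbon steel"
            and item["T_degF"] > T_LIMIT_DEGF
            and item["pH2_psia"] > PH2_LIMIT_PSIA)

equipment = [
    {"tag": "E-101", "material": "carbon steel", "T_degF": 500.0, "pH2_psia": 120.0},
    {"tag": "E-102", "material": "carbon steel", "T_degF": 350.0, "pH2_psia": 200.0},
    {"tag": "E-103", "material": "1.25Cr-0.5Mo", "T_degF": 600.0, "pH2_psia": 300.0},
]

flagged = [e["tag"] for e in equipment if needs_replacement(e)]
print(flagged)  # ['E-101']
```

Note that step 2 of the guidance - verifying actual operating conditions rather than design conditions - is what makes the input data to such a screen trustworthy; the Tesoro exchanger failed partly because actual conditions were not well known.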

Monday, August 08, 2016

Process Reliability Starts with Knowledgeable People

Some time ago J.D. Stroup of Solomon Associates explained in a Hydrocarbon Processing Viewpoint, "What characteristics define the world's best refineries?", that refinery performance correlates with maintenance execution. That was in the May issue two years ago, and at the time I questioned whether there would also be a correlation between refinery performance and process safety performance, but unfortunately Solomon Associates don't collect process safety performance data. At the start of this summer, in the June issue of HP, associate editor H.P. Bloch followed up with excellent advice to all process industry management: process reliability - a key to excellent performance - starts with knowledgeable people.

In the June issue of this year, Heinz Bloch describes a situation where a number of small deviations in the use of bearings resulted in major pump problems at two different refineries, in the maintenance and reliability article "How small deviations and lack of management access compromise reliability". The article describes six minor deviations which added up to a big problem:
  1. An open oil return notch at the 6 o'clock position of the housing bore meant that some oil mist was bypassed and no longer available for the dual purposes of lubrication and cooling.
  2. An unusually wide bronze cage acted as a restriction orifice for the remaining oil mist.
  3. Directed reclassifiers were not used even though the velocity at the shaft periphery exceeded 2000 fpm.
  4. There was a large distance from reclassifiers to bearings at high windage (angular contact).
  5. High viscosity oil created a small puddle of oil at the 6 o'clock position of the bearing's outer ring race, slightly slowing the many rolling elements as they dipped into the puddle and causing skidding or smearing in the machined pockets of the brass/bronze cage.
  6. A shaft-to-bearing interference fit at the upper limit of the permissible range possibly resulted in bearing preload, adding to an already existing high temperature.
To understand the details of the article you have to be a bearing and pump specialist, and I am not even close to that.

Nonetheless, I have no problem understanding the key message: the increased cost of implementing deviation-avoiding steps would be more than balanced by increased reliability. But to get there you need respected and experienced subject matter experts (SMEs) who have access to top management. At the company I used to work for, such technical experts were called "associates", and they had, among other things, the freedom to travel worldwide as they deemed necessary to apply their skills. Without such knowledgeable people with access to corporate level management, any field experience will likely go unheeded, and the blame for an event will be shifted to a lowly employee. Heinz Bloch concludes that the process industry can only hope that good managers truly wish to do something about the indifference to learning from the mistakes of others, and that such managers insist on facts and accountability. I think this is true both in process reliability and in process safety.

While you are reading the June issue of HP, don't skip the reliability blog, in which H.P. Bloch discusses the current biases and challenges facing the process industry, such as cash outlay versus cost over time, as well as the absence of groomed and nurtured experts.

Friday, August 05, 2016

Have you learned the lessons from Stuxnet?

Stuxnet managed to damage centrifuges at a plant in Iran which was supposedly inaccessible remotely. Nonetheless, the creators of Stuxnet managed to get their software onto the control network at an Iranian facility and execute it to damage equipment.

This shows us that if there is a vulnerability inside your control network, then it really does not matter whether that network is directly connected to the internet or not. If there is a transport path to the control network, then it is vulnerable. Such a path exists in all control networks, because the nature of Windows based software systems is that they need to be patched regularly to eliminate bugs and vulnerabilities. Regularly could be as often as every six months or as infrequently as each turnaround. Stuxnet showed us that potential attackers have the patience and time to wait.

Recently Joe Weiss published an opinion in the Unfettered blog at ControlGlobal titled "Process safety and cyber security - they are not the same", which I totally agree with. I used to work as a process control engineer at a major Canadian petrochemicals producer, where we enjoyed the freedom to adjust the gains on control loops on a day to day basis, when we or an operator or a technician judged, that a loop needed attention. Today such adjustments at some facilities are subject to MoC reviews and signoffs. We sure have complicated operations life by not behaving properly.


Recently Mike Basidore reported from CONNECT 2016 about ExxonMobil's views on configurable I/O. Sandy Vasser explained at CONNECT 2016, that in the not too distant future ExxonMobil would install a new HART enabled field device, and have the configuration of the device automatically downloaded from the process control system. With this the vulnerability issue around process control systems expands from the control room to the computers and interconnections used to create the configuration file. Bugs in the configuration program could potentially be exploited to change the functioning of field instruments. Also at issue is who has access to the configuration program and file. Nonetheless I believe, that ExxonMobil's view about configurable I/O is the way towards more effective plant commissioning and operations. The bottom line is, however, that the security of the process control system is no longer just a plant issue.

So there is every reason to participate in the discussion of digital safety systems for critical, high-risk applications, such as those found in our refineries and chemical plants, including often overlooked level monitoring and alarm systems.

Wednesday, August 03, 2016

When will the US process industry wake up?

Early this morning (or late last evening - depending on where you are living) the CSB released their Case Study about a series of sulfuric acid releases more than two years ago at the Tesoro Martinez Refinery in California. The CSB news release states, that the report is on Facility Safety Culture. This is emphasized on the report cover, where it is stated "A strong process safety culture is necessary to help prevent process safety incidents and worker injuries". I agree 100%!

So does the Case Study contain any advice about how one achieves "a strong process safety culture"? Or how to know if a particular facility has that? I think these are important questions for company board members to ask themselves and discuss with fellow board members at board meetings. After all, according to the Baker report THEY are responsible for process safety.

The case study includes an excellent analysis of the two sulfuric acid releases in early 2014, well illustrated with pictures. It also points to many concerns about the safety culture at the refinery. However, like many previous CSB Case Studies and Investigation Reports, this case study also ends by calling for more effective regulation from the authorities.
What the hell is a construction crane doing in the middle of an operating refinery?
But in this case study one does not find any help on how to create a strong process safety culture. I know some companies, which exhibit it, e.g. ExxonMobil or Dow/DuPont (I unfortunately don't know what the combined entity is called). From what the CSB calls a list of issues with the process safety culture I can only conclude, that a safety culture did not exist at the Tesoro Martinez Refinery. In my view the issues are a clear indication of the absence of process safety knowledge from the management group at this refinery.

The CSB also calls for proactive inspections by authorities to help companies, such as the Tesoro Martinez Refinery, implement good safety practices. However, that requires the authorities to employ people more knowledgeable about refinery operations than the people employed by the refineries. I don't believe that is the situation in California or any other place in the world. The issues highlighted by the CSB in the executive summary clearly indicate a lack of knowledge about important areas of process safety among the leaders at the Tesoro Martinez Refinery. That in my view can only be fixed by hiring people with the relevant knowledge.

Process safety incidents continue to take place, and they continue to result in injuries to workers and losses for these workers' families. I think it is time to focus on those who can change things at the refineries, and that is the boards of the companies operating them in cooperation with the managers they have put in charge of the daily operations. Under the current system these persons go free. Their salaries are not reduced after incidents. They don't go to jail after fatalities. What has happened to management responsibility?

Each incident, such as the ones detailed in the current case study, damages the image of the whole process industry. So where are the process industry leaders, who can fix the current state of affairs? Those with a vision about incident free operations. What has happened to process industry leadership?

Does size matter? The six refineries operated by Tesoro together process less crude than one of ExxonMobil's refineries on the Gulf Coast. Can small refineries be operated safely? It is rather unfortunate, that we don't have studies, which compare refineries' safety performance the same way that their operational excellence and maintenance excellence are compared by Solomon Associates. When do we get that?




Tuesday, August 02, 2016

Do your board operators walk around your plant?

Recently Jerry J. Forest from Celanese published the article "Walk The Line" online in Chemical Processing. It is about a corporate wide initiative at Celanese to eliminate loss of containment incidents due to equipment not being properly lined up prior to e.g. a startup. In the article Mr. Forest discusses taking the root cause analysis beyond the usual conclusion "operator error" to also answer the question "why did the operator commit this error?".
Mr. Forest and his team at Celanese deserve credit for recognizing that the key to eliminating line up errors, when equipment was started up or re-introduced after maintenance, was knowing the causes of past incidents beyond the common "operator error". In fact my personal belief is that "operator error" cannot be a root cause, but only an intermediate cause. The figure to the left indicates the decline of events at Celanese without a properly identified root cause. The result of Celanese's focus on root causes was the identification of three primary root causes for incidents related to line up: expectation for energy control not set, lack of continuity of operations, and deficiencies in operational readiness. Unfortunately Celanese chose not to include any indication of the number of LOPC events without an assigned root cause in the years prior to 2014. Why not include the numbers on the ordinate?
I would not be surprised if the original manuscript included figures with numbers on the ordinate, but that during the normal legal review which company publications undergo, they were removed by the legal department - most likely without any discussion with the author. Both in the EU and the USA companies have to report certain incidents to the authorities, so one could probably - with a bit of work - discover how many reportable LOPC events Celanese experienced in the years before 2014. Some of these probably had root causes beyond "operator error" identified, but the trend would likely be similar to the trend in the figure. So let us get the numbers!
The figure with the green bars apparently shows the decline in the number of line-up errors after the program implementation in 2013 (it is not unusual to initially see an increase, as is seen here). The lack of numbers also means we don't know if the changes indicated are statistically significant, or random year on year variations (I believe the former).
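With the actual counts such a significance check would be straightforward. As a minimal sketch - using hypothetical counts, not Celanese's - incident counts in two years can be compared with the classical conditional binomial test for two Poisson rates, using only the standard library:

```python
from math import comb

def poisson_two_sample_p(k1: int, k2: int) -> float:
    """Two-sided p-value for equal Poisson rates, via the conditional
    binomial test: under H0, k1 ~ Binomial(k1 + k2, 0.5)."""
    n = k1 + k2
    p_obs = comb(n, k1) * 0.5 ** n
    # Sum the probabilities of all outcomes at most as likely as the observed one.
    return sum(comb(n, k) * 0.5 ** n for k in range(n + 1)
               if comb(n, k) * 0.5 ** n <= p_obs + 1e-12)

# Hypothetical counts: 9 line-up incidents one year, 3 the next.
print(round(poisson_two_sample_p(9, 3), 3))  # → 0.146
```

A drop from 9 to 3 events - which looks dramatic on a bar chart - gives p ≈ 0.15 here, i.e. it could well be random year on year variation. Which is exactly why the numbers on the ordinate matter.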

I like the focus on a particular type of event, i.e. in this case LOPC. However, I believe that in order to get lasting results from such an effort, you have to also analyse the minor events of this type, and not just those large enough to be reportable. A very successful program to eliminate minor fires was implemented by a major Canadian petrochemicals company about 20 years ago. That program involved treating minor fires as events the CEO should have knowledge about within 24 hours. At one company I used to work for there was a focus on reporting first aid cases, and therefore the first aid kit was in the shift supervisor's office. That ensured reporting of even minor events.

It seems clear from Mr. Forest's description, that operators at Celanese make regular rounds. We used to call them housekeeping rounds, because one of their purposes was to ensure, that housekeeping would not become a problem. However, as I continue to read the article, I see a clear focus on the operators. Unless an effort also has an impact on supervisors and managers, I doubt its sustainability. What do you think?



Thursday, June 02, 2016

Was it the intention, that OSHA PSM should be a continuous improvement activity?


The first time I heard about OSHA's PSM standard 29 CFR 1910.119 was in discussions with van Rijn from Shell during a trip to Osaka and Nagoya to participate in the 1st Production Control in the Process Industry conference arranged by van Rijn and O'shima. I vividly remember discussions about making the complete plant operating procedures available electronically through the control room computers to the board operator.

At the time I had just left a major Canadian petrochemical producer to take up an academic position at the Technical University of Denmark, and I recall that the complete operating documentation for our polyethylene plant was more than 5 meters of ring binders. I could not envision all that on a computer in the control room. But IT has improved, so today I have no problem with envisioning the complete documentation for a chemical plant as one hyperlinked online document. Later I was again confronted with PSM as I attended a local conference on process safety in Houston, Texas, where I was introduced to SAChE by professor Ron Darby from Texas A&M. During a luncheon speech a regional OSHA director stated, that at the time in the mid 90's OSHA envisioned cycles of visits to process industry facilities to assess their PSM performance. During the first cycle all companies were just to be rated, and fines only given for obvious negligence. However, he added, that during the next cycles OSHA would set the bar at the performance of the best companies during the previous cycle. To me that was a clear signal, that OSHA was looking for continuous improvement in PSM performance, and not just compliance.
Hence it was nice to discover Michael Marshall's article "Enhance PSM design with metrics-driven best practices" in the February issue of Hydrocarbon Processing. In this article Michael Marshall clearly states, that it is OSHA's aim, that PSM should be a continuous improvement cycle with Plan-Do-Check-Act as in other management activities, and he mentions recent developments in Flare and Overpressure Management Systems (FOMS) as an example to follow. Mr. Marshall goes on to argue, that the continuous improvement of PSM should be sustained through KPIs relevant for the facility and task, and states that API 754 Measuring Process Safety may not have all the answers. These are shortcomings which the Chemical Safety Board has also pointed to.
Michael Marshall goes on to state, that the development of the relevant KPIs is a four-phase process involving:

  • Phase 1: Where are we now?
    • Identify and engage process owners and stakeholders
    • Compile available documents and information
    • Flowchart current processes, tasks and procedures
    • Identify current tools and technology
    • Understand strengths, weaknesses, opportunities and threats (SWOT) in existing processes
  • Phase 2: Where do we want to go?
    • Engage process owners and stakeholders for vision, objectives and value drivers
    • Baseline processes and perform gap analysis
    • Evaluate gaps and tradeoffs (costs)
    • Redesign processes and functionalities
    • Specify tool and technology needs
  • Phase 3: How are we going to get there?
    • Identify needs and objectives
    • Develop strategy purpose
    • Establish team leadership
    • Perform root cause analysis
    • Design metrics, KPIs, reports, automation tools
    • Initiate training programs
    • Implement transition plan, pilot and then roll out
  • Phase 4: How do we improve, grow and keep going?
    • Implement and validate redesigned process
    • Initiate ongoing metrics and management systems
    • Monitor, evaluate and report on new processes
    • Review targets and performance
    • Audit and adjust for continuity, sustainability and growth
(I have in the above only included the top two levels of the more than 1½ page long list of bullet points in Mr. Marshall's article). In my view the bullet points could be more specific to the task at hand, i.e. development of KPIs to be used in process safety management (PSM). Especially since Mr. Marshall already at the start of this section states, that he believes the KPIs should be based on the data in enterprise asset management systems in order to allow drilling down deeply enough to find root causes.

So unfortunately, this very well motivated article on continuous improvement of PSM leaves me without concrete advice except a list of bullet points to consider in developing my own KPIs. Can't we do better than that? I certainly believe so!

Saturday, May 14, 2016

Can you learn to prevent overfilling of storage tanks by reading HP?

After the Buncefield explosion and fire there has been a significant increase in attention to preventing overfilling events in connection with hydrocarbon storage tanks. As an example, less than a month after the event one of the refineries in Denmark had already checked that the dual level measurements on their storage tanks were not subject to common mode failure. The refinery is located 2-3 kilometers from the center of the city and 1½ kilometers from a major power plant. Therefore, I was happy to see Amjad Dokhkan's article "Prevent the overfilling of storage tanks" in the February issue of Hydrocarbon Processing. (Picture - from Wikipedia - shows the Buncefield fire moments after the explosion.)

After reading the article I am less enthusiastic, since in my view it contains a number of significant mistakes and incorrect statements and advice. I cannot refrain from reaching the conclusion that the problem is related to insufficient peer review of the articles. Nonetheless I enjoy each issue, and especially the insightful ones by Heinz Bloch on maintenance, such as "Transform ODR-OPPM into a worthwhile initiative" in the April issue or the article "How much fireproofing do we need" in the same issue. Such articles make each issue worthwhile.

I don't understand why Amjad Dokhkan already in the first section of the article states "Site owners are strongly advised to assess the risks posed by their stored inventories before they start planning to install these expensive automated protection systems". Why? In their simplest form I don't believe a semiautomatic or even an automatic overfill protection system can be considered expensive. After all, these days most operators would like to have information about the inventories of all their tanks in the control room. So they would normally have at least one remotely monitored level measurement on each tank. Turning this level measurement into a semiautomatic overfill protection system requires just the implementation of an alarm on that level measurement. In a modern DCS that is cheap and straightforward - once the appropriate alarm engineering has been done, which ensures the operator has sufficient time to react when the alarm is activated, before it turns into an overfilling event.

Neither do I understand the statement in the section "Consequence analysis" that "They should also be able to determine the likelihood that an overfilling event could occur in the first place." If there is a line feeding the tank, then an overfilling event can occur no matter what measurements and alarms the tank is equipped with. Measurements and alarms can fail when needed, automatic shutdown systems can fail - especially if only a single actuator is used - and operators may overlook or forget to act on an alarm. However, if the tanks are designed to inherently avoid overfilling events - although I don't know how that could be done - then an automatic overfill protection system is not needed. (The picture - from Wikipedia - shows the firefighting in connection with the splitter overfilling event at the BP Texas City Refinery in March 2005.)

Further on in the same section it is stated "Obviously the larger the tank, the larger the risk". Mr. Dokhkan makes the common mistake of equating consequences and risk. Naturally the consequences of overfilling a large tank can be larger than those of a smaller tank. However, the risk is also a function of the reliability of the automatic protection systems, which the tank is equipped with. I would argue, that a larger tank is more likely to be equipped with sophisticated protection systems than a smaller one. In the same paragraph it is incorrectly stated "Smaller tanks have a lower storage capacity and, consequently, a greater likelihood of overfilling than larger tanks". The likelihood of an overfilling event depends on how often the tank is filled to capacity, not on the size of the tank. But naturally the consequences of an overfill event depend on the size of the tank.

At the end of the section "Consequence analysis" it is mentioned, that commercial consequence modeling software allows the simulation of overfilling events, but unfortunately it is not mentioned, what such simulations could be used for in connection with preventing the overfilling of storage tanks - the subject of the article.

In the following section titled "Likelihood analysis and protection layers" the author argues, that likelihood analysis should start with the identification of events, which could initiate an overfilling event. Among the initiating events is mentioned "filling an already filled tank". If a properly designed automatic overfill protection system was in place, then that system would not allow alignment of valves to fill an already filled tank. That was a standard feature of a polyethylene transfer system, which I helped implement in the early 1980's. Only if one envisions - and I have great difficulty with that - a completely manual tank operation with field manipulated valve movements and pump starts can I visualize someone filling an already filled tank.

As the next step the author wants us to analyse existing protective systems. I begin to wonder what scenario the author is discussing. If it is analysis of an existing tank farm, then I believe the natural starting point for an analysis would be a list of existing protection systems, level measurements, remote and manual valves, and a drawing of the piping network.

The author further argues, that a protection system must satisfy four criteria, i.e. be dependable, be independent, be specific and be auditable. These are not properties, which I have come across when reading about functional safety interlocks. Naturally a protection system must be dependable, and that I believe is what we calculate as PFD - probability of failure on demand. Generally we also want a protection system to be independent, i.e. that the level measurement used in the interlock system does not come from the same sensor, as the level measurement used for level control. Although lately it appears the community is relaxing a bit on this requirement by allowing a measurement of sufficient reliability to be used both as input to a level controller and as input to a safety interlock. I think this is due to improved reliability of sensor technology.

However, I do indeed wonder how a certified functional safety professional and certified occupational safety practitioner can ever assign a reliability of 0.01 to an operator response, as he argues for in the section "Weighing operator response". Many of the safety professionals, whom I know, would even consider a PFD of 0.1 as too high for operator response to an alarm. They are more likely to give the PFD for operator response as either 0.5 or 1. Neither can I understand why the lighting of the tank area has any influence on a system to protect a tank against overfill. Is the author thinking about manual operations? In my view this has nothing to do with overfill protection systems, although it could be relevant in connection with a manually operated storage facility. Such a facility I have some difficulty seeing in the hydrocarbon processing industry.

Finally I don't believe it is correct to say, that one assigns PFDs to protection systems. The PFDs are calculated based on the reliability and failure modes of the components involved in a given protective system, partly based on the plant experience with the involved components. Even here the issue of the state of the tank pops up again in another discussion of the tank already being full. Again I would argue, that the operator should not be able to select a full tank, i.e. not be able to open the inlet valve to a full tank.
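To illustrate the point that a PFD is calculated rather than assigned, here is a minimal sketch using the simplified IEC 61508 formula for a single proof-tested channel. The failure rate and proof test interval below are assumed numbers for illustration, not taken from the article:

```python
def pfd_avg_1oo1(lambda_du_per_h: float, test_interval_h: float) -> float:
    """Simplified IEC 61508 average PFD for a single (1oo1) channel:
    PFD_avg ≈ lambda_DU * TI / 2, valid when lambda_DU * TI << 1."""
    return lambda_du_per_h * test_interval_h / 2.0

# Hypothetical level switch: dangerous undetected failure rate 2e-6 per hour,
# proof-tested once a year (8760 h).
pfd = pfd_avg_1oo1(2e-6, 8760)
print(f"{pfd:.4f}")  # → 0.0088
```

The result depends directly on the component failure rate - which is where the plant experience comes in - and on how often you proof test, neither of which can simply be "assigned".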

With knowledge about the tank level, the tank capacity and the flow rate into the tank it is rather simple to provide the operator with an indicator showing e.g. seconds or - better - minutes to a full tank based on the current scenario. I really don't see operators doing calculations of this kind in the facilities I have been in contact with. They monitor and optimize the operation of the facility based on e.g. weather and demand.
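Such an indicator really is a one-line calculation, which is why it belongs in the DCS and not in the operator's head. A minimal sketch - the tank numbers are made up for illustration:

```python
def minutes_to_full(level_m3: float, capacity_m3: float, inflow_m3_per_h: float) -> float:
    """Minutes until the tank reaches capacity at the current net inflow.
    Returns infinity if the tank is not filling."""
    if inflow_m3_per_h <= 0:
        return float("inf")
    return (capacity_m3 - level_m3) / inflow_m3_per_h * 60.0

# Hypothetical tank: 4,000 m3 in a 5,000 m3 tank, filling at 600 m3/h.
print(round(minutes_to_full(4000, 5000, 600)))  # → 100
```

In practice the inflow would come from a flow transmitter or be inferred from the rate of change of the level measurement, and the display would update continuously.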

As a closing note I would consider a tolerable risk of 0.001 or 0.0004 as rather high in connection with probable loss of life from an event at a hydrocarbon processing facility. In fact I think they are several orders of magnitude too high, but they may be tolerable in connection with a complete ALARP study. The operators I have been connected with would consider them intolerable.

So to answer my own question: You cannot learn to prevent overfilling of storage tanks by reading HP unless the peer review is considerably improved. Just my honest opinion.

Sunday, May 08, 2016

Do online analyzers improve quality and safety?

Personally I have no doubt, that online analyzers improve quality. However, implementation and usage are key for operator acceptance. I used to work at a polyethylene facility, which had the luxury of having dual online process analyzers on the re-circulation stream to the fluidized bed reactor.
Initially we thought, great! By properly spacing the start of the sampling we could double the sampling frequency. However, at that time we failed to understand, that even though an attempt was made to create two identical analyzer systems, there were small differences in the sample lines and the columns used in the analyzers. The result was differing readings from the two analyzers.

Probably if our analyzer system had been subjected to a proper HAZOP study, like the rest of the plant, many of the operational issues, which we encountered, would have been uncovered during such a study. There are probably at least two reasons why such a study was not performed when this system was put in operation in the mid eighties: i) dual systems were installed (redundancy to avoid quality issues during an analyzer outage), and ii) lack of expertise on online analyzer systems (just one analyzer engineer and control engineers with insufficient experience).

I was reminded about this experience by reading the article "Solve online analyzer time delays by improving sampling system design" by W. Tanthapanichakoon and K. Suriye in the January issue of Hydrocarbon Processing. While this article focuses on a particular aspect of online analyzer design, it draws attention to the fact, that such systems also should be designed with care and have performance issues. Hence it appears appropriate to ask: "Why are online analyzer systems not subjected to a HAZOP study like other parts of a refinery or chemical plant?". Note, that if the analyzers are used for online process control, then the systems are subject to MoC requirements.

In their article Tanthapanichakoon and Suriye show, that sampling systems associated with online analyzers can be improved by paying attention to sampling system design. Other performance aspects of online analyzers could probably be equally improved. One aspect is the practice of regularly running a standard sample through the analyzer once a week, and then adjusting the analyzer based on the result. Hopefully this is no longer done, since we now know, that only when the result of the standard sample is outside the control limits on the control chart should the analyzer be adjusted.
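The control-chart rule is simple to state in code. A minimal sketch using the usual Shewhart 3-sigma limits - the certified value and sigma below are hypothetical:

```python
def needs_adjustment(reading: float, target: float, sigma: float) -> bool:
    """Shewhart control-chart rule: adjust the analyzer only when a
    standard-sample reading falls outside the 3-sigma control limits."""
    return abs(reading - target) > 3.0 * sigma

# Hypothetical standard sample: certified value 5.00, historical sigma 0.05.
print(needs_adjustment(5.08, 5.00, 0.05))  # → False (within limits, leave the analyzer alone)
print(needs_adjustment(5.20, 5.00, 0.05))  # → True  (outside limits, recalibrate)
```

Adjusting on every standard-sample result, instead of only on out-of-control results, just adds the calibration noise back into the measurements - which is exactly why the weekly-tweak practice should have died out.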

A nice example of improved operator confidence in analyzer results comes from a facility that moved from off-line laboratory measurements twice a day to online analyzer measurements every two or four hours. The operators started to adjust the process, when they saw a trend in these slow online measurements. With twice a day lab measurements the trends were not apparent to the operators, since no one cared to plot the numbers and pass the results from shift to shift.

The article by Tanthapanichakoon and Suriye clearly shows, that even analyzer systems benefit from having a team involved in their design. In the mid eighties an analyzer project was a one-man project, and usually that was also the person, who once the analyzer was commissioned would be responsible for the day to day maintenance activities - possibly with an instrument technician lending a hand now and then.

I think that today, with the focus of HAZOP being more on the hazard side than the operations side, a large opportunity to improve the performance of online analyzers is missed during their design. Taking that opportunity would give improved analyzers, which result in improved control, which in turn results in improved safety.

It is about time, that online analyzers get the same focus as instrumentation generally gets in our projects. They deserve it, and they will reward us with improved online analyzer performance. What do you think?

Friday, May 06, 2016

How does one get more focus on process safety?

Veronica Luna attempted with the article "Improve facility safety by understanding process and personal safety" to get people to focus more on the big process safety events. The article was published in the January 2016 issue of Hydrocarbon Processing, which is widely available to professionals - including management - in the hydrocarbon industry. Unfortunately I don't think it got the impact it deserves. The problem is the title and especially the little word "and".

I think the title should have been more focused on the differences between process safety and personal safety, which is what I believe Veronica Luna attempts to describe in the article. Hence I suggest a more attention getting title would be "Improve facility safety by understanding the difference between process safety and personal safety". With this title it becomes much clearer, that the article compares two concepts, and not a physical entity, i.e. a process, with an abstract concept "personal safety" - abstract in the sense John Searle explains in his first lecture of the course "Philosophy of Society". The meaning of the concept "personal safety" is observer relative, i.e. it does not exist in the physical world, whereas the process exists independently of the observer, and hence is observer independent. I believe that practicing engineers need some knowledge of the social reality they are working in.

In the first paragraph of her article Veronica Luna correctly states that knowledge and understanding of process safety is lacking in many hydrocarbon processing facilities. I think, that part of the reason for this state of affairs is that we often talk about "process and personal safety", when we should be talking about "process safety and personal safety". There is a big difference.

Veronica Luna continues by mentioning what she calls established personal safety metrics, and mentions total recordable injury rate (TRIR) and lost time injury (LTI) as examples. The problem is, that LTI is not a metric. LTI is a label for a certain type of injury. A quick Google search reveals, that there are two types of metrics w.r.t. personal safety in use: i) injuries per 100 employees in a given time period, and ii) injuries per 1,000,000 hours worked in a given time period. Which is relevant depends on your focus, i.e. the individual or the facility.
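The two metric types differ only in their normalization constant. A minimal sketch - the injury and hour counts below are hypothetical:

```python
def trir_per_100_employees(recordables: int, hours_worked: float) -> float:
    """OSHA-style rate: recordable injuries per 200,000 hours worked,
    i.e. per 100 full-time employees working a year."""
    return recordables * 200_000 / hours_worked

def rate_per_million_hours(recordables: int, hours_worked: float) -> float:
    """The per-1,000,000-hours variant common outside the US."""
    return recordables * 1_000_000 / hours_worked

# Hypothetical site: 4 recordable injuries in 500,000 hours worked.
print(trir_per_100_employees(4, 500_000))   # → 1.6
print(rate_per_million_hours(4, 500_000))   # → 8.0
```

Both are rates normalized by exposure, which is what makes them metrics - whereas LTI on its own is just a classification of an injury.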

In my view it would also help to point to similarities between process safety and personal safety: They are both concerned with avoiding hazards turning into incidents. More precisely we are concerned with two types of hazards: i) personal safety hazards, and ii) process safety hazards. Hazards are not just associated with the process. Hazards are also associated with e.g. the field operator climbing a tower (some of my friends would argue, that a person should not climb an operating tower, and hence ladders should not be provided). The common point in dealing with both personal safety and process safety is the identification of the hazards: associated with a job task for personal safety, and present in the facility in the case of process safety. Examples of personal safety hazards are poor housekeeping in an area, which leaves trip hazards lying around, or insufficient rails on raised platforms leaving a fall hazard, to mention just two. Examples of process hazards are a vessel not being properly purged before opening, or corrosion of a vessel so it no longer can keep the hazardous material inside.

I think it would be very helpful to have available lists of potential personal hazards and potential process hazards in a given type of facility. Especially for young professionals. It seems that current practice is to create these lists based on the knowledge and experience of the participants each time they are needed. This in my view does not facilitate professional development and building the knowledge of the group of professionals as a whole. Lists of potential hazards should be shared freely with others, since it is how you handle the hazards that makes the difference.

Veronica Luna appears to argue, that a personal safety event has smaller consequences and affects fewer people than a process safety event. I disagree with this viewpoint, as probably do also the families of the 90 workers killed each week at US workplaces in 2014. Losing a main source of income is from the family's point of view a high consequence event. It is only on the surface, that a personal safety event only affects one worker, since it also affects the worker's family and friends. So both personal safety events and process safety events have significant consequences. It is to eliminate these consequences, that we work on both personal safety and process safety.

I also disagree with Veronica Luna's argument, that higher engineering and management skills are needed to deal with process safety than with personal safety. How do you measure a level of skills? I don't know. You can measure a person's ability to perform a task, whether that is an operations task or an engineering task. Naturally the skills needed for dealing with process safety and personal safety are different. However, I would claim, that both require engineering skills and management skills.

Further on in the article it is argued, that process safety incidents are major accident events. Some process safety incidents are indeed also major incidents. However, at one Canadian company the CEO demanded a report on even the smallest fires on his desk within 24 hours. Why? He wanted to reduce the number of small process safety incidents in order to reduce the likelihood of a major process safety incident.

I think it is a good idea to relate major pieces of new regulation to the process safety incidents which preceded them, as is done in the article, although I don't think there is a one-to-one relationship. For example, it is doubtful that the Macondo well blow-out in the Gulf of Mexico had any link to OSHA's Refinery National Emphasis Program.

In the discussion of safety performance the article would benefit from mentioning the process safety metrics recently developed by CCPS, which have already been modified and adopted by several European companies in the hydrocarbon processing industry.

The article ends with a discussion of 3 process safety incidents from the past 20 years: the Esso Longford gas release in Australia in 1998, the BP Texas City Refinery explosion in 2005, and the Gulf of Mexico well blowout in 2010. Unfortunately the purpose of choosing these events is unclear, and the discussion is in my view too superficial to allow the reader to decide if there is any learning relevant for her site. At Longford it is well known that the engineering staff, who could have advised operators during the abnormal situation they encountered, had been moved several hundred kilometers away to a major city - without the communications facilities we today take for granted, such as a Google Hangout or Skype video call. At the BP Texas City Refinery, HAZOP studies had years before recommended modifications which were never implemented, although opportunities had been there. Why not discuss either Buncefield or Fukushima?

The takeaway is right on: "Understanding process safety hazards - and their fundamental difference from personal safety hazards - is a crucial step towards achieving a good level of safety in the facility." Now I am just left with an easy question for you: "What is a good level of safety?" Could someone please answer that simple question!

Thursday, January 21, 2016

Safety promotion in chemical plants anno 1985

While cleaning up some old keepsakes I discovered this comb from the days in the mid-80s when I was earning my living as a process control engineer at Imperial Oil's Esso Chemicals site at Sarnia in southern Ontario.

On the comb it says "Comb the area for hazards". I guess that was a rather naive statement, and in retrospect it appears to indicate a wrong focus: a focus on the hazards, which in my opinion can never be eliminated from a chemical plant. I think - and I hope many others agree - that the focus in chemical plants and other process facilities should be on reducing the risks of day-to-day operations to an acceptable and manageable level. On a few occasions this can be done by using inherently safer technology to eliminate a hazard. But more often than not the hazard is inherent in the materials used or the products produced, so the best we can hope for is to reduce the risk.

Another trinket from those days is a screwdriver set on which the inscription reads "The Esso Safety Key - Stay Injury Free". That also appears rather naive.

I wonder if we in those days really believed that distributing things like combs and screwdriver sets would improve safety at work or at home. Sometimes these trinkets were distributed after a certain period without a first aid injury or a lost time injury.

The only item I can say really improved safety in my home - and continues to do so - is a fire extinguisher, which I won at a company Christmas event in Sarnia. We still have it just outside our kitchen.

I think it would be more beneficial for safety if local leaders and managers recognized good safety behavior by going around their plants and, on the spot, giving an employee a pat on the shoulder. That you can be proud of, and tell your friends and family about.

What do you think? Are you still receiving trinkets for safety performance?