Saturday, November 07, 2015

Innovations in Advanced Process Control?

Over the years I have come to enjoy and appreciate Heinz P. Bloch's one-page articles in Hydrocarbon Processing about reliability. They are always full of insight I can learn from, whether the particular issue relates to the benefits of attending conferences, a specific maintenance issue, or a staff issue as in October's column titled "Hire or train a reliability professional?". That column was based on a question from a reader about how to satisfy a company's need for a reliability expert. The answer was not a clear "train" or "hire", but provided insight into the issues involved with either choice.

Turning the page was like a cool shower. The next article was titled "APC technology adapts quickly to economic changes" and written by Tushar Singh from AspenTech. Finally a topic in HP on which I was not completely blank. When I started reading the article, my initial enthusiasm turned to wondering how knowledgeable the writer really was about APC - and I say that as someone who has not been active in the field for more than 20 years, although I have kept in contact with active acquaintances over the years.

The first paragraph, except for mentioning the need to optimize operations, is empty talk. The second claims that companies are helped by ground-breaking innovations in APC. However, those ground-breaking innovations are never named. I believe the ground-breaking innovations in APC occurred in the early nineties, when Henrik Brabrand at DTU developed MIMOSC (multi-input multi-output self-tuning controller) and applied it to a heat-integrated distillation column pilot plant of semi-industrial scale. At the same time Jacques Richalet of Adersa applied similar ideas to aluminium sheet metal production. Mr. Singh owes us a mention of which ground-breaking innovations he is writing about.

Mr. Singh goes on to claim APC is crucial to maximizing profitability. I wonder what definition of APC should be used here. Normally I think of APC as the layer between the business planning software on the mainframe and the single-loop controllers in the plant. The business planning software maximizes profitability and then tells the APC what to do to execute the plan.

The next section jumps to the very different question of the perceived skill shortage, and manages to write: "If the controllers react in a manner, that the operators do not understand, the controllers could perform incorrectly and be turned off, as the controller model will not reflect the plant behavior at a particular point in time". I have no idea what Mr. Singh means here. The author further states "align the controller objectives to tuning parameters". Again, I have no idea what the meaning should be.

Finally there is a section related to the title of the article. The section heading is "Responding to changes in economic objectives". However, at the end of the first paragraph I am lost by the statement "keep pace with a highly competitive global landscape". I wonder what input the APC gets from the global landscape, or why undesirable scenarios need to be accommodated.

Luckily, articles as superficial and empty as this one are very rare in Hydrocarbon Processing, and I keep looking forward to each new issue. But what do you think about vendors' contributions to HP?

Saturday, September 12, 2015

Can the process industry learn from the telephone industry?

In the July issue of Hydrocarbon Processing there is an article by E. Spiropoulos from Yokogawa titled "Use advanced automation and project management to simplify refinery construction". It really is about how automation projects are traditionally executed and how industry - and in particular ExxonMobil - would like to see them executed in the future.

Traditionally, automation projects have been executed sequentially: Design --> Application --> Hardware --> Field Installation --> Loop check. ExxonMobil and others argue that this can be done better through improvements in project management and technical improvements, moving the tasks to be performed from sequential to parallel, which will make projects shorter and avoid schedule overruns. The mentioned article also reproduces a contribution by T. Madden from ExxonMobil Development Company in Houston, titled "ExxonMobil's plan for self-configuring field devices". The concept is called DICED and involves the following steps:
DICED works if the pieces are made to work together.

  • DETECT: Automatic detection of new HART-enabled devices on the control network.
  • INTERROGATE: Automatic transmission of a HART command requesting the device tag, using HART 6 or HART 7 devices with long tags. In case of failure, the user automatically gets a message about a change in field wiring.
  • CONFIGURE: Automatic configuration with engineering range, engineering units and other configuration information. Field devices are purchased with just the tag name preconfigured.
  • ENABLE: Engineering, configuration and testing of a new field device are done in a virtual environment, so the system knows about the new device and the control strategy it is associated with. At this point the device and its associated logic are enabled for use.
  • DOCUMENT: If the above steps are successful, the result is recorded in the control system event log. The expectation is that some commissioning activities can also be automated.
This process will automate a number of tasks which currently, and in the past, have required field trips by technicians and sometimes also radio contact with an engineer and an operator in the control room.
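To make the flow of the five DICED steps concrete, here is a minimal sketch of the commissioning logic. The device and database objects are stand-ins of my own invention, not a real HART stack or ExxonMobil's implementation:

```python
# Hypothetical sketch of the DICED sequence; Device and the engineering
# database are invented placeholders, not a real HART library API.

class Device:
    def __init__(self, tag):
        self.tag = tag          # only the long tag is preconfigured at purchase
        self.config = None
        self.enabled = False

def diced(new_devices, engineering_db, event_log):
    for dev in new_devices:                         # DETECT: new devices found on the network
        tag = dev.tag                               # INTERROGATE: read the HART 6/7 long tag
        if tag not in engineering_db:               # unknown tag => field wiring change message
            event_log.append(f"wiring change: unknown tag {tag}")
            continue
        dev.config = engineering_db[tag]            # CONFIGURE: range, units, etc.
        dev.enabled = True                          # ENABLE: after virtual checkout
        event_log.append(f"{tag} commissioned")     # DOCUMENT: record in the event log
    return event_log
```

The point of the sketch is that each step is mechanical once the tag is the key into the engineering database, which is exactly why the devices need only the tag preconfigured.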

But why is ExxonMobil stopping here? In the past, making a phone call between two parties required that they were connected with physical wires in order to talk to each other. Cell phones have changed that. Today phone companies can connect any two persons from and to most points on earth. They can even handle connections to individual phones in a stadium filled with 40,000 or more people, many of whom simultaneously use their phones to take photos or video and upload them somewhere for sharing. This is possible because each phone has a unique identification - much like a tag in a control system. The many phones connect to a provider's phone exchange, which then connects to users at nearby or faraway places.

The phone from which the phone call originates is much like a field measurement signal in a process plant, and the phone being connected to is much like an actuator in a process plant. So why do automation providers like Yokogawa and users like ExxonMobil stick with one-to-one wiring of signals from the field to the control system and back to the field? Would it be possible to save a large amount of wiring cost by transmitting signals over the air? I think so! What do you think?

If we can connect people correctly in a stadium like the one to the left, then we should be able to use similar technology to connect transmitters and actuators in our process plants, and save the wiring for better things, e.g. the emergency shutdown system. I even think that the cost of a device with a phone connection will not be much more than one with wired terminals; after all, the vendors save the cost of the terminals.

Thursday, August 20, 2015

The need for new employees is nothing new - but it does influence process safety

It was a pleasant surprise to see Hydrocarbon Processing address the management of organisational change (MOoC) in the article "Update: Personnel changes introduce new considerations for PSM" by R.L. Gather in the June issue. I first encountered MOoC at a Technical Steering Committee (TSC) meeting under EPSC auspices more than 10 years ago, when we had to decide which projects to prioritize for management approval of funding. One of the proposals was for the development of standard procedures for personnel changes. However, although there were several in favor, the TSC decided that this was not a technical issue, and hence the proposal was dropped from the list.

Those in favor at the time argued that MOoC was indeed relevant, because nowhere else in the organisation was the process safety impact of the changes listed in table 1 of Gather's article being considered. What do you think? Should the impact of personnel changes be analyzed for possible process safety impact prior to implementation?

I was a control engineer at a major Canadian site when two of our control engineers at another unit quit to work for another company. Their quitting resulted in me being moved almost immediately to the unit they used to work at. It was a cool move for me - in retrospect - because the unit I moved to had the operators with the most positive attitude to testing new control strategies and even suggesting improvements to existing ones. At the time my move was a result of the unit being left with just one control engineer, which was considered too thin coverage. This was back in the eighties, when calls from headhunters were almost a weekly occurrence for control engineers at this site.

A related issue is addressed in the article "Overcoming the challenges of the 'great crew change'" by tech editor M. Rhodes: the imminent retirement of the post-WWII baby boomer generation. When I think back, this was also a hot issue when I joined the industry about 35 years ago. So what has changed? I am really not sure.

Well, generations change. Many of the people with whom I started my career in the HPI still work at, or have recently retired from, the company we all worked at 35 years ago. Back then, people joined a company and worked there almost for life. Even some of the people I studied engineering with in the early seventies still work today for the same company they joined after graduation. There are naturally exceptions, but very few have had more than five jobs. From talking to my now grown-up kids, all four of whom belong to the Millennials, none of them expects to work for the same company all through their career. So today companies cannot expect people to be with them for their whole working life. That has changed!

I have also noticed that for the Millennials, work and life mix much more. They are not eight-to-five workers.

However, I think one thing has not changed. When you get your first job, many would argue that university has not prepared you properly for it. I think that was also the case 35 years ago. But suddenly we realized that university problem-solving skills could easily be converted to workplace problem-solving skills. Of course there were people to get to know and work with. But so there were in your university courses. You arrive at your first job loaded with the latest problem-solving skills from university, and within a relatively short time these skills are applied in the new corporate environment.

I think one of the misconceptions is that new graduates are filling positions of retiring employees on a one-to-one basis. That was definitely not the case when I entered the HPI, and I don't believe it is today either.

Nonetheless, as pointed out in the article, the HPI has a challenge in attracting a sufficient number of new graduates, but that in my opinion is due to the image of the HPI. Many graduates see it as a dirty and polluting industry. That was also my view until I had a chance to visit Imperial Oil's Edmonton refinery while I studied at the University of Alberta. You could walk around the refinery in a white shirt, and it wouldn't get dirty! (Of course today that behavior would be considered unsafe and not allowed.)

Mittivakkat glacier photographed August 15th, 1933 and again July 31st, 2010
So in order to attract a sufficient number of new employees, I think the HPI must change its behavior. That means taking process safety much more seriously, so we don't see headlines in the news such as after the recent explosion at ExxonMobil's California refinery or the explosion at a warehouse a few days ago. That also means taking sustainability more seriously; fortunately, recent announcements by Dow Chemical point in that direction. And it means taking climate change seriously. I think the two pictures above from Greenland say it all.

Wednesday, August 19, 2015

Cloud Computing will NOT revolutionize process safety!

I was a bit surprised reading the title of the article by J. Lucas and S. Whiteside in the June 2015 issue of Hydrocarbon Processing: "Cloud Computing: The next revolution in process safety!". The HP editor was a bit more modest on the cover of the issue, stating "PROCESS SAFETY Using 'the cloud' can improve programs and reporting".

Illustration courtesy of Wikipedia.
Let me make it clear: in my view the term "cloud computing" is a marketing gimmick created by the IT industry. It covers the fact that instead of running on servers in your own data center, your applications run on a service provider's hardware somewhere in some country. The service provider can be a company such as Google, Amazon, IBM or others, who allow you to access their hardware and dynamically change the amount you use by the day, hour or even minute. The new thing is that you can configure and access the hardware you rent through the internet using a secure channel. Some providers even call the process of configuring such systems orchestration. Once the applications have been deployed on the provider's hardware, you can grant others access to them using the same channel used to configure them.

However, that would also be possible using your own hardware in your own data center. "The cloud" in my view is just a term which indicates you have less knowledge about where your application and data are physically located. (I know the IT industry also uses concepts such as private cloud, hybrid cloud and public cloud, but that just has to do with whom you are sharing the hardware with.)

To the best of my judgement, Lucas and Whiteside are not subject matter experts in process safety. That is clear already from Table 1 in the article, with the heading "History of Process Safety Management", which starts with the 1984 MIC release in Bhopal and, according to the authors, apparently ends with the 1998 publication of the standard IEC 61508. Process safety management in my view started when ICI began using a tool called hazard and operability (HAZOP) analysis to identify safety issues in their new plant designs before the facilities were constructed, and shared this idea with the world. And probably the most significant recent event for process safety management was the publication in early 2007 of "The Report of the BP U.S. Refineries Independent Safety Review Panel" (a.k.a. the Baker Panel Report). This report has indeed created a revolution in how company boards address process safety management.

Illustration courtesy of Wikipedia.
The article by Lucas and Whiteside describes how process safety engineers started using desktop applications, such as spreadsheets, in the eighties and then moved to document management systems in the nineties. (I actually experienced the arrival of the IBM PC as a tool for control engineers in the mid-eighties, but even at that time it was a tool to run GUI applications on. Such applications could not be run on the mainframes of that time. However, the mainframe still served as a file repository for the PC.)

Lucas and Whiteside further state that the lack of standardized file naming conventions made it difficult to identify relevant files in, e.g., a document management system. I certainly agree with that. However, "the cloud" does not introduce any standardized file naming convention. It is most certainly true that cloud-based technology such as Google Apps for Work, Microsoft OneDrive and Dropbox has made it easier to share files across organizations and even highly distributed groups (I am currently involved in a commercial project which uses Dropbox for file sharing, since we are a rather small group and it was easy to set up). However, such sharing was also possible through the mainframes using character-based terminals in the nineties.

Further on, Lucas and Whiteside describe what they call safety management with live data. However, without access to the plant data historian in the DCS there will be no live data. As far as I know, such data sharing directly from the plant data historian in the DCS is still rather experimental technology. Without such live data access it is difficult to see how the cloud can provide a plant safety boost, let alone a revolution in process safety. Such a revolution would in my view need access to real-time plant data.

In my view, a key to improving process safety is to develop tools to assist the plant operators in responding to deviations, or even potential deviations, from normal operations. That is an active area of current research in which I am a bit involved. You could argue that I am hence biased.

What do you think?

Monday, May 11, 2015

Lost olefin production and other happenings you don't want

In the April issue of Hydrocarbon Processing I found two articles about olefin production. The first was titled "Top seven causes for lost olefin production", written by Claire Cagnolatti from HSB Solomon Associates. HP claims it is an upgraded version of a presentation at the AIChE T4 Topical Ethylene Producers Conference at the 2014 Spring National Meeting in New Orleans. A little googling reveals that the material was also presented at the 2015 American Fuel & Petrochemical Manufacturers International Petrochemical Conference (IPC). The link to the latter version is here.

At the headline level, the major areas of lost - I would rather call it missed - olefin production are, according to Cagnolatti: process problems; mechanical, electrical or instrumentation failures; utility supply problems; and non-operational causes. Among the latter one finds "lack of product demand". Among utility supply problems one finds power failure and steam failure as well as other utilities. And among process problems, "pyrolysis furnace decoking" is listed as a cause of downtime or lost production. At this point I start to wonder whether the author has ever been on an olefin unit for more than a brief visit.

The study of lost olefin production covers the period from 1999 to 2011. In the figure to the left, the losses are shown relative to the losses in 2011, which are arbitrarily set to 100%. Based on this figure the author states that the losses have been on a general decline since 2001. I read the figure somewhat differently: 2001 is an unexplained outlier, and the losses were on an increase until about 2009, when the high margins enjoyed by North American plants made lost production more costly. It should be noted that the study is performed by HSB Solomon Associates every two years, and the figure to the left is based on data from North American plants only.

The article also defines a major event as one involving more than 5% of total lost production in that study year. Each study except the most recent contains one, two or three such events, and no trend is evident in the number of major loss events.

Then the top seven causes are listed, as shown in the figure to the right. Unfortunately, the causes listed in this figure are not in complete agreement with the causes listed in Table 1. Pyrolysis furnace failure or availability is found to be responsible for the largest amount of lost production. That is really not surprising when one knows this includes pyrolysis furnace decoking. I find it difficult to conclude anything from this figure, except that the contribution of control system and instrumentation failures is minor.

The author goes on with a detailed analysis of the top seven causes. Unfortunately, the labelling again is not consistent with that used previously in the study, e.g. in the figure to the right or in Table 1. Neither are the factors which could, e.g., cause cooling tower constraints analyzed; that is, a cooling tower constraint is considered a root cause. However, maybe the figure does indicate why the 2001 study showed higher losses than both the previous study in 1999 and the next in 2003. Changed decoke procedures?

The study also included a linear regression analysis showing that production loss was lower when a higher percentage of maintenance was dedicated to preventive/predictive maintenance. Further work is, however, needed on this regression before anything is concluded. Still, there appears to be evidence that plants suffering major loss events had more corrective or responsive maintenance and less preventive/predictive maintenance.
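Part of the "further work" I have in mind is simply checking the slope and the goodness of fit of such a regression before trusting it. A minimal sketch with made-up numbers (these are NOT the Solomon study data, just an illustration of the check):

```python
# Ordinary least-squares fit of production loss against preventive-maintenance
# share, plus R^2. The data points below are hypothetical, for illustration only.

def linreg(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot   # R^2 in the last slot

pm_pct = [20.0, 35.0, 50.0, 65.0, 80.0]   # hypothetical preventive-maintenance share, %
loss   = [9.0, 7.5, 6.0, 5.5, 4.0]        # hypothetical relative production loss, %
slope, intercept, r2 = linreg(pm_pct, loss)
```

A negative slope with a high R^2 on real data would support the claimed relationship; a low R^2 would mean the trend is mostly wishful thinking, which is exactly my worry.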

My conclusion is that one should be careful not to see the trend one hopes to see in a data set, and that the relationship between preventive/predictive maintenance and negative events at a facility should be explored further - not just for olefin plants. Maybe here is another indirect indicator of the safety culture at a facility.

The second article was titled "How Cr compounds discolor refractory brick walls of an ethylene cracking furnace" by M. Maity from Saudi Basic Industries Corp. The picture on the left shows the refractory brick wall in the radiant section of an ethylene cracking furnace at one of Sabic's petrochemical plants, and based on the article this is something you don't want to see in your plant. It turned out that the pink material was a chromium compound, and its appearance was probably caused by the spontaneous formation of chromium oxide on the tubes at high temperatures, followed by oxidative vaporization, likely due to an increase of the tube metal temperature to values above design. Unfortunately, the article does not confirm that the chromium came from the tube metal, but just states that tube samples were analyzed for chromium concentration along the thickness to determine if the tubes were truly losing chromium, and hence useful life. Unfortunately, the answer is left hanging in the air.

Thursday, February 12, 2015

Cyber Security - Read beyond the headlines!

Lately I have been attending a number of data security events to update my knowledge of the area, and also to see how it relates to process control and process safety. Over the past two decades much process control software has moved to standard off-the-shelf commercial hardware and operating systems. This has brought with it exposure to the cyber threats which before were only a concern of the corporate IT department, and hence the people in charge of maintaining process control software now face threats to the process control systems similar to those office systems have faced for decades.

At a recent event I picked up a report from one company about IT security in 2014. On the front page was the picture to the left, which I think describes the current situation quite well. The hard sphere in the middle is the corporate IT systems or process control systems. The spikes are the defenses implemented to protect these systems against the many threats from the outside. The picture clearly indicates that the defenses are not perfect: there are places - probably many - where threats can hit the systems.

But let us take a closer look at what is actually written in a report such as the one I have read, and what it could mean for process control and process safety systems.

In the area of data loss the report comes with the following statements:

  1. In 25% of healthcare and insurance institutions examined, HIPAA-protected health information was sent outside of the organization.
  2. In 33% of financial institutions scanned, credit card information was sent outside of the organization.
My first question is: in these two situations, how often was the transfer of information part of a legitimate business transaction, e.g. transfer of information to a customer? Secondly, what was the frequency of such transfers in the institutions examined or scanned? And what does "examined" or "scanned" mean?

What does this mean for process control and process safety systems? I think it means that we should be careful with permanent electronic communication paths to and from these systems. I would be more concerned about the "to" path; however, the "from" path could give competitors information about your control strategies.

Another of the five areas discussed in the report is what is called high-risk applications. About these, the following statements are made:

  1. In 86% of organizations at least one high-risk application was used.
  2. In 85% of organizations Dropbox was found.
Again my first question is: how often was a high-risk application used? And was there a legitimate business case for using it, such as using a remote administration tool through an SSH tunnel from an employee's home in order to avoid a trip to the company during off-hours? And about Dropbox: was it found because employees used it to share private photos with co-workers? What was the frequency of Dropbox use in the organizations using it? Was there a business case for using it, e.g. it being more secure than a USB stick?

In the area of malware the following statements are made:

  1. In 84% of organizations a malicious file was downloaded.
  2. Every minute a host accesses a malicious website.
  3. Every 10 minutes a host downloads malware.
  4. 30% of hosts do not have updated software versions.
  5. 70% of organizations had at least one bot detected.
Again my first question would be about the frequency of downloads of malicious files. The frequencies under points 2 and 3 are completely meaningless without information about the number of hosts involved in the study. And who downloads malware in the middle of the night? Point 4 is rather positive, since it means that 70% of the computers in this "research" have the latest software versions installed. Similarly, it is encouraging that the bot detection in 70% of the involved organizations actually worked.

Obviously one should not be downloading any files directly to process control or process safety systems. I think the report I picked up at that data security conference is attempting to paint a very black picture of the security situation, and that simple procedures and education of employees can deal with most of the issues considered so far.

The real problem for the people attempting to make process control and process safety systems secure is the explosion in the number of pieces of unknown malware, i.e. malware that no one has seen before. From 2012 to 2013 the number of new pieces of malware more than doubled. This is malware which your defense systems do not yet have a defense against. Here I would think the best approach is to limit the number of paths into the system, and only have paths available when needed, e.g. by using SSH tunneling.
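As an illustration of what I mean by an on-demand path, standard SSH local port forwarding can open a temporary, authenticated tunnel instead of a permanent connection. The host names and ports below are hypothetical examples, not a recommendation for any particular system:

```shell
# Forward local port 15900 to the remote-admin port (here VNC, 5900) on an
# internal station, reachable only via a jump host; hostnames are made up.
ssh -L 15900:controlstation.internal:5900 admin@gateway.example.com
# The forwarded path exists only while this session is open;
# closing the session removes the path entirely.
```

The design point is that the attack surface shrinks to the SSH service on one gateway, and only during the maintenance window when someone actually needs the connection.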

Even in the area of unknown malware the report comes with statements such as these:

  1. 2.2 pieces of unknown malware hit an organization every hour.
  2. 33% of organizations downloaded at least one infected file with unknown malware.
  3. 35% of files infected with unknown malware are PDFs.
The first piece of data is useful, since we can use it to estimate how many organizations are involved in the study. The report states that in 2013, 83,000,000 pieces of unknown malware were created. That is about 227,000 each day, or about 9,500 each hour. Hence the "research" involves between 4,000 and 5,000 organizations.
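The back-of-envelope arithmetic behind that estimate is simple enough to write down, assuming the hourly creation rate is spread evenly across the organizations in the study:

```python
# Reproducing the estimate in the text: 83 million new pieces of unknown
# malware in 2013, and 2.2 hits per organization per hour (from the report).
new_malware_2013 = 83_000_000
per_day = new_malware_2013 / 365      # roughly 227,000 per day
per_hour = per_day / 24               # roughly 9,500 per hour
organizations = per_hour / 2.2        # implied study size: roughly 4,300
```

Of course this assumes every new piece of malware hits exactly one organization in the study, which is itself questionable, but it shows how little the headline numbers actually pin down.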

I find it a bit amusing that IT people have to create statements such as the ones exemplified here to get the attention of management. It is somewhat equivalent to needing fires, explosions or leaks to get management attention to process safety. I don't think that is a good road to take. What do you think?

Sunday, February 08, 2015

Functional Safety and Functional Modelling - Is there a synergy?

Functional safety is the international norm for how a single safety function is designed, implemented and maintained. Functional modelling is an AI tool for building models of engineered systems which allows one to reason about the behavior of the system. On the surface one might think the two are related, but they are not!

However, the question is whether the application of functional models during the design, operation and maintenance of process safety systems would allow deeper insight.

The IEC 61508 Standard

The IEC 61508 standard is the de facto norm for how a safety function in a new or modernized process plant is designed, tested, operated and maintained. On the website of the International Electrotechnical Commission, functional safety is explained based on the following definition of safety:

Safety is freedom from unacceptable risk of physical injury or of damage to the health of people, either directly, or indirectly as a result of damage to property or to the environment

IEC provide the following two definitions of functional safety:
  • Functional safety is the part of the overall safety that depends on a system or equipment operating correctly in response to its inputs, or
  • Functional safety is the detection of a potentially dangerous condition resulting in the activation of a protective or corrective device or mechanism to prevent hazardous events arising, or providing mitigation to reduce the consequence of the hazardous event.
They also provide the following two examples of what is functional safety:

  • The detection of smoke by sensors and the ensuing intelligent activation of a fire suppression system, and
  • The activation of a level switch in a tank containing a flammable liquid, when a potentially dangerous level has been reached, which causes a valve to be closed to prevent further liquid entering the tank and thereby preventing the liquid in the tank from overflowing
They also provide examples of what is not functional safety:
  • A fire resistant door or insulation to withstand high temperatures are measures that are passive in nature and can protect against the same hazards as are controlled by functional safety concepts but are not instances of functional safety
From this I conclude that functional safety systems, in the eyes of the standard IEC 61508 (and its process industry companion IEC 61511), are systems which perform an action to prevent something occurring in a process plant, such as a power plant, a refinery or a chemical plant, from having a negative impact on the process (the artifact). However, since passive safety features are often important, e.g. in mitigating the impact of a process safety event, I think process safety overall would benefit from tools which allow one to consider both active and passive safety functions.
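The tank level example quoted from the IEC website boils down to a one-line piece of logic, which is worth seeing spelled out: the active safety function is nothing more than a trip condition acting on an actuator. The trip point and names below are mine, not from the standard:

```python
# Toy sketch of the IEC tank-overfill example: when the level switch reports
# a dangerously high level, command the inlet valve closed. The 95% trip
# point is a hypothetical value chosen for illustration.

HIGH_LEVEL_TRIP = 95.0  # percent of tank span

def inlet_valve_command(level_pct, valve_open_requested):
    """Return the commanded inlet valve state given the measured level."""
    if level_pct >= HIGH_LEVEL_TRIP:
        return False                 # close the inlet valve to prevent overflow
    return valve_open_requested      # otherwise pass the operator's request through
```

The contrast with a passive feature like a fire-resistant door is exactly that this logic must *act* correctly on its inputs; that is what makes it functional safety, and what a functional model would let one reason about when the function deviates from its intent.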

Be careful where to read about things!

Just out of curiosity, let us look at what the Wikipedia page "Functional Safety" says about functional safety:
  • Functional Safety is the part of the overall safety of a system or piece of equipment that depends on the system or equipment operating correctly in response to its inputs, including the safe management of likely operator errors, hardware failures and environmental changes.
This definition - to me - makes very little sense. I can understand that functional safety is part of overall safety, and maybe even that it is related to the system or equipment operating correctly in response to its inputs. But I don't understand the limitation to safe management of likely operator errors. What about unlikely operator errors? Does "likely" also apply to the hardware failures and environmental changes? This shows the danger of not being very critical about which internet resources one uses in a professional capacity.

The Wikipedia website goes on to state that the objective of functional safety is:
  • Freedom from unacceptable risk of physical injury or of damage to the health of people either directly or indirectly (through damage to property or to the environment). (A)
That is simply a statement of what safety in general is all about; nothing specific to "functional" in that objective. Further, under the objective headline the following is stated:
  • Functional Safety is intrinsically end-to-end in scope in that it has to treat the function of a component or subsystem as part of the function of the whole system. This means that whilst Functional Safety standards focus on Electrical, Electronic and Programmable Systems (E/E/PS), the end-to-end scope means that in practice Functional Safety methods have to extend to the non-E/E/PS parts of the system that the E/E/PS actuates, controls or monitors.
Following these - in my view - rather muddy definitions, Wikipedia goes on to describe how functional safety is achieved. It states that functional safety is achieved through a five-point process: identifying, assessing, designing, verifying and auditing.
Unfortunately, Wikipedia does not describe these steps to a reasonable extent before the page goes on to describe how functional safety is certified and finally lists a fairly large number of standards said to be related to functional safety.

The concept of functional safety

I think it would be beneficial if there were a clearer distinction between functional safety as a concept and a standard for designing, implementing and maintaining a functional safety function, such as IEC 61508 or IEC 61511.

Functional modelling, such as Multilevel Flow Modelling (MFM), which has been used to study loss of cooling in nuclear power plants, allows the designer to reason about possible causes and consequences of a functional safety function deviating from its design intent.

The Functional Safety Life Cycle

Further searching shows that there is something called the functional safety life cycle. The figure below is borrowed from Rockwell Automation:

This figure shows more clearly the five steps involved in the establishment and maintenance of functional safety systems, or rather functions. The first step is, not unexpectedly, assessment of the risks and hazards to be dealt with. The second step is the specification of the safety requirements, and the third is the dual step of designing the safety system and verifying the design against the requirements from step 2. The fourth step is the installation or construction of the system and validation that it is working as required. Finally, the fifth step is continuous maintenance and improvement, i.e. repeating steps 1 to 4 at appropriate times. However, unfortunately, as evident from the figure, the focus is on machinery applications.
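The loop character of the fifth step can be made explicit in a small sketch. The phase names below follow the steps described above; everything else is my own illustration, not anything from the standard:

```python
# The five functional safety lifecycle phases, as in the figure.
# Phase 5 closes the loop: maintenance triggers a repeat of phases 1-4,
# which is modelled here as successive revisions of the safety function.
PHASES = [
    "hazard and risk assessment",
    "safety requirements specification",
    "design and verification",
    "installation and validation",
    "maintenance and improvement",
]

def lifecycle(revisions: int):
    """Yield (revision, phase) pairs; after phase 5 the cycle restarts at 1."""
    for rev in range(1, revisions + 1):
        for phase in PHASES:
            yield rev, phase

for rev, phase in lifecycle(2):
    print(f"revision {rev}: {phase}")
```

The point of the sketch is simply that the lifecycle is not a one-shot project plan but a repeating cycle over the life of the plant.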

Another source of information about functional safety is The 61508 Association, which is concerned with the effective achievement of compliance with the IEC 61508 functional safety standard. On the association's website a clear definition of a functional safety system may be found:
  • a functional safety system detects a potentially dangerous condition and causes corrective or preventative action to be taken
This also makes it clear that functional safety comprises the tasks involved in the identification, design, installation and maintenance of such systems. And, as is pointed out further down the page, the only difference between a process control system and a functional safety system is the reference to danger in the above definition.

Back on the IEC 61508 website it is stated about functional safety that: "It is fundamental to the enabling of complex technology used for safety-related systems. It provides the assurance that the safety-related systems will offer the necessary risk reduction required to achieve safety for the equipment."

So essentially we have a standard way of establishing safety functions, such as the one described at the start of this page. The standard is simply a norm for how things should be done in order to be in compliance. But is it enough to be in compliance?

The IEC 61508 is the international standard for electrical, electronic and programmable electronic safety related systems. It sets out the requirements for ensuring that systems are designed, implemented, operated and maintained to provide the required safety integrity level (SIL). Four SILs are defined according to the risks involved in the system application, with SIL4 being used to protect against the highest risks. The standard specifies a process that can be followed by all links in the supply chain so that information about the system can be communicated using common terminology and system parameters.

SILs are a way of defining how much extra risk reduction the functional safety system is to provide compared with the base process system, including the normal control functions, as seen here:

The above table is from the website of The 61508 Association. The IEC 61508 standard, whose overall title is "Functional safety of electrical/electronic/programmable electronic safety-related systems", has seven parts:
  • General requirements, 
  • Requirements for E/E/PE safety related systems, 
  • Software requirements, 
  • Definitions and abbreviations, 
  • Examples of methods for the determination of safety integrity levels, 
  • Guidelines on the application of IEC 61508-2 and IEC 61508-3, and 
  • Overview of techniques and measures.
The standard is the basis for a number of industry specific standards, such as
  • IEC 61511 Process industries
  • IEC 61513 Nuclear power plants
  • IEC 62061 Machinery sector
It should be noted that the standard also includes guidelines on the competence of those involved in the safety lifecycle and on the management of this lifecycle. I wonder if these guidelines are a consequence of the flatter organisational structures now used, in which management is no longer able to judge the competencies of the people they employ without specific field-related help. Similarly, the general duties of management have to be supplemented with guidelines specific to the management of the safety lifecycle. What has happened to the general skills of managers?

An interesting reference in this connection is The Safety Lifecycle Workbook. A cursory reading indicates that there is more focus on documenting that you have done all the work according to a specific standard or norm than on ensuring that management and operation are continuously improved from day to day in an ongoing learning process. The focus appears to be keeping the lawyers at bay!

Each functional safety system appears to be designed to protect against a particular initiating event, in order to avoid one or more possible consequences of that event. Such an approach could miss complex process safety events, such as large deviations in the material or energy balances.


To me it seems very clear that the functional in functional safety, functional safety competency, functional safety system and functional safety management have unfortunately very little relation to the subject of functional modelling in general and multilevel flow modelling in particular.

However, I am quite certain that the safety lifecycle would benefit from the use of functional modelling, such as multilevel flow modelling (MFM), during design, operation and maintenance. The functional models would allow qualitative simulations to uncover why a goal is missed and what the possible consequences may be.

Functional models, such as MFM, would in my opinion allow defense-in-depth strategies to be exploited in a systematic way.

References for further reading:

  1. IEC Functional Safety website. This reference has clear definitions and FAQ pages for both the current and the previous edition of the standard.
  2. Wikipedia: Functional Safety website. This reference is poorly written and not structured for a person unfamiliar with the topic.
  3. B. Stone (2013): “How to navigate the ISO Functional Safety Standards”, The Journal, Rockwell Automation, September. Online publication. This reference is focused on machinery safety, which has its own series of standards.
  4. The 61508 Association website
  5. C. Miller and J.M. Salazar (2010): “The Safety Lifecycle Workbook”, Emerson Process Management. This reference gives a good picture of the huge amount of documentation needed during the safety lifecycle steps.

Wednesday, January 14, 2015

Are we managing the hazards of hydrogen sulfide at the right place?

In the December issue of Hydrocarbon Processing there was an article titled "Manage hydrogen sulfide hazards with chemical scavengers". The article describes quite a number of chemicals for reacting with hydrogen sulfide to create less toxic compounds. The compounds which react with hydrogen sulfide are called scavengers. The idea is to reduce the concentration of hydrogen sulfide to a level where a release is less serious.

From the article it is clear that the place in the chain from upstream exploration to downstream products at which the hydrogen sulfide is removed varies quite a bit. It can be done in the bitumen, the crude oil, the naphtha, the gas oil or the fuel oil. I would of course like to see the hydrogen sulfide problem managed or solved as close to the well as possible, in order to reduce or eliminate the risk of exposure during subsequent downstream processing. However, the article seems to indicate that this is not always possible or desirable, since the chemical scavengers used sometimes make downstream processing more difficult.

With the many smaller shale oil wells coming on stream, the hydrogen sulfide problem has increased. There are two reasons for this. One is that the sulfur content from these wells, according to some sources, appears to increase over the time the well has been in production. Another is that the crude from these wells is often not transported to refining by pipeline, but by rail car. There are probably many reasons for this, but it appears that a contributing factor has been the speed at which this type of production has increased.

Even though significant improvements in rail car construction have been made over the last decades, so that it is no longer unusual to see a rail car flipped over without a single drop being released, the statistics appear to indicate that there is a higher risk of a release during rail car transport than during pipeline transport. If transport of crude oil by rail car continues to increase, and hence increases the risk of exposure for people who live along the transport corridors, then the oil industry must improve the treatment for hydrogen sulfide removal before transport.

If industry does not act, we will likely see more events like the one in Quebec, and the result will be a public demand that the authorities take action. What do you think?