Tuesday, February 14, 2012

Is process safety all about execution?

Last evening I attended a presentation at the IDA - the Danish Society of Engineers - about the Deepwater Horizon. The presentation was given by Graham Bennett from DNV. Unfortunately, the full report on DNV's investigation is no longer publicly available. Yesterday's presentation ended with a description of the ongoing efforts by authorities in both the US and the EU to create new regulations which aim to prevent another similar event. That has been the modus operandi of authorities since Bhopal, since Piper Alpha, since Seveso. But does it work?
One could argue that the regulations have worked, since the world has not seen another Bhopal - although during the flooding in central Europe in the summer of 2002 we came very close to such an event during a major toxic gas release. In his presentation Graham Bennett also pointed to similarities in the lack of effective emergency management and communication on Piper Alpha and Deepwater Horizon. Both were supposedly designed to survive the type of events they experienced. They did not, due to a lack of effective decision-making during the initial phases of the emergency.
In his presentation Graham Bennett also mentioned that in 2006 ExxonMobil was drilling a deepwater well not far from the Macondo formation. That was Blackbeard, which was abandoned at a depth of more than 32,000 feet because Exxon drillers felt it was not safe to continue after the rig experienced pressure shocks. At the time the decision to abandon the well escalated to the level of the CEO within hours. The CEO had the guts to make the decision to abandon the well - at a loss of almost 200 million dollars - rather than risk employee lives and company image.
I think this is the main difference between companies like ExxonMobil and Dow Chemical and other major players in the industry: effective means of escalating a decision to the top level of the company, which many others lack. I base this on my initial chats about OIMS in the 1990s. I learned about OIMS during an afternoon patio conversation with a friend from my days at the University of Alberta. He explained the basic ideas to me, and through former colleagues at Imperial Oil I got in contact with people who were able to explain OIMS to me both in a research laboratory environment and in the setting of a refinery and chemical plant. At one of these meetings, after talking about OIMS for several hours, I was told that the OIMS manual had been declared company proprietary, so I could not have a copy even for teaching purposes, but that BP had a very similar system, and their manual was freely available on that company's website. Hence my conclusion: the difference is not in the operations management systems - or whatever each company calls them - as such, but in how the execution works in day-to-day operational decision-making throughout all levels of the organisation. It's about the connection between the ground floor and the top floor!

Friday, February 03, 2012

Are you paying attention to Oracle?

Do you remember that Oracle a few years ago acquired Sun Microsystems? What has happened since, and should the process control community pay attention? These questions popped into my head when I attended "The Extreme Performance Tour" hosted by Oracle at the Tycho Brahe Planetarium here in Copenhagen a few days ago.
A very short time after the takeover was legally completed, Oracle announced the Exadata - the first end-to-end engineered system to run the Oracle database. However, in my view Oracle was just playing catch-up. For years you have been able to buy IBM mainframes custom-engineered to run the DB2 database. So the news is that there is now competition in this very special market of large databases with easy access to the data from anywhere. Since then Oracle has also engineered the Exalogic, probably to compete with WebSphere. So here too competition has increased.
The process control community is increasing its use of simulators and modelling. Often these are systems custom-made for a particular plant. At least one large user of process control systems, such as those from ABB or Honeywell, decided several years ago that it is better off adding hard disk capacity for history data than spending any money on consolidating these data. However, the history data usually still reside on the process control network. This location of the process history data to some extent limits both access to the data and use of the data.
If the process history data resided on a large corporate computer, such as an IBM mainframe or an Oracle Exadata, then controlled access to the data both in-house and for collaborators in engineering companies and universities would be much easier. Even though process control computers today are standard off-the-shelf hardware running standard off-the-shelf software, many users for good reason limit external access to the process control network.
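As a minimal sketch of what such controlled access could look like - with an in-memory SQLite database standing in for the corporate machine, and the table and tag names purely hypothetical - a collaborator with read-only access might pull a time slice of history data like this:

```python
# Sketch: querying replicated process history data on a central
# corporate database. An in-memory SQLite database stands in for an
# Exadata or mainframe; table and tag names are hypothetical.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE process_history (
           timestamp TEXT,   -- sample time (ISO 8601)
           tag       TEXT,   -- instrument tag, e.g. 'FIC-101.PV'
           value     REAL    -- measured value in engineering units
       )"""
)
conn.executemany(
    "INSERT INTO process_history VALUES (?, ?, ?)",
    [("2012-01-01T00:00:00", "FIC-101.PV", 412.7),
     ("2012-01-01T00:01:00", "FIC-101.PV", 415.2)],
)

# Pull one tag's history as a time-indexed series for analysis:
df = pd.read_sql_query(
    "SELECT timestamp, value FROM process_history "
    "WHERE tag = 'FIC-101.PV' ORDER BY timestamp",
    conn, parse_dates=["timestamp"], index_col="timestamp",
)
print(df.describe())
```

The point is not the particular tools, but that once the data sit on a corporate database, access can be granted per user and per query instead of opening up the process control network.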
But how could such large process history databases be used? They could be used, for example, to compare refinery performance over the last two turnaround cycles (a sketch of such a comparison follows below). Retail companies have for many years used so-called business intelligence software to compare sales during the last two Easter periods. Such analysis of process data could reveal periods of improved or degraded performance. Another possible use of high-frequency process history data is the development of process models that are fitted to the actual history data from the plant. So I think the process control community should pay attention to Oracle - or its competitors!
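As a hedged illustration of the turnaround comparison - the dates, the tag, and the synthetic throughput numbers below are all made up, standing in for real plant data - such an analysis could be as simple as:

```python
# Sketch: comparing unit performance over two turnaround cycles,
# in the spirit of retail "Easter vs. Easter" comparisons.
# Dates and the random throughput data are hypothetical stand-ins.
import numpy as np
import pandas as pd

rng = pd.date_range("2008-01-01", "2011-12-31", freq="D")
df = pd.DataFrame({"throughput": 100 + 5 * np.random.randn(len(rng))},
                  index=rng)

# Hypothetical turnaround-to-turnaround cycles:
cycles = {"cycle 1": df["2008-03-01":"2010-02-28"],
          "cycle 2": df["2010-03-01":"2011-12-31"]}

for name, cycle in cycles.items():
    monthly = cycle["throughput"].resample("M").mean()
    print(f"{name}: mean throughput = {monthly.mean():.1f}, "
          f"worst month = {monthly.min():.1f}")
```

A real analysis would of course pull the frame from the history database rather than generate it, but the comparison itself is no harder than this.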
Some large companies may already have the necessary data processing capacity in-house to explore the information hidden in their process history data. So what is stopping you?