Tuesday, February 25, 2014

Have your online analyzer developments come far enough?

On ControlGlobal, Paul Studebaker has an editorial about the last 25 years of development in process analyzers. The editorial is accompanied by a timeline of the key developments over that period. However, there are at least two developments that I find missing from the list.

The first of these probably goes further back than 25 years: it was when we learned not to tweak the analyzer settings each time we ran a standard test through them. Overadjustment of online analyzers was probably a significant cause of extra process variation in the late seventies and early eighties. Each week a technician would take an analyzer offline, run a standard sample through it, and adjust the analyzer settings based on the result.
In the eighties we learned the basics of quality control and started adjusting the analyzers only when the result from the standard sample fell outside the control limits on our control chart. Thank you, Dr. W. Edwards Deming! Read about the basics of statistical process control (SPC), including control charts, on Wikipedia.
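
As a minimal sketch of that control-chart rule - nothing from Studebaker's editorial, just the idea in Python with made-up standard-sample readings - you establish limits from a stable period and only touch the analyzer when a check falls outside them:

```python
# Sketch of a Shewhart-style individuals chart for analyzer checks.
# All numbers are invented for illustration.
from statistics import mean, stdev

# Weekly standard-sample results collected while the analyzer was known to be stable
history = [10.02, 9.97, 10.05, 9.99, 10.01, 10.03, 9.96, 10.00, 10.04, 9.98]

center = mean(history)
sigma = stdev(history)
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit

def needs_adjustment(result: float) -> bool:
    """Adjust the analyzer only if the standard-sample result is outside the limits."""
    return result < lcl or result > ucl

for check in (10.02, 9.95, 10.31):
    action = "adjust" if needs_adjustment(check) else "leave alone"
    print(f"standard sample = {check:.2f} -> {action} (limits {lcl:.2f}..{ucl:.2f})")
```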

The second happened after I switched from industry to academia about 25 years ago. My former employer in Canada eliminated the quality control laboratory in their chemical plant. No, they did not eliminate quality control - only the lab. All necessary analyses were converted to online analyzers, and in the few cases where this was not possible, a small lab was established close to the control room to perform the necessary tests. The reasons for the change were many, but a key one was the delays involved in taking a sample, getting it to the lab, getting it analyzed, and getting the result back to the control room. Each of these steps was also a source of error. The result was that the process operators did not make any process adjustments based on the quality control measurements - neither when it was necessary, nor when it was not. The conversion of the quality control laboratory to online analyzers had many positive effects. For example, many impurities that had been measured on a single sample every 24 hours were now measured by online analyzers every 2, 4 or 8 hours. This naturally increased the operators' awareness of these measurements.

So have you also moved your quality control laboratory online? If not, I think you have not gone far enough!

Monday, February 24, 2014

Trends in incidents in February 2014

I have done a highly unscientific survey of spills, fires, explosions etc. at fixed chemical facilities during the first 3 weeks of February this year. For this I have primarily used an RSS feed called "Recent Chemical Incidents at Fixed Facilities" created by Meltwater News. This RSS feed is available on the CSB website, and may also be followed on Feedly.
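
For the curious, here is roughly how such a feed can be skimmed programmatically - a sketch assuming the third-party feedparser package, with a placeholder URL rather than the actual Meltwater/CSB feed address:

```python
# Rough sketch of pulling incident headlines from an RSS feed.
# The URL below is a placeholder, not the real feed address.
import feedparser

FEED_URL = "https://example.com/recent-chemical-incidents.rss"  # placeholder

feed = feedparser.parse(FEED_URL)

keywords = ("spill", "fire", "explosion", "blowout", "leak")
for entry in feed.entries:
    title = entry.get("title", "")
    if any(word in title.lower() for word in keywords):
        print(f"{entry.get('published', 'unknown date')}: {title}")
        print(f"  {entry.get('link', '')}")
```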

I have found 25 articles about events that I would characterize as process safety incidents. Unfortunately all of these are about events in the USA, so the picture is probably biased. Not unexpectedly, events are first reported by local news media, and some days later they are picked up by NGOs such as Nation of Change or Think Progress.

However, the 25 articles cover explosions, fires, spills and blowouts in 12 different states. On the positive side, only a single fatality is mentioned across these 25 stories. Spills are involved in more stories than any other type of event, several of them concerning coal slurry from old - and in some cases already shut down - facilities. I guess industry and society have not learned the lessons from Love Canal.

My greatest concern, however, is the failure of a blowout preventer at a fracking well in North Dakota. In Denmark we are currently discussing whether to allow preliminary exploration for oil and gas using this new technology, and the opposition to this possible new activity is quite well organized here. So any fracking incident anywhere else will not facilitate development here. And the failure of a blowout preventer brings back memories of BP's Deepwater Horizon.

Last year's incident in Quebec was a wake-up call for the industry involved in the transportation of crude oil by rail. Nonetheless, the NGOs report that more crude oil was spilled in 2013 than in the previous four years. Does this mean that transportation safety has decreased? Not necessarily! It could be that both the number of trains and the number of rail cars carrying crude oil increased significantly in 2013 compared to the previous years. Both the transportation sector and those who own the crude being transported owe it to the public to come out with more information about rail transportation of crude oil.

Accidents or Incidents?

Many people use the terms incidents and accidents interchangeably. This, in my view, is rather unfortunate. In my view, incidents are undesired events that could have been prevented. For example, a fire in a pot of oil in the kitchen can be prevented - or at least its likelihood reduced - by using a heating source that does not involve open flames. Most chemical engineers learned this in their first laboratory course in organic chemistry. Accidents cannot be prevented. For example, you cannot prevent another driver from running a red light and crashing into your car. However, you can reduce the consequences by using the techniques of defensive driving.

I get quite upset when officials are very quick to label an event an accident. A recent fire at a scrap metal plant in Iowa is a very good example. The scrap metal fire at Rich Metals on February 11th has been labelled an accident because, according to the Blue Grass, Iowa police chief, "a worker was grinding metal when a spark or hot metal ignited a pile of metal turnings covered with oil". My immediate question upon reading this story was: why was a pile of oil-covered metal turnings left close to a potentially spark-producing process?

So the so-called accident could have been prevented by placing the pile of oil-covered metal turnings further away from the potentially spark-producing grinding process. That makes the event preventable, and hence an incident, NOT an accident.

So it appears clear that housekeeping at Rich Metals could be improved by a proper investigation of this incident. But is there anything else which could be improved by an incident investigation? What about worker training? Did the worker know about the dangers of the sparks produced by the grinding process? Was the worker trained to evaluate the safety of the work area prior to starting the work?

In Denmark employers are required by EU law to perform a workplace safety assessment (Danish: arbejdspladsvurdering) before any work is performed. This assessment is to ensure that the work can be carried out with minimal harm to the employee. The assessment can be carried out by the employee who will perform the job, by another company employee, or even by an external consultant. The assessment must be documented.

So let us call every undesired event in our plants an incident or process safety incident until a proper incident investigation confirms that it really is an accident.

Even if lightning strikes - a so-called act of God - as it did on Tank 11 at Sunoco's Sarnia Refinery one summer night in 1996, it is still possible to learn from the event by using proper accident investigation techniques.

Thursday, February 06, 2014

Cloud - now much clearer!

Yesterday I attended an HP Software event here in Denmark titled "HP Discover Brush Up" at the Danish HP headquarters in Allerød. You can watch videos from HP Discover in Barcelona last December here. It was a rewarding afternoon. The country manager for Sweden and Denmark, Lene Skov, welcomed us before Rolf - unfortunately I have forgotten his last name - gave us HP's version of current trends in IT, which they call "The New Style of IT". This is all about mobility, security, cloud and big data, and Rolf presented a vision for the enterprise of 2020. A significant part of this was naturally HP's big data platform HAVEn. HAVEn, just like other big data platforms - except maybe Tableau - is founded on the open source software Hadoop.

After the keynote lecture there were 4 tracks. The one I attended was cloud and automation. A key new product offering was apparently Cloud Service Automation (CSA), which allows you to configure and deploy a system including networking, storage etc. The system can be deployed in your own cloud, in HP Cloud, or with a number of other providers, e.g. Amazon or Dell, with the HP Marketplace Portal functioning as a broker. HP Cloud provides a 90-day free trial - really nice for small independent consultants who see the benefits of deploying in the cloud instead of on premise.

The HP CSA is based on TOSCA, which is an open standard for defining and describing cloud service offerings. This makes whatever you define using CSA transferable among the different service providers. If you want to get your feet wet without spending any money and learn more about the technology underneath the elegant HP portal, you could download the latest openSUSE 13.1, which includes access to an OpenStack implementation including orchestration tools.
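
I have not worked with CSA's own templates, but the OpenStack orchestration tool (Heat) is driven by declarative templates in much the same spirit. Below is a small sketch - the image, flavor and key names are placeholders for whatever your own installation provides - that builds a minimal HOT template in Python and writes it out for Heat's command line client:

```python
# Sketch only: generate a minimal OpenStack Heat (HOT) template describing one
# server. Image, flavor and keypair names are placeholders, not real resources.
import yaml  # PyYAML

template = {
    "heat_template_version": "2013-05-23",
    "description": "Single test server, declared rather than clicked together",
    "resources": {
        "test_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "openSUSE-13.1",  # placeholder image name
                "flavor": "m1.small",      # placeholder flavor
                "key_name": "my-keypair",  # placeholder keypair
            },
        },
    },
}

with open("test_stack.yaml", "w") as f:
    yaml.safe_dump(template, f, default_flow_style=False)

# The stack can then be launched with Heat's client, roughly:
#   heat stack-create test-stack -f test_stack.yaml
```

The point is the same as with CSA: the service is described once, declaratively, and the orchestration engine does the deployment.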

Orchestration is something I heard about some years ago, before everyone started talking about cloud. At the time Novell had a product called ZENworks Orchestrator. It was about virtual machine definition and deployment, and you can read about it here. Today the offering has developed into PlateSpin Orchestrate.

When I think back on IT hardware development over the past 40 years, it appears we have come full circle. When I studied at the Technical University of Denmark in the early 70's, our main computing facility was called NEUCC, for Northern Europe University Computing Center. It started with some IBM 7000-series systems and later evolved to the 360's and 370's. You delivered your stack of punch cards to the machine room and picked up the printed output some hours later. Later a remote terminal room was created, where you had your card deck read in one end of the room and picked up the output at the other end some minutes later. In the late 70's, during my studies at the University of Alberta, we accessed the Amdahl mainframe through remote terminals and DECwriters, but still had to pick up the output near the actual machines. That is, we needed to know where the machines were!

In the following decade the mainframe was declared dead many times, especially by IBM's competitors. However, mainframe technology is still with us and has advanced, so today the systems can be repaired and expanded without outage. And thanks to virtualization technology you can deploy a new OS instance within a few minutes. Then around the turn of the millennium we started seeing something new: so-called blade servers. At the same time IBM started running Linux on their mainframes. In the beginning it was a few hundred virtual Linux instances on a single System z, but today it is more than 60,000 virtual Linux instances on a single System z without extender.

During the past 5-6 years the major buzzword has been "cloud computing". I think it started with Sun saying "the network is the computer", and then it became less and less relevant where the computing power was, compared to how much of it you had access to. Today all the major players on the market offer you the ability to buy a cloud computer. However, there is very little information on what hardware is actually involved here. My guess would be that it is simply a container with some blade servers, some storage and some networking hardware, which can be configured over the internet to perform the services you need. So what is the difference between this private cloud and the IBM mainframe? I would say the supplier, and possibly that the mainframe has more compute power per square foot than most cloud offerings. But basically both the mainframe and the cloud computer are just a large amount of computing power!

The other big current question is public versus private cloud, i.e. should you own the hardware and have it located on your premises, or should you rent the compute power when needed from, for example, Amazon EC2 or Google Compute Engine? For many uses, e.g. university teaching and often also research, it makes little sense to have the compute power on premise, since there will be many hours of the day when it is not needed. It would make more sense to access e.g. Google Compute Engine when the compute power is needed. That same compute power could then benefit European universities when American students are sleeping, and so on. Just like we shared mainframes in the past! What do you think?

One final note! Currently I am migrating our scanned mail to Google Drive. This means I will no longer physically know where a given document is located. However, the benefit is that I can search for a document on Drive faster than I can currently click my way to it through the folder structure on my hard disk. I will also be able to access my stuff at any location with a Wi-Fi connection. I can easily download a document to my phone or tablet if I need it while offline, e.g. when travelling by plane. I decided to go all out after reading an interview with the Google manager in charge of security and after testing the connectivity at my home office location - about 15 meters and several internal walls from my access point. An additional benefit is not losing my data in case of a break-in at my home - probably more likely than a 24-hour Google outage. Only one thing concerns me: I am much older than Google!