Some control loops cannot be improved by tuning. In fact, you might be wasting your time, or you might even make matters worse by tuning.
This article discusses four types of PID control loops that you absolutely should not tune.
Why NOT to Tune
It may seem odd that a company named ExperTUNE is giving advice on when
not to tune a controller. But there are many good reasons to avoid controller tuning:
1. Tuning will not make the problem any better.
2. Tuning will make the problem worse.
3. You might completely miss the real problems.
Don’t Tune If:
Case 1: The Loop is at a Limit.
Case 2: The Instrument is Failing.
Case 3: The Loop is Already Well-Tuned!
Case 4: The Root Cause is Somewhere Else.
Case 1: The Loop is at a Limit
If the controller is already at a limit, then there is no point in tuning the controller.
So, what do we mean by “at a limit”? Usually, it is one of the following:
· The Control Valve is 100% open
· The Control Valve is 100% closed
· The Variable-Speed Drive is at max speed
· The heater is on (or off) 100% of the time
· The control output is limited by a soft limit
· The Process Variable is at 0% or 100% of its span
In other words, the controller has no ability to control, no matter what tuning you
put in place.
What to Do Instead
Find out why the loop is limited. Once you figure that out, try to get the controller back into a normal controllable range.
Finding Limited Loops
Controller monitoring software watches the condition of all your controllers. By watching the control output signals, it can instantly find all controllers that are at a limit.
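As a sketch of the idea, limited loops can be flagged simply by counting how often each control output sits at its limit. The loop names, 0–100% signal range, and 95% threshold below are illustrative assumptions, not values from any particular product:

```python
def find_limited_loops(outputs, lo=0.0, hi=100.0, threshold=0.95):
    """Flag controllers whose output (in %) sits at a limit most of the time.

    outputs: dict mapping loop name -> list of controller-output samples.
    Returns the names of loops at a limit for more than `threshold`
    of the observed samples.
    """
    limited = []
    for name, samples in outputs.items():
        at_limit = sum(1 for co in samples if co <= lo or co >= hi)
        if samples and at_limit / len(samples) > threshold:
            limited.append(name)
    return limited

# A valve pinned at 100% open is flagged; a healthy loop is not.
data = {
    "FC-101": [100.0] * 50,                   # wide open the whole time
    "TC-205": [43.2, 51.7, 48.9, 55.0] * 25,  # normal controllable range
}
print(find_limited_loops(data))  # ['FC-101']
```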
There’s another benefit to this exercise: Loops at a limit are often bottlenecks to
production. When you resolve these problems, you may also be able to
increase production rate.
Case 2: The Instrument is Failing
Instruments fail in many different ways, and each failure mode affects the controller in a different way. No amount of tuning can compensate for a measurement the controller cannot trust.
What to Do Instead
Fix the instrument.
Case 3: The Loop is Already Well-Tuned!
While this seems obvious, it isn't really so obvious. Do you know how to tell if a loop is well-tuned? There are several reasonable criteria, and any one of them might be correct, but you can't have all of them at once! The answer depends on the needs of your process.
So, you have to start by asking the question “What does good tuning look like?”,
then look at the current performance of the loop in question. If a loop is already well-tuned, then any changes will make it worse!
What to Do Instead
Tuning is always a balance between fast response and “robustness”. Robustness is a measure of the loop’s performance under changing process conditions. If the tuning is not robust, the loop could become unstable.
In most process plants, it is wise to build in a safety factor to ensure robustness under a range of conditions.
If a loop is well tuned, with a good balance between performance and
robustness, leave it alone!
Finding Loops with Poor Robustness
Robust tuning trades speed for stability: a robustly tuned loop responds somewhat more slowly, but it stays stable.
A safety factor should be set to keep the loop stable, even if there is a shift in process conditions.
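One common way to build in such a safety factor is to detune the controller: divide the gain and multiply the integral time by the factor. A minimal sketch, where the factor of 2 is only an example and not a recommendation for any particular loop:

```python
def detune(kc, ti, safety_factor=2.0):
    """Apply a robustness safety factor to PI settings: a lower gain and a
    longer integral time make the loop respond more slowly but keep it
    stable if the process gain or dead time shifts."""
    return kc / safety_factor, ti * safety_factor

# Aggressive settings Kc = 2.0, Ti = 10 s become Kc = 1.0, Ti = 20 s.
print(detune(2.0, 10.0))  # (1.0, 20.0)
```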
Case 4: The Root Cause is Somewhere Else
You can waste a lot of time trying to fix the wrong problem. Because PID control is a form of feedback control, you simply cannot “tune out” all the upsets that come to your controller. Any upset that comes along will upset your controller,
no matter what the tuning!
Process plants are complex places, full of dynamic interactions that spread from the source, throughout the plant. What you need is a way to find the true root cause of process upsets, so that you can eliminate the problem at its source.
What to Do Instead
Find the true root cause of the variation, and eliminate the problem at its source.
Finding the Root Cause
Using cross-correlation analysis on your existing historical data, the Process Interaction Map can pinpoint the upstream root cause of the problem.
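The idea behind such cross-correlation analysis can be sketched in a few lines of NumPy: if a candidate signal correlates strongly with the upset loop at a positive lag, the candidate leads the upset and is a plausible upstream source. The signals and five-sample delay below are synthetic:

```python
import numpy as np

def likely_upstream_lag(candidate, victim):
    """Return the lag (in samples) at which the two signals correlate most
    strongly. A positive lag means the candidate leads the victim, i.e. it
    is a plausible upstream source of the upset."""
    a = (candidate - candidate.mean()) / (candidate.std() * len(candidate))
    b = (victim - victim.mean()) / victim.std()
    xcorr = np.correlate(b, a, mode="full")   # lags -(n-1) .. n-1
    lags = np.arange(-len(a) + 1, len(a))
    return int(lags[np.argmax(np.abs(xcorr))])

# Synthetic example: the "victim" PV is the candidate signal delayed 5 samples.
t = np.arange(200)
candidate = np.sin(0.1 * t)
victim = np.concatenate([np.zeros(5), candidate[:-5]])
print(likely_upstream_lag(candidate, victim))  # 5
```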
There are four types of PID control loops to avoid tuning. These are:
1. Loops at a Limit
2. Loops with Failed Instruments
3. Loops with Good Tuning
4. Loops with Another Root Cause
In each case, there are alternate actions that can be taken to improve overall performance.
The performance of industrial control loops directly affects the stability, robustness, and safety of the process.
Optimizing the performance of regulatory controls requires knowledge of the process and control systems, the right tools for the job, and importantly, a systematic approach to follow.
This article outlines a systematic, seven-step approach for improving control system performance. It also describes the key points to follow for ensuring each step is implemented successfully.
Attempts to improve the performance of a control system are commonly done in an ad-hoc fashion, mostly by adjusting controller settings of loops which are cycling or have poor response. Rarely is a systematic, holistic approach followed, and rarely are the results optimal. Important factors, such as control valve performance, process interactions, control strategy design, and process capability, are often overlooked while these could be the very problems preventing the loop from reaching the desired level of performance.
The basic work process associated with optimizing a regulatory control layer can be condensed down to seven effective steps:
1. Develop and adopt a Process Control Philosophy.
2. Assess current performance.
3. Diagnose root cause of poor performance.
4. Fix poorly performing hardware.
5. Optimize controller performance.
6. Implement advanced control strategies.
7. Maintain performance over the long term.
STEP 1: Develop and adopt a Process Control Philosophy
A Process Control Philosophy is necessary to provide guidelines and standards for all aspects of control loop optimization and to ensure the effort will be effective and the results consistent on a site-wide basis.
The philosophy is a written document, developed as a collaborative effort between control engineers and the operations and maintenance groups.
At a high-level, the document should cover the
following aspects at a minimum:
• Personnel roles and responsibilities, as well as the skills and training required.
• Documentation and software tools to be used.
• Guidelines for determining loop importance and setting control loop performance criteria based on control loop objectives.
• Standards for selecting different controller configurations, options, and tuning rules based on loop types and performance objectives
• Guidelines for setting scan interval, using derivative, filtering, cascade, and feed-forward control.
• Implementation strategy and a project plan.
• Reporting requirements and key performance indicators to be used.
• Ongoing maintenance guidelines for sustaining improvement gains.
STEP 2: Assess current loop performance
Computer software should be used to automate the assessment and reporting of loop performance. As the first part of this step, such software should be acquired and installed. The software should then be configured with a list of all control loops and several data collection and assessment parameters for each loop. Note some software tools can do almost all the configuration automatically by reading the DCS database, while others require a significant amount of configuration effort.
Although all control loops should be assessed by the software, human effort should be focused on improving the most important control loops of which the performance is the worst. To manage this effectively, control loops should be classified according to their relative importance, and performance criteria should be assigned to the loops of high importance (as defined in the philosophy document). Again, some software tools can do this automatically, requiring very little human effort.
Controller output, set-point, and process measurement data should be collected from the DCS using OPC DA. Data should be sampled fast enough to capture loop dynamics. A rate between 1/25th and 1/50th of the loop settling time normally provides good results. Collecting data from process historians is not recommended, because the data is seldom sampled fast enough to analyze the performance of fast control loops, and the data is often compressed which negatively affects the accuracy of the performance analyses.
Enough data should be collected to be representative of the loop’s performance and to support complex analyses such as cycle detection and settling time calculations. A time span of 25 to 50 times the loop settling time should be sufficient. The software should validate the data and detect problems like spikes, static signals, missing or bad data values, and outliers.
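The rules of thumb above translate directly into a data-collection plan. A minimal sketch, assuming the faster 1/50th sampling rule and the longer 50-settling-times span are used:

```python
def collection_plan(settling_time_s, rate_divisor=50, span_multiple=50):
    """Turn the sampling rules of thumb into concrete numbers:
    sample at 1/50th of the settling time and collect 50 settling
    times of data (both at the conservative end of the ranges)."""
    sample_interval = settling_time_s / rate_divisor
    duration = settling_time_s * span_multiple
    n_samples = round(duration / sample_interval)
    return sample_interval, duration, n_samples

# A flow loop that settles in 60 s: sample every 1.2 s, collect 3000 s of data.
print(collection_plan(60.0))  # (1.2, 3000.0, 2500)
```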
As a minimum, the controller's response and the loop's tendency to cycle should be analyzed. Analyzing controller response requires prior knowledge of the process dead time, and compares loop performance against minimum variance control. Process dead time is not always a readily available number, and minimum variance control is a theoretical and very aggressive performance benchmark.
Once the performance of the loops has been analyzed, it should be compared against the specified performance criteria as outlined in the philosophy document. The software should combine individual performance comparisons into a single composite Control Loop Performance Index (CLPI). The loops should then be ranked according to the individual CLPI and filtered according to loop importance, thereby producing a report of worst performing, high importance loops.
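A composite index and ranking might look like the following sketch. A weighted average is one simple way to combine scores; commercial tools may use different formulas, and the loop names and weights here are illustrative:

```python
def clpi(scores, weights=None):
    """Combine individual performance scores (each 0..1, where 1 fully
    meets its criterion) into one composite Control Loop Performance
    Index via a weighted average."""
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def rank_worst_first(loop_indexes):
    """loop_indexes: dict of loop name -> CLPI. Worst performers first."""
    return sorted(loop_indexes, key=loop_indexes.get)

indexes = {"FC-101": 0.92, "TC-205": 0.31, "LC-310": 0.67}
print(rank_worst_first(indexes))  # ['TC-205', 'LC-310', 'FC-101']
```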
STEP 3: Diagnose root cause of poor control performance
It is important to establish if the poor performance is due to controller tuning, hardware, or other factors, so the appropriate corrective action can be taken. The list of poorly performing control loops should be inspected and the root cause or causes of poor performance be determined. The assessment software potentially provides diagnoses automatically from process data already captured, or single-loop diagnostic and tuning software can be used by analyzing data from specific diagnostic tests done on the process.
Typical diagnoses can include hardware problems with final control elements (stiction, hysteresis, nonlinearity) and instrumentation (static sensors, spikes, noise), or tuning problems (aggressive tuning or sluggish tuning).
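To see why stiction degrades control, consider a crude stick-slip sketch of a valve; the 2% stiction band is an illustrative assumption. The stem swallows small demanded moves entirely and then jumps, which is one reason sticky valves make loops cycle no matter how they are tuned:

```python
def sticky_valve(demanded_positions, stiction=2.0):
    """Crude stick-slip model: the stem only moves once the demanded
    position differs from the actual position by more than the stiction
    band (in %), then jumps straight to the demanded value."""
    actual, out = 0.0, []
    for demand in demanded_positions:
        if abs(demand - actual) > stiction:
            actual = demand    # slip: jump to the new position
        out.append(actual)     # stick: otherwise hold position
    return out

# Small 0.5% moves are swallowed entirely; a 3.5% move finally gets through.
print(sticky_valve([0.5, 1.0, 1.5, 3.5]))  # [0.0, 0.0, 0.0, 3.5]
```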
STEP 4: Fix poorly performing hardware
If the control loop has defective hardware, the problem should be placed on a maintenance list and fixed as soon as possible. In some cases, the maintenance cannot be done while the process is running and will have to wait for the next shutdown.
Note, stroking a control valve in 25% increments (a basic performance check done by many instrumentation technicians) does not reflect how the valve will perform in normal operation when the changes required are often in fractions of one percent. This concept can sometimes be overlooked by a maintenance technician.
Although some controller tuning “tricks” are believed to compensate for control valve hysteresis and stiction, the loop performance will ultimately not be as good as with a properly functioning control valve. In many cases, tuning a controller with bad hardware is a futile exercise. Because of the hardware problems, process characteristics can be misidentified, leading to improper and even dangerous controller settings.
STEP 5: Optimize controller performance
If the hardware is reliable, the controller can be tuned to obtain the desired speed of response and/or robustness appropriate for the loop’s role in the process.
Controller tuning is essentially matching the static and dynamic characteristics of the controller (gain, integral, and derivative) to those of the process (gain, dead time, and lag). Hence, before a controller can be tuned, it is necessary to establish the dynamic characteristics of the process. This is done by making small step changes to the controller output (or setpoint) and analyzing the response of
the process to obtain the process gain, dead time, and time constant.
Once the process characteristics are available, the controller tuning constants can be calculated by plugging the former into a set of equations. Software makes the identification of process characteristics and the calculation of the controller settings very simple and error-free, and provides simulations of predicted loop response and robustness. Using a robust software application to optimize controller performance significantly improves accuracy and saves time.
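As a sketch of this "plug into a set of equations" step, the well-known IMC (lambda) tuning rule computes PI settings from the first-order-plus-dead-time parameters. The conservative default for lambda below is one common choice, not a universal recommendation:

```python
def pi_from_fopdt(kp, dead_time, tau, lam=None):
    """IMC (lambda) PI tuning from a first-order-plus-dead-time model:
        Kc = tau / (Kp * (lambda + dead_time)),  Ti = tau
    where lambda is the desired closed-loop time constant; a common
    conservative default is lambda = max(tau, 3 * dead_time)."""
    if lam is None:
        lam = max(tau, 3.0 * dead_time)
    kc = tau / (kp * (lam + dead_time))
    return kc, tau

# Process: gain 2.0, dead time 10 s, time constant 60 s.
kc, ti = pi_from_fopdt(2.0, 10.0, 60.0)
print(round(kc, 3), ti)  # 0.429 60.0
```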
STEP 6: Implement advanced control strategies
Even after the hardware is fixed and the control loops are properly tuned, the performance of some control loops may still not meet desired specifications. Feedback control loops have inherent performance limits that cannot be exceeded, regardless of how well they are tuned. Processes could be interactive, nonlinear, and inherently slow, which also limits what a single controller can do. In cases like these, the control performance can likely be improved by implementing a more complex control strategy. These advanced control strategies include cascade, feed-forward and ratio control, gain scheduling, decoupling, and ultimately model predictive control.
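Feed-forward is the simplest of these strategies to sketch: measure the disturbance and correct for it before it reaches the PV, leaving the feedback PID only the residual error. The feed-forward gain below is a hypothetical value; in practice it would be sized from the process and disturbance gains:

```python
def control_output(pid_feedback, measured_disturbance, kff=-0.8):
    """Add a feed-forward correction to the feedback controller's output.
    kff is sized so the correction cancels the disturbance's effect on
    the process variable before feedback has to react."""
    return pid_feedback + kff * measured_disturbance

# With the PID asking for 50% and a measured disturbance of 10 units,
# the feed-forward term pre-emptively pulls the output down to 42%.
print(control_output(50.0, 10.0))  # 42.0
```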
STEP 7: Maintain performance over the long term
To sustain the benefits after optimization, it is essential to continuously monitor and report loop performance, so control problems can be identified and rectified before they adversely affect profitability.
This way, the benefits associated with optimal loop performance will not diminish over time.
A best practice with many industry leaders is to compile control loop performance metrics and provide weekly/monthly reports to various stakeholders such as automation and operations management team members.
The same software used for initially analyzing control loop performance in Step 2 can normally be used for this function as well. Control loop performance should be analyzed periodically, at intervals ranging from a few days to a week. Continuous assessment is rarely of any use and is considered bad practice, because the data collection places additional load on the OPC server and the data storage wastes hard disk space.
Also, as an iterative approach to loop performance improvement, less important control loops can be optimized in a second round of Steps 2 to 6.
As the first line of defense against process disturbances, PID control loops play a significant role in plant safety and reliability. There are large benefits associated with improving control loop performance. Optimizing control loops can be done effectively by implementing a seven-step approach. First, a process control philosophy is developed. Loops are classified according to their importance, and performance criteria are assigned. Data is collected for each loop, the performance is analyzed, and the loops are ranked according to importance and performance. The problems are diagnosed as hardware or tuning problems and addressed accordingly. If performance is still not adequate, advanced control strategies can be implemented. Once the loops have been optimized, periodic monitoring and reporting is used to ensure any further performance problems are identified and addressed early.
Process Simulation Software for system testing and operator training is a valuable tool for process automation projects. Users in validated industries are interested in how Process Simulation Software can be used for the validation of automated systems.
When used correctly, Process Simulation Software can help companies with validated systems by providing:
• Environment for more comprehensive testing of the process control system, resulting in better automation system performance.
• Environment for accelerating the System Acceptance Testing phase of a project, resulting in shorter project cycles and quicker time to market.
• Effective method for mitigation of risk by more comprehensive system testing.
Topics covered here:
• The use of Process Simulation Software.
• Impact of system testing and qualification using Process Simulation Software.
• Impact on project risk mitigation using Process Simulation Software.
• GAMP Requirements for suppliers of Process Simulation Software.
The Use of Process Simulation Software
The use of process simulation is acceptable for system testing. The factory testing may be carried out without connection to field instrumentation and may include an agreed level of process simulation.
Process simulation is therefore an acceptable tool for testing validated systems and is an effective tool for ensuring completeness of testing.
There are several requirements for system testing that lead to the use of process simulation for validated systems testing:
• The software is frozen prior to testing. It is important that final software modifications are completed prior to testing and that the tested software is not modified or changed after testing. This applies in particular to the Software Integration Testing and System Acceptance Testing phases of the project.
• Application software which is no longer needed should be removed prior to system testing, in order to avoid “dead” code. “Dead” code is defined as application software that is left over from development or code changes. The only instance in which unused code can be kept in the application software is when it is used for future testing purposes or for later diagnosis during support work, in which case it should be labeled, commented out, and documented.
Application software used specifically for simulation has no purpose once testing is complete; yet removing it after testing violates the first requirement of “frozen” code, so it must be removed before Software Integration Testing and System Acceptance Testing begin.
These two requirements dictate the use of a non-intrusive testing interface, especially for Software Integration Testing and System Acceptance Testing. A non-intrusive testing interface addresses the above requirements in the following ways:
• It allows the application software to run in a normal mode without any modification during testing. This same tested application software can then run in a normal mode in the process controller without modification.
• It provides an external interface to the application software that does not require any additional code for testing. When testing is completed, the non-intrusive interface can be shut down and no removal of “dead” code is required.
Impact of System Testing and Qualification Using Process Simulation Software
Testing is a critical aspect of implementing a process control system.
The cost of testing and validating a process control system can equal or exceed the cost of developing the application software.
The use of Process Simulation Software can assist the validated user and supplier during the Software Integration Testing and System Acceptance Testing phases of Installation Qualification (IQ) and Operational Qualification (OQ).
The correct use of Process Simulation Software can have a positive impact on system testing by providing an environment for:
• More complete and accurate testing of the process control system. Medium-fidelity process models (mass balance, heat balance) have been used to provide realistic testing of validated control systems, resulting in installed automation systems that make better-quality product with higher yields sooner after System Acceptance Testing. As a result, the user of a validated system meets production and project goals more quickly.
• Compression of project schedule and reduced time to market through reduction of on-site System Acceptance Testing. During the Software Integration Testing phase, testing using Process Simulation Software is allowed to be included as “…part of subsequent IQ/OQ evidence if adequately controlled and documented.”
Impact on Project Risk Mitigation using Process Simulation Software
One of the benefits of following the GAMP guidelines is a structured way of assessing and mitigating risk for the validated system.
The use of Process Simulation Software is an effective tool for addressing identified areas of risk by providing a realistic environment for:
• Training operations staff on critical or risky process operations without affecting process integrity.
• Testing failure modes of batch processes or critical batch phases without affecting process integrity.
The investment in Process Simulation Software for process control system testing and operator training can usually be justified by the benefits of mitigating risk alone due to the high value of product and cost of downtime in the validated industries. Typical justifications may be:
• Reduction of off-spec batches.
• Reduction of unscheduled downtime.
• Reduction of FDA violations, OSHA violations, and EPA violations.
GAMP Requirements for Suppliers of Process Simulation Software
The GAMP guidelines distinguish between the process control system validation requirements and the validation requirements of tools such as Process Simulation Software. “Tools supporting the system development and management process, rather than the business processes themselves, are not GxP applications, and do not require formal validation…” However, the guidelines state that the tools and the supplier of the tools should be chosen carefully.
In general, the requirements of Process Simulation Software for the validated industries are:
• The supplier should have a documented software development and quality program that complies with industry best-practice quality systems.
• The Process Simulation Software should be applicable for process control system testing and operator training and not designed primarily for other uses (such as process design).
• The Process Simulation Software should be a “…commercially available, standard tool…” that is not “… highly-customized for use…”.
• The application should be delivered as object code or executable programs, and should not require the user to modify source code under standard use.
Good, suitable Process Simulation Software can help companies with validated processes comply with GAMP guidelines and improve their process performance when it provides:
• Environment for more comprehensive testing of the process control system, resulting in better automation system performance.
• Environment for project schedule reduction and quicker time to market through reduction of on-site System Acceptance Testing.
• Effective method for mitigation of risk by more comprehensive system testing.
• Non-intrusive simulation interfaces, allowing testing to be completed on “frozen” application software and avoiding “dead code” issues.
• Easy-to-use, realistic process models that can be quickly configured without the need for source code development.
• A Process Simulation Software tool designed specifically for process control system testing and operator training.
• Proven use and results at validated sites and processes.
Although regulatory guidelines concerning the use and validation of automated systems existed, they had been subjected to less scrutiny than is the case today. In addition, as automated systems became more complex and more widely used, the need to improve the overall understanding and interpretation of the regulations also increased.
The GAMP guidelines promote better understanding and interpretation of the regulations and improve communication within the automation industry.
The validation lifecycle consists of six phases:
During the Planning and Definition phase, SMB Validation & Compliance Services Group Inc. is available to provide assistance to the equipment user in order to develop a validation plan that summarizes the project, identifies the criteria for success, and defines the criteria for project acceptance.
Once the project is completed, a Validation Report can be produced to summarize the project, measure success, and clearly signify acceptance.
User Requirement Specifications
Prior to accepting vendor bids on a project, a list of requirements for the planned system is created. The list incorporates the needs of the end users of the system as well as any regulatory requirements, division and local requirements. This document details what the customer wants the system to do. It does not provide design detail unless that detail represents a design requirement. Validation testing will be based partially on this document.
For many projects, a supplier assessment or vendor evaluation (audit) utilizing a pre-approved supplier assessment procedure and criteria will be performed.
Risk Assessment and Critical Parameters
Develop and manage the project Risk Assessment processes and help define the critical parameters of the project.
Operation Qualification Protocols
Develop an Operation Qualification Protocol, which provides a detailed written test protocol to verify the system operation against the various approved design specifications. All control functions are verified. E-stops and interlocks are checked for functionality. All machine functions are tested and all devices challenged to ensure that design criteria have been achieved.
Performance Qualification Protocols
In-process or performance qualification protocols are developed and performed. SMB can develop these protocols and offer assistance during this process.
PLANNING & DEFINITION PHASE 1
Phase 2. Supplier Quality Plan
Developing a Supplier Quality Plan is a key aspect of any project. Prepare and monitor a project quality plan.
Functional Design Specification:
The Functional Design Specification provides a written definition of what the system does, what functions it has and what facilities are provided.
Acceptance Test Specification:
The acceptance criteria for this test are based on requirements originally agreed upon and documented in the Functional Requirements Specification and detailed in an approved written test protocol. Depending upon the complexity of the project, this activity may be divided into Factory Acceptance Testing and Site Acceptance Testing.
Hardware Design Specification:
The Hardware Design Specification provides a detailed written definition of the system hardware and how the hardware interfaces with other systems.
Hardware Test Specification:
The Hardware Test Specification provides a detailed written test protocol to verify the system hardware against the design specification.
Software Module Specifications:
The Software Module Specification provides a written definition of what each software module should do, and the interfaces between the modules.
Software Module Test Specifications:
The Software Module Test Specification provides a detailed written test protocol to verify the individual software modules against the design specification.
Note: Depending upon the complexity of the project, the functional design specification, hardware design specification and software design specifications may be consolidated into a single detailed design specification.
Installation Qualification Protocols:
Develop an Installation Qualification Protocol, which provides a detailed written test protocol to verify the system installation against the various approved design specifications.
Performance Qualification Protocols:
In-process or performance qualification protocols are developed and performed. SMB can develop these protocols and offer assistance during this process.
DESIGN & DEVELOPMENT PHASE 2
Phase 3. Hardware Acceptance Testing
Hardware Acceptance Testing provides a detailed written test protocol to verify the system operation against the various approved design specifications. Correct operation of the hardware and instrumentation as defined in the hardware design specification is verified.
Software Module Testing
Software Module Testing provides a detailed written test protocol to verify the system operation against the various approved design specifications. Correct operation of the software as defined in the software design specification is verified.
Integration tests can be developed and performed prior to execution of a factory acceptance test. Integration tests verify that the requirements for the system are met. The acceptance criteria for these tests are based on requirements originally agreed upon and documented in the various design specifications.
Note: Each GAMP test stage is duly considered during validation and test planning and may, depending upon the customer's Quality Assurance function, meet the requirements of the IQ, OQ, and PQ as summarized above.
Note: Depending upon the complexity of the project these test phases may be consolidated into a single Site Acceptance test.
DEVELOPMENT TESTING & SYSTEM BUILD PHASE 3
Phase 4. Factory Acceptance Testing
Once the system is constructed and just prior to shipment, we will perform a factory acceptance test in which we verify that the requirements for the system are met. The acceptance criteria for this test are based on requirements originally agreed upon and documented in the various design specifications and detailed in a previously approved written test protocol.
DESIGN REVIEW & ACCEPTANCE PHASE 4
Phase 5. Installation Qualification Implementation
All data pertaining to the identification of the equipment (model number, serial number, power rating, control system I.D. numbers, etc.) is recorded to ensure full traceability. PLC addresses are verified for correct wiring. Verification of compliance against the Functional and Design Specifications is carried out. Additional information, such as a Standard Operating Procedures listing, Purchase Order verification, and a Preventive Maintenance review, is also typically included.
Operation Qualification Implementation
All control functions are verified. E-stops and interlocks are checked for functionality. All machine functions are tested and all devices challenged to ensure that design criteria have been achieved.
Performance Qualification Implementation
Bracketing is used to qualify ranges of products. Performance Qualification is also used to validate a complete integrated process (e.g. a packaging line consisting of several pieces of equipment).
Once the system has been successfully qualified, a Validation Report will be prepared to confirm that the system is ready for its intended use.
COMMISSIONING & QUALIFICATION PHASE 5
Phase 6. Change Control
Change Control is fundamental to maintaining the validated status of a process and/or products and as such will be defined in the Quality and Project plans. The point of transfer from Project Change Control to Operational Change Control is typically defined in the Validation Plan. Develop, detail and maintain Change Control throughout the system lifecycle.
Periodic reviews and evaluations of automated systems should be conducted jointly by the System Owner and the user's Quality Assurance group. Although the final responsibility for ensuring the compliance and correct operation of a system lies with the System Owner, SMB can provide compliance input and an independent perspective on the process.
The User Company Management is ultimately responsible to ensure that automated systems and the data produced or contained within are adequately and securely protected against both willful and accidental loss, damage or unauthorized change.
Back Up & Disaster Recovery
Develop and detail a Back Up and Disaster Recovery Plan.
A traceability matrix establishes the relationships between the various products of the development process, detailing design relationships and the verification of each. It is a live document used throughout the development process, beginning with the User Requirement Specification and maintained through to the approval of the final test specifications.
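The structure of a traceability matrix is straightforward to represent in software. A minimal sketch, where the requirement and test IDs are hypothetical, that also reports requirements no test covers:

```python
def traceability(requirements, tests):
    """requirements: dict of requirement ID -> description.
    tests: dict of test ID -> list of requirement IDs that test verifies.
    Returns (list of (requirement, test) links, sorted IDs of
    requirements not covered by any test)."""
    links, covered = [], set()
    for test_id, req_ids in tests.items():
        for rid in req_ids:
            links.append((rid, test_id))
            covered.add(rid)
    return links, sorted(set(requirements) - covered)

reqs = {"URS-1": "Batch report printed", "URS-2": "E-stop halts line",
        "URS-3": "Audit trail retained"}
tests = {"OQ-10": ["URS-1", "URS-2"]}
links, uncovered = traceability(reqs, tests)
print(uncovered)  # ['URS-3']
```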
Traditionally, DCSs were large, expensive and very complex systems, considered the control solution for the continuous or batch process industries. For large systems this is, in principle, still true today, with engineers usually opting for PLCs and HMIs or SCADA for smaller applications in order to keep costs down.
So what has changed? Integrating independent PLCs with the required operator interface and supervisory functionality takes a lot of time and effort. The focus ends up on making the disparate technologies work together, rather than on improving operations, reducing costs, or improving the quality or profitability of the plant.
Yet a PLC/SCADA system may include some or all of the following independent, manually coordinated databases:
* Each controller and its associated I/O
* Alarm management
* Batch/recipe and PLI
* Redundancy at all levels
* Asset optimization
* Field-bus device management
Each of these databases must be manually synchronized for the whole system to function correctly. That is manageable immediately after initial system development, but it becomes an unnecessary complication when changes are implemented during on-going system tuning, and again when further changes are made as a result of continuous improvement programs.
In a PLC implementation there is no automatic connection between the PLC and the SCADA/HMI. This becomes a problem during start-up of a new application, where alarm limits are constantly tweaked in the controller to work out the process, while the alarm management and HMI applications must be kept up to date with the changes and remain useful to the operator.
Today’s DCSs, sometimes also called ‘process control systems,’ are developed to let a plant implement the entire system quickly by integrating all of these databases into one. This single database is designed, configured and operated from the same application.
1: System design
PLC/SCADA control engineers must map out system integration between HMI, alarming, controller communications and multiple controllers for every new project. Control addresses (tags) must be manually mapped in engineering documents to the rest of the system. This manual process is time-consuming and error-prone. Engineers also have to learn multiple software tools, which can often take weeks.
DCS approach: As control logic is designed, alarming, HMI and system communications are automatically configured. One software configuration tool is used to set up one database used by all system components. As the control engineer designs the control logic, the rest of the system falls into place. The simplicity of this approach allows engineers to understand this environment in a matter of a few days. Potential savings of 15 ‐ 25% depending on how much HMI and alarming is being designed into the system.
2: Application development
PLC/SCADA control logic, alarming, system communications and HMI are programmed independently. Control engineers are responsible for integrating and linking multiple databases to create the system. Items that must be manually duplicated in every element of the system include scaling data, alarm levels and tag locations (addresses). Only basic control is available; extensions in functionality must be created on a per-application basis (e.g. feed-forward, tracking, self-tuning, alarming). This approach leads to non-standard applications, which are tedious to operate and maintain. Redundancy is rarely used with PLCs, partly because of the difficulty of setting up and managing meaningful redundancy for the application.
The DCS way: When control logic is developed, HMI faceplates, alarms and system communications are automatically configured. Faceplates automatically appear using the same alarm levels and scaling set up in the control logic. These critical data elements are set up only once in the system. This is analogous to having the calendars on your desktop and phone sync automatically, versus retyping every appointment on both devices: people who try to keep two calendars in sync manually find it takes twice the time, and the calendars are rarely ever in sync. Redundancy is set up in software quickly and easily, almost at the click of a button. Potential savings of 15 ‐ 45%.
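A minimal sketch of the single-database idea, in Python with invented field names rather than any vendor's actual API: the alarm level and span are entered once in the control-logic record, and the faceplate is derived from it instead of being retyped.

```python
from dataclasses import dataclass

# Single-source-of-truth sketch: one configuration record feeds both the
# control logic and the HMI. Field names and structure are illustrative.
@dataclass
class LoopConfig:
    tag: str
    span: float
    high_alarm: float

def faceplate(cfg: LoopConfig) -> dict:
    """Derive HMI faceplate settings from the one configuration record."""
    return {"title": cfg.tag, "scale_max": cfg.span, "alarm_line": cfg.high_alarm}

loop = LoopConfig(tag="TIC-101", span=150.0, high_alarm=85.0)
print(faceplate(loop))  # -> {'title': 'TIC-101', 'scale_max': 150.0, 'alarm_line': 85.0}
```

Because the faceplate is computed, not copied, a change to `high_alarm` in the record can never leave the HMI showing a stale value.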
3: Commissioning and start-up
Testing a PLC/HMI system is normally conducted on the job site, after all of the wiring is completed and the production manager is asking “why is the system not running yet?” Off-line simulation is possible, but it takes an extensive programming effort to write code that simulates the application being controlled. Owing to the high cost and complex programming, this is rarely done.
DCS benefits: Process control systems come with the ability to automatically simulate the process based on the logic, HMI and alarms that are going to be used by the operator at the plant.
This saves significant time on‐site since the programming has already been tested before the wiring is begun. Potential savings are 10 ‐ 20% depending on the complexity of the start up and commissioning.
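The idea of exercising control logic against a simulated process before any wiring exists can be sketched with a toy first-order process model and a proportional-only controller. All gains and values here are illustrative placeholders, not output from any vendor simulation tool:

```python
# Toy offline simulation: a first-order process driven by the control output,
# used to exercise control logic before field wiring. Values are illustrative.
def simulate(setpoint=50.0, gain=2.0, tau=10.0, dt=1.0, steps=200, kp=1.5):
    pv, out = 0.0, 0.0
    for _ in range(steps):
        out = max(0.0, min(100.0, kp * (setpoint - pv)))  # P-only controller, 0-100% clamp
        pv += dt / tau * (gain * out - pv)                # first-order process response
    return pv

print(round(simulate(), 1))  # -> 37.5 (the proportional-only offset below the 50.0 setpoint)
```

Even a model this crude catches logic errors (reversed action, bad clamping, steady-state offset) days before commissioning, which is the source of the on-site time savings claimed above.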
4: Troubleshooting
PLC/SCADA offers powerful troubleshooting tools, but only if the controls engineer programs them into the system. For example, when an input or output is connected to the system, the control logic must be programmed to make use of the new point. But when this is updated, did the data get linked to the disparate HMI? Have alarms been set up to alert operators of problems? Are these points being communicated to the other controllers? Programming logic is rarely exposed to the operator, since it lives in a different software tool and is not intuitive for an operator to understand.
The DCS way: All information is automatically available to the operator based on the logic being executed in the controllers. This greatly reduces the time it takes to identify issues and get your facility up and running again. The operator can also view the graphical function blocks as they run, to see what is and is not working (read-only). Root Cause Analysis is standard. Field device diagnostics (HART and field-bus) are available from the operator console. Potential savings of 10 ‐ 40% (this varies greatly based on the time spent developing HMI and alarming, and keeping the system up to date).
5: The ability to change to meet process requirements
PLC/SCADA: Changing the control logic to meet new application requirements is relatively easy. The challenge comes with the additional requirement to integrate the new functionality into the operator stations. Documentation should also be developed for every change, which does not happen as frequently as it should. If you change an input point to a new address or tag, that change must be manually propagated throughout the system.
The DCS way: Adding or changing logic in the system is also easy; in many cases it is even easier, thanks to built-in and custom libraries of code. When changes are made, the data entered into the control logic is automatically propagated to all aspects of the system. This means far fewer errors, and the whole system is updated with a single change in the control logic.
Potential savings of 20 ‐ 25% on changes are not uncommon. This directly benefits continuous improvement programs.
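The single-change propagation can be sketched as deriving every view from one tag database, so one edit updates them all. The database structure and names below are illustrative:

```python
# One tag database; the HMI tag map and the alarm summary are derived from it,
# never retyped. Tag names and fields are illustrative.
system = {"TIC-101": {"address": "AI-4", "high_alarm": 85.0}}

def derive_views(db):
    """Generate both the HMI address map and the alarm summary from one database."""
    hmi_map = {tag: rec["address"] for tag, rec in db.items()}
    alarm_summary = {tag: rec["high_alarm"] for tag, rec in db.items()}
    return hmi_map, alarm_summary

system["TIC-101"]["address"] = "AI-7"   # one edit, in one place
hmi_map, alarm_summary = derive_views(system)
print(hmi_map)  # -> {'TIC-101': 'AI-7'}
```

In the PLC/SCADA case the equivalent of `derive_views` does not exist: the engineer edits each view by hand, and any view that is missed silently drifts out of date.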
6: Operator training
With PLC/SCADA, operator training is the responsibility of the application developer. There is no operator training from the vendor, since every faceplate, HMI screen or alarm management function can be set up differently from the next. Even within a single application, operators may see different graphics for different areas of the application they are monitoring.
The DCS way: Training for operators is available from the process control vendor, owing to the standardized way that information is presented to operators. This can significantly reduce operator training costs and improve training quality, thanks to the common, expected operator interface on any application, no matter who implements the system. Savings of 10 ‐ 15 percent in training costs are common, and they are magnified by the consistency found across operators and operator stations.
7: System documentation
PLC/SCADA documentation is produced separately for each part of the overall system. As each element is changed, documentation must be updated to keep each document current. Again, this rarely happens, causing many issues with future changes and troubleshooting.
The DCS way: As the control logic is changed, documentation for all aspects of the system is automatically created. This can save 30 ‐ 50 percent depending on the nature of the system being put in place, and it directly reduces the time needed to recover from downtime.
Time-saving estimates are based on typical costs associated with a system using ~500 I/O, two controllers, one workstation and 25 PID loops.
If you are using, or planning to use, PLCs and HMI/SCADA to control your process or batch applications, your application could be a candidate for a DCS solution to help reduce costs and gain better control. The developer can concentrate on adding functionality that provides more benefit, shortening the return-on-investment payback period and enhancing the system’s contribution for years to come. The divide between the DCS and PLC/SCADA approaches is wide, even though some commonality at the hardware level can be observed; the single database is at the heart of the DCS benefit, and it is a feature that holds its value throughout the system’s life.