Data holds the key to refining processes
Nathan Sheaff of Sciemetric Instruments highlights five things you can easily do with your process data right now.
The term “smart factory” means different things to different people, but one of its goals is to leverage process data to drive the repeatability and reliability of every machine on a production line. This puts the emphasis on the quality and consistency of your first-time yield. By striving for perfection with every part and assembly at every stage of production, you can spot in near real time whether processes, machines or test stations are drifting out of spec, or even whether operators are doing their jobs properly.
The most telling form of data is the digital process signature, or waveform, generated by each cycle of a process or test. It isn’t the only form of data, of course. Many plants rely on scalar data to monitor the health of their production lines, and machine vision systems are also used for quality inspection. Taken together and indexed by part or assembly serial number, these datasets provide the means to identify a problem and quickly trace and address its root cause. But they must first be collected into a single database where they can be searched, correlated, analysed and visualised.
So let’s look at five ways you can use your data now with the right databasing and analytics tools.
Take the guesswork out of limit setting: Many manufacturers still rely on archaic trial-and-error methods, wading through piles of spreadsheets to do manual calculations. We visited one component company where it was taking weeks to find the correct test limits for an automotive sensor – even a simple calibration took days. By adding signature analysis capability to the test system already installed on the line, scalar data and the associated signatures could be collected and analysed together. The data analytics software did the work, automatically calculating statistically-based limits and the correct processing algorithms within just 30 minutes.
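As a rough illustration of what such software automates, statistically-based limits are often derived from the spread of measurements on known-good parts. The sketch below assumes simple mean ± 3-sigma limits on one scalar metric; the function name and readings are hypothetical, not the actual Sciemetric algorithm.

```python
# Hypothetical sketch: derive statistical test limits from a sample of
# passing parts, assuming limits at the mean +/- 3 standard deviations.
import statistics

def statistical_limits(measurements, sigmas=3.0):
    """Return (lower, upper) limits from a sample of good-part readings."""
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)
    return mean - sigmas * sd, mean + sigmas * sd

# Example: peak pressure readings (bar) from a week of good parts.
readings = [4.98, 5.02, 5.00, 4.97, 5.05, 5.01, 4.99, 5.03]
low, high = statistical_limits(readings)
```

In practice the sample would be drawn from the plant database and screened for known-bad parts first; the point is that the limits come from the process itself rather than a designer's initial estimate.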
At another manufacturer, a fuel rail leak was detected at a vehicle plant. This slowed production and caused the quarantine of thousands of vehicles. Analysing the test data revealed that all failures were marginal passes. This meant they had just barely passed the quality tests. In this case, the test limits being used on the test stand were those originally supplied by the part designer and had not been monitored after production startup.
The quality manager used one week of manufacturing test data to determine the impact of applying more scientific statistically-based limits. It was determined that tightening the test limits would have caught the faulty fuel rails and yet would have had a very minor impact on throughput. Two months’ worth of part data was re-analysed by applying the new limits to identify other suspect parts – and three additional “failures” were found.
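The re-analysis step can be pictured as a simple re-screen of archived scalar results against the tightened limits. The serial numbers, values and limits below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical sketch: re-screen archived test results against tightened
# limits to flag "marginal passes" that would now fail.
def rescreen(results, new_low, new_high):
    """results: list of (serial_number, value) pairs that passed originally.
    Returns the serial numbers that fall outside the new limits."""
    return [sn for sn, value in results if not (new_low <= value <= new_high)]

archive = [("FR-1001", 2.1), ("FR-1002", 2.9), ("FR-1003", 2.4)]
suspects = rescreen(archive, new_low=2.0, new_high=2.6)
```

Because every result is indexed by serial number, the flagged parts can be traced to specific vehicles rather than quarantining the whole population.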
Optimise test cycle times: It’s also easy to review such historical data and run simulations to see where and how a test cycle can be shortened without compromising quality assurance. This boosts productivity and can reduce the number of test stations. In one example, a manufacturer was only looking for peak breakaway torque as part of a torque-to-turn test. Analysis showed that aborting the test cycle sooner would have no impact on quality. By adding a new test and terminating the cycle based on its result, the equivalent of seven test cycles could be saved per eight-hour shift. For this plant, that amounted to a production increase of 132 parts per month.
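An early-termination rule of this kind might, for example, end the cycle once breakaway torque has clearly peaked. The sketch below is a hypothetical illustration, assuming the peak is confirmed once torque has fallen for a few consecutive samples:

```python
# Hypothetical sketch: abort a torque-to-turn test once peak breakaway
# torque has clearly been captured, rather than running the full cycle.
def samples_until_peak(torque_samples, settle_count=3):
    """Return how many samples are needed before the peak is confirmed,
    i.e. once torque has fallen for settle_count consecutive samples."""
    peak = float("-inf")
    falling = 0
    for i, t in enumerate(torque_samples):
        if t > peak:
            peak, falling = t, 0
        else:
            falling += 1
            if falling >= settle_count:
                return i + 1  # cycle could terminate here
    return len(torque_samples)

trace = [0.5, 1.2, 2.8, 3.1, 2.9, 2.7, 2.6, 2.6, 2.5, 2.5]
used = samples_until_peak(trace)
```

Replaying a rule like this against historical signatures is what lets the simulation confirm that no failing part would have been passed by the shorter cycle.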
Trace the root cause of defects: A quality engineer in an automotive powertrain plant uncovered a higher than normal failure rate of electronic throttle bodies at a cold test station. This failure rate of 1.27% affected 180 engines per month. Using manufacturing analytics software to build a picture of the digital process signatures from the test data on all the throttles, the engineer could see which parts passed, which parts failed, and which nominally passing signatures had anomalous shapes. These anomalies fell within the overall pass/fail limits set for the process, but they clearly represented the parts that caused issues later at the engine cold test station.
The engineer then conducted further analysis to identify the source of these problems: 77% were due to stuck or sluggish throttles that the current test stand algorithm couldn’t catch and 23% were due to upstream process failures that had not been identified until this point.
This intelligence was then applied to the production floor to prevent the recurrence of these issues: the force was increased in the operation involving the return spring on the throttle valve, while algorithms and limits on upstream test stations were adjusted to reduce the number of false rejects. The failure rate was reduced to 0.07%. Monthly yield increased by 170 engines for an additional $1 million in revenue.
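Flagging parts whose signatures pass the scalar limits but deviate in shape can be sketched as a comparison against a “golden” reference signature. The RMS-deviation measure, threshold and part numbers below are hypothetical assumptions, not the plant's actual algorithm:

```python
# Hypothetical sketch: flag process signatures whose shape deviates from
# a "golden" reference even though their scalar results passed.
def rms_deviation(signature, reference):
    """Root-mean-square difference between two equal-length signatures."""
    return (sum((s - r) ** 2 for s, r in zip(signature, reference))
            / len(reference)) ** 0.5

golden = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
parts = {
    "TB-01": [0.0, 1.0, 2.0, 2.0, 1.0, 0.0],  # normal shape
    "TB-02": [0.0, 0.4, 1.1, 1.2, 0.5, 0.0],  # sluggish throttle
}
anomalies = [sn for sn, sig in parts.items()
             if rms_deviation(sig, golden) > 0.3]
```

The value of working at the signature level is exactly this: two parts with identical scalar results can still have visibly different waveforms, and only the waveform reveals the sluggish one.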
Predict maintenance requirements: The whole concept of process monitoring can be applied to asset monitoring as well. Take a pressing operation. In addition to monitoring the process through each cycle, the asset doing the pressing can also be monitored by installing sensors to collect digital signatures for such metrics as vibration, the electrical current draw on its motor and hydraulic pump pressure versus pump rotation.
Small changes in these signatures, or in the typical signature of the process cycle, can be an early warning sign that equipment is wearing or drifting out of alignment. The need for equipment maintenance can be recognised and the work completed before quality suffers. Preventative maintenance at a scheduled time is far less time consuming and disruptive than coping with sudden, unanticipated failures. It takes an investment of only a few thousand dollars to add the necessary instrumentation and the connection to the plant’s digital backbone – a cost that is very low when amortised over the life of a machine.
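One simple way to picture such drift detection: track a per-cycle asset metric, such as peak motor current, and warn when its rolling average moves away from the commissioning baseline. The window size, threshold and readings below are illustrative assumptions only:

```python
# Hypothetical sketch: watch a per-cycle asset metric (e.g. peak motor
# current) and raise an early warning when its rolling average drifts
# from the commissioning baseline by more than a set fraction.
from collections import deque

def drift_monitor(baseline, readings, window=5, threshold=0.10):
    """Yield cycle indices where the rolling mean drifts > threshold."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(mean - baseline) / baseline > threshold:
                yield i

currents = [10.0, 10.1, 10.0, 10.2, 10.1,  # healthy cycles
            10.9, 11.2, 11.4, 11.6, 11.8]  # wear sets in
warnings = list(drift_monitor(10.0, currents))
```

The rolling average smooths out cycle-to-cycle noise, so the warning fires on a sustained trend rather than a single outlier – which is what makes it useful for scheduling maintenance ahead of a failure.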
Launch machines and lines faster: The same insight used for asset monitoring can also be used to troubleshoot and dial in new equipment and new lines faster. This is particularly valuable for large OEMs that may be launching lines with 50, 100 or even 500 machines strung together, where one weak link will hold up the entire line. By collecting and analysing process data and the performance of each machine, bottlenecks can be identified immediately, root causes diagnosed and eliminated systematically, and new control limits verified and easily adjusted. Digital process signatures from the new line can be matched against existing ones to give a strong indication of conformance. This enables innovations in one plant to be reliably applied to other plants, multiplying the yield gains many times over.
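Bottleneck identification from per-machine data can be as simple as comparing each machine's average cycle time against the line's target takt time. The machine names and timings below are hypothetical:

```python
# Hypothetical sketch: spot the weak links on a new line by comparing
# each machine's average cycle time against the line's takt time.
def find_bottlenecks(cycle_times, takt):
    """cycle_times: machine name -> list of observed cycle times (s).
    Returns over-takt machines, slowest first."""
    def avg(name):
        times = cycle_times[name]
        return sum(times) / len(times)
    return sorted((name for name in cycle_times if avg(name) > takt),
                  key=lambda name: -avg(name))

line = {
    "press-01":  [41.0, 42.5, 41.8],
    "leak-test": [58.2, 60.1, 59.4],  # over takt: holds up the line
    "vision-02": [39.7, 40.3, 40.0],
}
slowest_first = find_bottlenecks(line, takt=45.0)
```

With every machine already streaming cycle data to the digital backbone, a ranking like this is available from day one of commissioning instead of emerging through weeks of floor observation.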
We worked with one client that could launch new lines around the world an average of four times faster, for estimated average savings of US$4 million per plant, using data in this way.

Industry now has the means to easily and economically measure any pertinent metric with new levels of accuracy and speed, digitise this data, flow it across a digital backbone and organise it for easy retrieval, analysis and visualisation in a centralised database.
This allows quality engineers to find trends and patterns that reveal the “how” and “why” of decreases in yield and quality, then test and apply refinements to test limits or other upstream quality control benchmarks. This gives them the insight to improve quality, reduce scrap, rework and warranty claims, and bring new lines and new plants online much faster than ever before.