How much is too much when it comes to piloting multiple condition monitoring solutions?
In your opinion, what are your rules of thumb for finding and evaluating multiple condition monitoring solutions while balancing the need to maintain normal operations against the impact on your team and budget?
Comments
-
Here are some thoughts on this topic…
When determining "how much is too much" for piloting multiple condition monitoring solutions, you'll need to consider several factors, including your company's resources, time, and the complexity of the platforms.
Here are some factors to keep in mind:
Time: Software pilots typically take time to set up, run, and evaluate. This includes the time taken to learn the software, train staff, integrate it into existing systems, and collect and analyze the results. If you're running too many pilots simultaneously, it could become difficult to give each one the attention it deserves.
Resources: Running a software pilot often requires the allocation of human and financial resources. If your team is overwhelmed by the number of pilots, or if the cost of running these trials is exceeding your budget, you may be trying out too many at once.
Quality of Evaluation: The main goal of a pilot is to determine whether a particular software will meet your needs and bring value to your organization. If you're running so many pilots that you're unable to effectively analyze the data and make sound decisions, it's likely you're running too many.
Operational Impact: Implementing too many new software solutions at once can be disruptive to your regular operations. This could confuse your staff or disrupt workflow if they're constantly having to learn new systems.
Neglected Regular Work: If the rest of your work is being neglected because you're focusing too much on pilots, you may be doing too many.
In your opinion, what are some best practices when it comes to software evaluations?
2 -
I think it depends upon the maturity and culture of the operation. Pilots require a conscious commitment of engaged and dedicated resources in order to properly manage. For this reason, they cannot be treated as a side hustle or hobby by those already wearing many functional hats. This is why I'm primarily an OPAT practitioner (One Pilot At a Time). This doesn't mean you can't run multiples, but I wouldn't burden a single operation with more than one; instead, spread these across the manufacturing enterprise while assuring resources and pilot objectives are not compromised.
Also, I would refrain from selecting the best/top and worst/bottom operational performers when conducting a pilot if one of the key objectives is understanding the representative and requisite elements necessary to scale. Rarely will either of these performance levels uncover the areas that must be addressed (people, processes) and the potential pitfalls that will impact adoption.
2 -
This is fantastic insight @Scott Reed !! Question - when conducting OPAT - how do you optimize to pick the best solution? Do you do the first pilot, test if it meets the biz requirements and then move to the next pilot if it does not?
0 -
Keep the list of competitive digital solutions small - preferably down to two, no more than three. Develop a selection criteria related to the digital application and how it's intended to benefit the overall operational performance as well as the impact on current roles and responsibilities of the workforce who will engage (both directly and indirectly) with the inputs and outputs of the solution. I also recommend a formal Situation Appraisal (SA) be conducted to help establish the selection criteria to ultimately rank order the options available for pilot. From there a Decision Analysis (DA) can be executed for determining the best/appropriate application to pilot first.
I would also caution against running multiple, back-to-back pilots for the purpose of chasing costs. Time-to-value and the compounding benefits related to time-to-scale should be primary. Competitive cost evaluations can always be managed after the adoption learning curves have been overcome.
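The rank-ordering step described above can be sketched as a simple weighted-scoring matrix. The criteria, weights, and vendor scores below are purely illustrative placeholders, not values from this thread; the point is only to show how a selection criteria set feeds a Decision Analysis that ranks two or three shortlisted solutions.

```python
# Hypothetical weighted-scoring sketch of a Decision Analysis (DA).
# All weights and scores are illustrative assumptions.

CRITERIA = {                       # relative importance (sums to 1.0)
    "operational_benefit": 0.40,   # impact on overall operational performance
    "workforce_impact":    0.35,   # effect on current roles/responsibilities
    "integration_effort":  0.25,   # ease of fitting into existing systems
}

# Scores 1-5 per criterion for each shortlisted solution (two, per the advice above)
vendors = {
    "Solution A": {"operational_benefit": 4, "workforce_impact": 3, "integration_effort": 5},
    "Solution B": {"operational_benefit": 5, "workforce_impact": 4, "integration_effort": 3},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion score times criterion weight."""
    return sum(CRITERIA[c] * s for c, s in scores.items())

ranked = sorted(vendors, key=lambda v: weighted_score(vendors[v]), reverse=True)
for v in ranked:
    print(f"{v}: {weighted_score(vendors[v]):.2f}")
```

A formal Situation Appraisal would supply the real criteria and weights; the arithmetic itself is the easy part.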
2 -
This is such great advice.. thanks @Scott Reed! 🙏
0 -
Based on my experience with condition monitoring, simplicity is key. Ensure that your maintenance plans are aligned with failure modes. Once you grasp these modes thoroughly, you can then choose the optimal tool to detect their activation. When multiple tasks target the same failure mode, there's room for plan optimization. Avoid getting overwhelmed by various initiatives to keep a clear focus.
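The "multiple tasks targeting the same failure mode" situation above can be made visible with a simple coverage map. The task names and failure modes below are hypothetical examples, not from the thread; the sketch just shows how mapping tasks to the failure modes they detect surfaces overlap candidates for plan optimization.

```python
# Illustrative sketch: map maintenance/monitoring tasks to the failure modes
# they detect. Any failure mode covered by more than one task is a candidate
# for plan optimization. All names here are hypothetical.
from collections import defaultdict

tasks = {
    "monthly vibration route":  ["bearing wear", "misalignment"],
    "online vibration sensor":  ["bearing wear", "imbalance"],
    "quarterly oil analysis":   ["bearing wear", "lubricant degradation"],
    "thermography survey":      ["electrical hot spot"],
}

# Invert the mapping: failure mode -> list of tasks that target it
coverage = defaultdict(list)
for task, modes in tasks.items():
    for mode in modes:
        coverage[mode].append(task)

overlaps = {mode: ts for mode, ts in coverage.items() if len(ts) > 1}
for mode, overlapping in overlaps.items():
    print(f"'{mode}' is targeted by {len(overlapping)} tasks: {overlapping}")
```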
1 -
Love this @Jorge Murillo !!
0 -
Thank you @Hari Viswanathan for initiating the discussion around this important topic!
0 -
@Hari Viswanathan We have multiple predictive maintenance initiatives in our facility. It is very easy when you see the benefit of something to try to replicate it across other solutions. Augury is our biggest predictive maintenance initiative but we also have monitoring on compressed air, steam, and other equipment/processes that are not covered by Augury. We even have predictive maintenance solutions on our mobile equipment! The main driver for us was Criticality Analysis of our equipment and processes.
The key is to designate SMEs and process owners before it gets out of hand. There needs to be a set of operational standards for each initiative: who responds, when do they respond, what action is taken, how is it tracked, who interacts with the platform, etc.
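The criticality analysis mentioned above can be sketched as a simple ranking. The equipment names and 1-5 ratings below are hypothetical, and `likelihood x consequence` is just one common scoring convention, not necessarily the one this facility uses; the sketch shows how such a ranking decides where a monitoring initiative pays off first.

```python
# Hypothetical criticality analysis sketch: rank equipment by
# criticality = failure likelihood x consequence (both rated 1-5).
# Names and ratings are illustrative assumptions.
equipment = [
    # (name, failure likelihood 1-5, consequence 1-5)
    ("main compressor",  4, 5),
    ("boiler feed pump", 3, 5),
    ("conveyor gearbox", 4, 3),
    ("HVAC fan",         2, 2),
]

# Highest criticality first: these are the candidates for monitoring coverage
ranked = sorted(equipment, key=lambda e: e[1] * e[2], reverse=True)
for name, likelihood, consequence in ranked:
    print(f"{name}: criticality {likelihood * consequence}")
```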
2