Operational efficiency has become a key differentiator in today's highly competitive markets: ensuring higher quality standards, delivering ultra-personalized products and services, and doing all of this within ever shorter time frames is the challenge now faced by every COO.
Global businesses must now design far more agile and resilient supply and production chains, both to keep pace with new trends in consumer behavior (sustainable, green, localised, etc.) and to adapt at short notice to major shocks, such as the health crisis of 2020.
The business purpose must be specified upfront, and this is not easy when addressing operational excellence. Optimizing a production chain can mean making it less costly, quicker to execute, stronger on quality KPIs, more resilient and secure, or better protected against risk; most of the time, it is a combination of all of these. We distinguish two types of approach: on the one hand, optimizing the physical production chain (the transport and transformation of products or goods); on the other, improving intangible production chains, typically for services-based businesses.
Framing the business statement for operational excellence is therefore a complex process. It must capture the stakes of everyone involved, translate them into the right mathematical formulation, and then ensure an optimal solution is found using high-performance algorithms.
Consider, for example, the supply chain of a retirement-residence company responding to a crisis. It may need to reduce supplier dependence by building up large stock levels in intermediary warehouses. This requires careful control over supply delivery routes, balancing both cost and delivery times, possibly across a very large territory.
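The cost/time trade-off in such routing decisions can be made concrete with a small sketch. Everything here is an illustrative assumption (the warehouse names, costs, delivery times and the greedy rule itself), not a real optimization engine: each route is scored with a weighted objective, and each residence is assigned to the best-scoring warehouse that still has stock.

```python
# Illustrative sketch only: a greedy warehouse-to-residence assignment
# balancing transport cost against delivery time. All names and figures
# are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Route:
    warehouse: str
    residence: str
    cost: float   # transport cost per delivery (assumed units)
    days: float   # delivery time in days

def assign(routes, capacity, demand, time_weight=10.0):
    """Greedy assignment: each residence picks the warehouse with
    remaining stock that minimizes score = cost + time_weight * days."""
    stock = dict(capacity)
    plan = {}
    for residence, qty in demand.items():
        options = sorted(
            (r for r in routes
             if r.residence == residence and stock[r.warehouse] >= qty),
            key=lambda r: r.cost + time_weight * r.days,
        )
        if not options:
            raise ValueError(f"no warehouse can serve {residence}")
        best = options[0]
        stock[best.warehouse] -= qty
        plan[residence] = best.warehouse
    return plan

routes = [
    Route("W-North", "R1", cost=120, days=1.0),
    Route("W-South", "R1", cost=90,  days=3.0),
    Route("W-North", "R2", cost=200, days=2.0),
    Route("W-South", "R2", cost=110, days=1.5),
]
plan = assign(routes,
              capacity={"W-North": 50, "W-South": 40},
              demand={"R1": 30, "R2": 35})
print(plan)
```

In practice, a production-grade model would replace this greedy rule with a mathematical optimization (e.g. linear programming), but the structure — routes, capacities, a weighted objective — is the same.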
In this instance, as in any other, node-level specificities have to be considered from the global level (central systems and applications, global or macro processes, etc.) right down to the local one (site or branch capacities, environmental constraints, regulatory constraints, market constraints, operational processes, etc.). Insurance companies, for instance, often find their operations constrained by a variety of complex, intermediated and lengthy processes (risk management, claims handling, regulatory requirements, etc.).
Simplifying the operating model creates a competitive advantage, by reducing internal frictions and costs and giving more direct access to the customer.
Once the business purpose is specified, the data science work begins, and it requires a highly specialized approach. Several technical challenges arise when using data science for operational excellence.
The first is the complexity of the data itself: data sources (applications, IoT devices or machines for industrial chains, pictures for claims management, etc.) are often highly heterogeneous. They may contain evolving patterns, and they can be riddled with errors (which frequently occur when measuring physical quantities or interpreting unstructured data). The data collection and cleansing processes therefore need to be very robust, and the data scientists handling them experienced.
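One common robust-cleansing technique for noisy physical measurements is a median/MAD outlier filter, which tolerates gross sensor errors far better than mean/standard-deviation rules. A minimal sketch, with purely illustrative readings and threshold:

```python
# Illustrative sketch: flag readings whose deviation from the median
# exceeds k times the median absolute deviation (MAD). Data and the
# k=3 threshold are assumptions for the example.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def clean(readings, k=3.0):
    """Return (kept, rejected) readings using a median/MAD outlier rule."""
    med = median(readings)
    mad = median([abs(x - med) for x in readings]) or 1e-9  # avoid zero MAD
    kept = [x for x in readings if abs(x - med) <= k * mad]
    rejected = [x for x in readings if abs(x - med) > k * mad]
    return kept, rejected

# Example: temperature readings with one faulty sensor spike.
kept, rejected = clean([21.2, 21.5, 20.9, 21.1, 98.6, 21.3])
print(kept, rejected)
```

The same idea scales up in real pipelines, where the filter runs per sensor and per time window rather than over a single list.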
The AI models designed must be resilient enough to handle small data, incomplete data and data containing errors; because of this, the solution often lies in a blend of technological modules suitable for chaining together. Some of these models may also need to be embedded into edge systems, with coding techniques adapted to low-power chipsets and asynchronous batch exchanges.
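The idea of chaining small modules so the pipeline stays resilient to incomplete records can be sketched as follows. The stages, fields and weights are illustrative assumptions: each stage is a plain function, missing fields (`None`) are imputed with a per-field median, and a toy linear score stands in for a trained model.

```python
# Illustrative sketch: a chain of small modules (impute -> score) that
# keeps incomplete records usable. Fields, records and weights are
# assumptions for the example.
def impute(records, fields):
    """Replace None with the median of observed values for each field."""
    filled = [dict(r) for r in records]
    for f in fields:
        observed = sorted(r[f] for r in records if r[f] is not None)
        med = observed[len(observed) // 2]
        for r in filled:
            if r[f] is None:
                r[f] = med
    return filled

def score(record, weights):
    """Toy linear score standing in for a trained model."""
    return sum(weights[f] * record[f] for f in weights)

def pipeline(records, weights):
    return [score(r, weights) for r in impute(records, list(weights))]

records = [
    {"lead_time": 2.0, "cost": 100.0},
    {"lead_time": None, "cost": 120.0},  # incomplete record survives
    {"lead_time": 3.0, "cost": None},
]
print(pipeline(records, weights={"lead_time": -1.0, "cost": -0.01}))
```

Each module can be swapped independently (a smarter imputer, an embedded edge-friendly model), which is precisely what makes the chained design resilient.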
Finally, the recommendations produced by the models will have to be applicable in the simplest possible form, yet across a large and potentially heterogeneous estate of systems and applications. One way to achieve this is to work closely with operators (field staff, back-office agents, etc.) and integrate their knowledge and experience, potentially creating dedicated variables in the datasets and models as a result.