How Can Businesses Create a Benchmark AI Framework?
By Ilya Smirnov - Last Updated: October 11th, 2024
If you’ve been following recent technology news, you know that artificial intelligence (AI) is now widely used to solve important practical problems. However, in my experience to date, AI implementation remains fragmented: companies may use it locally to optimize individual business processes, but they fail to consider the enterprise AI landscape as a whole.
For example, my team and I at Usetech have developed various AI-based models for energy consumption control at oil fractionation plants, but not as part of the plant’s end-to-end technological process, so AI plays only a minor role. Likewise, we implemented projects to search for hydrocarbon and ore deposits, but without considering their efficient production.
In computer vision, we have solved many practical problems in recent years. For example, we built models for recognizing granules on a conveyor belt to reduce mill downtime, and we developed algorithms to remotely monitor the condition of power lines.
We implemented AI to automatically generate a contract template for a client with more than 1,000 contractors. We created algorithms to build a dynamic evacuation plan in case of fire smoke or a gas leak in a building, and to model the spread of gas contamination exhausted by moving objects.
This just begins to scratch the surface of what AI can accomplish. But, in general, the range of projects can be divided into three groups according to integration processes:
Projects involving necessary integration of information flows
Projects involving data from several systems, which can be integrated in different ways, for example through an API or a data bus
Projects requiring significant rebuilding of the IT infrastructure, development of a data management strategy, and implementation of modern approaches to data management and storage such as data warehouses (DWH), Hadoop, and data lakes
Most often, we implement the third type of project.
AI workloads impose new requirements on computing and network resources compared with traditional applications and systems. With resource conservation in mind, developers tend to take a fragmented approach to building AI models: architects develop only the solutions needed for individual projects and teams, rather than systems that would serve the larger enterprise IT landscape.
As a result, these disparate systems make it difficult for companies to implement AI best practices, and the structural barriers they create make technological change less effective.
AI Implementation Challenges Across Business Processes
A piecemeal, or fragmented, approach does not guarantee that the resulting AI solution will actually adapt to changes in the business process. As a result, companies have to invest in new AI models that take advantage of all business data, rather than maintain multiple standalone models.
An AI reference architecture that enables complex yet flexible AI adoption combines a multilevel approach with modular AI development to eliminate dependencies on underlying technologies and to ensure that all AI stakeholders can participate in the development process.
The AI architecture should consist of five modules, each of which can be developed independently and has its own users, interfaces, technologies, services, and deployment scenarios. The implementation of each module depends on the company’s technology stack, which allows teams to adopt the best solution for each module rather than depending on a single technology or vendor.
Let’s explore the five modules and the role each plays in successful AI implementation.
AI Infrastructure Reference Model: What Does It Consist Of?
Knowledge base of implemented AI models. This module provides a unified view of all AI artifacts: descriptions of use cases, frameworks, models, source data, and other artifacts. This level is primarily designed to communicate successfully implemented AI solutions to all stakeholders, including end users, testers, data scientists, operations teams, infrastructure teams, and IT managers.
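To make this concrete, here is a minimal sketch of what such a knowledge base might look like as code. The `ModelRecord` fields and class names are illustrative assumptions, not part of any specific product:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    # Unified view of one AI artifact: use case, framework,
    # and source data, visible to all stakeholders.
    name: str
    use_case: str
    framework: str
    source_data: str
    tags: list[str] = field(default_factory=list)

class KnowledgeBase:
    """Single catalog of implemented AI models and their artifacts."""

    def __init__(self):
        self._records: dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        self._records[record.name] = record

    def search(self, tag: str) -> list[ModelRecord]:
        # Lets end users, testers, and IT managers discover
        # already-implemented solutions by topic.
        return [r for r in self._records.values() if tag in r.tags]

kb = KnowledgeBase()
kb.register(ModelRecord("granule-cv", "mill downtime reduction",
                        "pytorch", "conveyor camera feed",
                        tags=["computer-vision"]))
```

In practice this catalog would live in a database or a model registry, but the principle is the same: one searchable index of everything the company has already built.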
AI services and a classifier of implemented AI models by type of processed data (speech, text, computer vision, tabular data) stored in the knowledge base. This module uses a single API and the model classifier to give users access to AI services. A microservice architecture allows each API to provide a limited, well-defined function. This module also makes it possible to use one AI model in multiple applications.
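A simple way to picture this single-API-plus-classifier idea is a dispatcher that routes each request to the right AI microservice based on its data type. The handler names and payloads below are hypothetical:

```python
from typing import Callable

# Registry mapping a data modality to its AI microservice handler.
SERVICES: dict[str, Callable[[object], str]] = {}

def ai_service(data_type: str):
    """Register a microservice handler for one data modality."""
    def wrap(fn):
        SERVICES[data_type] = fn
        return fn
    return wrap

@ai_service("text")
def summarize(payload):
    return f"text-model({payload})"

@ai_service("speech")
def transcribe(payload):
    return f"speech-model({payload})"

def handle(data_type: str, payload):
    # Single API entry point: the classifier picks the model by
    # data type, so one model can serve multiple applications.
    if data_type not in SERVICES:
        raise ValueError(f"no AI service for {data_type!r}")
    return SERVICES[data_type](payload)
```

Each registered function plays the role of one microservice with a limited, well-defined responsibility; callers only ever see `handle`.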
Environment for developing new AI models and customizing existing ones (managing the full life cycle of AI models). This module includes tools and development platforms that standardize the AI life cycle. It collects AI artifacts (versions and metadata) from models for reuse. This level allows data scientists to use machine learning development tools to create and deploy models across the company. In addition, it helps to check model performance and configure the policies of the AI solution.
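The life-cycle management this module describes can be sketched as a small version tracker. The stage names and metadata fields here are assumptions for illustration; real platforms such as model registries offer the same operations:

```python
import datetime

class ModelLifecycle:
    """Tracks versions and metadata of one model across its life cycle."""

    STAGES = ("development", "staging", "production", "archived")

    def __init__(self, name: str):
        self.name = name
        self.versions: list[dict] = []

    def register_version(self, metrics: dict) -> int:
        # Each new version keeps its metadata for later reuse.
        version = len(self.versions) + 1
        self.versions.append({
            "version": version,
            "stage": "development",
            "metrics": metrics,
            "created": datetime.datetime.now(datetime.timezone.utc),
        })
        return version

    def promote(self, version: int, stage: str) -> None:
        # Policy check before a model moves to the next stage.
        if stage not in self.STAGES:
            raise ValueError(f"unknown stage {stage!r}")
        self.versions[version - 1]["stage"] = stage
```

With every version and its metrics recorded, data scientists can compare candidates, promote the best one, and roll back if performance degrades.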
Server and network infrastructure for data storage and for training and executing AI models. This module optimizes infrastructure across multiple vendors, providing sufficient computing power for model training. The models are developed by cross-functional teams, so they remain relevant to departments across the enterprise. This level manages data storage, hosts applications (on-premises and cloud), trains AI models, and executes them.
Center for management and monitoring of implemented AI models. This service ensures consistency and optimization of AI systems across all business functions, collects AI metrics, and compares them with key business performance indicators. This level allows the business to evaluate a model’s effectiveness and take action if the model is overfitted or does not reach its expected goals.
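The core of such a monitoring center is the comparison of collected AI metrics against business KPI targets. A minimal sketch, assuming illustrative metric names and thresholds:

```python
def evaluate_model(ai_metrics: dict, kpi_targets: dict) -> list[str]:
    """Compare collected AI metrics with business KPI targets and
    return alerts when the model misses its expected goals."""
    alerts = []
    for kpi, target in kpi_targets.items():
        value = ai_metrics.get(kpi)
        if value is None:
            alerts.append(f"{kpi}: no metric collected")
        elif value < target:
            alerts.append(f"{kpi}: {value:.2f} below target {target:.2f}")
    return alerts

# Example: the model meets its accuracy goal but misses the
# cost-savings KPI, so the center raises one alert.
alerts = evaluate_model(
    {"accuracy": 0.91, "cost_savings": 0.05},
    {"accuracy": 0.90, "cost_savings": 0.10},
)
```

An alert like this is the trigger for the actions the module describes: retraining, recalibrating, or retiring a model that no longer serves the business.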
Key Takeaways for Successful AI Integration
The role of AI in business is growing thanks to the technology’s ability to reduce costs and improve operational efficiency. In the era of digital transformation, using the best available technologies is no longer a matter of competitive advantage but of survival. Artificial intelligence not only increases human productivity; it can also completely automate many business processes.
By exploring and developing these five modules independently, businesses can avoid duplicating costs and can develop local AI solutions that can later be implemented across the enterprise.
Ilya Smirnov is the Head of the AI/ML Department at Usetech, a visiting lecturer at the Massachusetts Institute of Technology, and the author of more than 50 scientific publications. Smirnov regularly speaks at international conferences and on technology podcasts. He is also the author of a patented technology for trajectory analysis of vector 3D seismic fields.