In my previous article, https://digitaltesseract.com/designing-a-continuous-learning-framework/, we walked through a small use case to understand how continuous learning works in an automated machine learning pipeline.
Here, we can automate the ML production pipeline to retrain the models with new data. Depending on our use case, retraining can be triggered in several ways:
- On demand: An ad-hoc, manual execution of the pipeline based on our need; no scheduler runs the pipeline on a periodic basis.
- On a schedule: The pipeline runs on a daily, weekly, or monthly basis, depending on how frequently the data patterns change and how expensive it is to retrain the models.
- On availability of new training data: Pipeline execution happens when new data is collected and made available in the source databases.
- On model performance degradation: The model is retrained when there is noticeable performance degradation.
- On significant changes in the data distributions: The model is retrained when we notice significant changes in the distributions of the features used to perform the prediction (see the sketch after this list).
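As a minimal illustration of how these triggers might be combined, the Python sketch below wires a schedule check, a new-data check, a performance check, and a simple drift check into one decision function. The thresholds, function name, and metrics are hypothetical and would need to be tuned for your own pipeline.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- tune these for your own use case.
RETRAIN_INTERVAL = timedelta(days=7)   # "on a schedule"
MIN_ACCURACY = 0.90                    # "on model performance degradation"
MAX_DRIFT_SCORE = 0.2                  # "on significant changes in the data distributions"

def should_retrain(last_trained_at: datetime,
                   new_rows_available: int,
                   live_accuracy: float,
                   drift_score: float) -> bool:
    """Decide whether to trigger the retraining pipeline."""
    if datetime.utcnow() - last_trained_at > RETRAIN_INTERVAL:
        return True                    # scheduled retraining
    if new_rows_available > 0:
        return True                    # new training data is available
    if live_accuracy < MIN_ACCURACY:
        return True                    # noticeable performance degradation
    if drift_score > MAX_DRIFT_SCORE:
        return True                    # significant change in feature distributions
    return False

# Example: a scheduler or monitoring job could call this periodically.
if should_retrain(last_trained_at=datetime(2021, 1, 1),
                  new_rows_available=0,
                  live_accuracy=0.87,
                  drift_score=0.05):
    print("Triggering ML pipeline execution...")
```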
Challenges:
What happens when new implementations of the pipeline, built around new ML ideas, aren't deployed frequently?
The setup described above is suitable when we deploy new models based on new data rather than on new ML ideas.
In a real-world ML environment, however, we need to try out new ideas with new models and algorithms and rapidly deploy new implementations of the ML components. For that, we need a CI/CD setup to automate the build, test, and deployment of ML pipelines.
For rapid and reliable updates of the pipelines in production, we need a robust automated CI/CD (Continuous Integration/Continuous Deployment) system, which helps data scientists explore new ideas around feature engineering, model architecture, and hyperparameters, and can build, test, and deploy the new pipeline components to the target environment.
The following diagram shows the implementation of the ML pipeline using CI/CD, generally called MLOps, which has the characteristics of the automated ML pipeline setup plus automated CI/CD routines.
This MLOps setup includes the following components:
- Source control
- Test and build services
- Deployment services
- Model registry
- Feature store
- ML metadata store
- ML pipeline orchestrator
Phases of MLOps:
The following diagram is a process flow of the ML CI/CD automation pipeline:

The pipeline consists of the following stages:
- Development and experimentation: We iteratively build, train, evaluate, and validate new ML algorithms, with the experiment steps orchestrated. The output of this stage is the source code of the ML pipeline steps, which is then pushed to a source repository.
- Pipeline continuous integration: We build the source code and run various tests. The outputs of this stage are pipeline components (packages, executables, and artifacts) to be deployed in a later stage.
- Pipeline continuous delivery: We deploy the artifacts produced by the CI stage to the target environment. The output of this stage is a deployed pipeline with the new implementation of the model.
- Automated triggering: The pipeline is automatically executed in production based on a schedule or in response to a trigger. The output of this stage is a trained model that is pushed to the model registry.
- Model continuous delivery: We serve the trained model as a prediction service. The output of this stage is a deployed model prediction service.
- Monitoring: We collect statistics on the model performance based on live data. The output of this stage is a trigger to execute the pipeline or to execute a new experiment cycle.
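To make these stages more concrete, here is a minimal, orchestrator-agnostic sketch of the pipeline run that a trigger would execute. It uses scikit-learn and a synthetic dataset purely as stand-ins; the accuracy threshold and the "push to registry" step are hypothetical placeholders for your own components.

```python
# Minimal end-to-end sketch of the automated pipeline run that a trigger
# would execute in production. Dataset, model, and threshold are stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def run_training_pipeline() -> LogisticRegression:
    # Data extraction + preparation (stand-in for reading from source databases)
    X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Model training
    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

    # Evaluation + validation gate before the model is pushed to the registry
    accuracy = accuracy_score(y_test, model.predict(X_test))
    if accuracy < 0.80:   # hypothetical performance target
        raise RuntimeError(f"Model rejected: accuracy={accuracy:.3f}")

    print(f"Model validated (accuracy={accuracy:.3f}); pushing to model registry...")
    return model

if __name__ == "__main__":
    run_training_pipeline()
```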
Continuous Integration: In this stage, the pipeline and its components are built, tested, and packaged when new code is committed or pushed to the source code repository. Besides building packages, container images, and executables, the CI process can include the following tests (sketched after this list):
- Unit testing the feature engineering logic, the different methods implemented in the model, and the model itself.
- Testing that each component in the pipeline produces the expected artifacts.
- Testing integration between pipeline components.
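As an example of what such CI checks can look like, the pytest-style tests below exercise a toy feature-engineering helper. Both the helper and the expectations are illustrative, not taken from a real pipeline.

```python
# Sketch of CI-stage unit tests (pytest style) for a hypothetical
# feature-engineering helper `scale_to_unit_range`.

def scale_to_unit_range(values):
    """Toy feature-engineering step: min-max scale a list of numbers to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def test_scaled_values_stay_in_expected_range():
    scaled = scale_to_unit_range([3, 7, 10])
    assert min(scaled) == 0.0 and max(scaled) == 1.0

def test_constant_feature_does_not_divide_by_zero():
    assert scale_to_unit_range([5, 5, 5]) == [0.0, 0.0, 0.0]

def test_component_produces_expected_artifact_shape():
    # "Testing that each component in the pipeline produces the expected artifacts"
    scaled = scale_to_unit_range([1, 2, 3, 4])
    assert len(scaled) == 4
```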
Continuous Delivery: In this stage, our system continuously delivers new pipeline implementations to the target environment, which in turn delivers a prediction service for the newly trained model. For rapid and reliable continuous delivery of pipelines and models, we should consider the following (a smoke-test sketch follows this list):
- Verifying the compatibility of the model with the target infrastructure before deploying the model.
- Testing the prediction service by calling the service API with the expected inputs and making sure that we get the response we expect. This includes testing prediction service performance, which involves load testing the service to capture metrics such as model latency.
- Verifying that models meet the predictive performance targets before they are deployed.
- Automated deployment to a test environment, for example, a deployment that is triggered by pushing code to the development branch.
- Semi-automated deployment to a pre-production environment, for example, a deployment that is triggered by merging code to the main branch after reviewers approve the changes.
- Manual deployment to a production environment after several successful runs of the pipeline on the pre-production environment.
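A simple version of the prediction-service test mentioned above could look like the following; the endpoint URL, request payload, response field, and latency budget are all assumptions made for this sketch, and a real load test would use a dedicated tool.

```python
# Sketch of a continuous-delivery smoke test against a deployed prediction
# service in the test environment. URL, payload, and budget are hypothetical.
import time
import requests

PREDICT_URL = "https://staging.example.com/predict"   # hypothetical test endpoint
LATENCY_BUDGET_SECONDS = 0.5                          # hypothetical latency budget

def smoke_test_prediction_service():
    payload = {"features": [0.1, 0.2, 0.3]}           # expected input format

    start = time.perf_counter()
    response = requests.post(PREDICT_URL, json=payload, timeout=5)
    latency = time.perf_counter() - start

    # The service answers successfully and returns the field we expect.
    assert response.status_code == 200
    assert "prediction" in response.json()

    # Very rough latency check; proper load testing captures much more.
    assert latency < LATENCY_BUDGET_SECONDS, f"Too slow: {latency:.3f}s"
```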
Continuous Training (CT): CT is the process of continuously, that is automatically, training the model. It covers all steps of the model lifecycle, from data ingestion to tracking its performance in production, and consists of the following steps:
- ML specialists create the training pipeline, pre-process new features, monitor the training process, and fix problems.
- Ops specialists test components of the pipeline and deploy them into a target environment.
The model training pipeline, a key component of the continuous training process and of the entire MLOps workflow, performs frequent model training and retraining (a minimal retraining cycle is sketched below).
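One continuous-training cycle often boils down to a champion/challenger comparison: retrain on the newest data and promote the new model only if it beats the one currently in production. The sketch below illustrates that decision with stub metrics; every function and value in it is hypothetical.

```python
# Sketch of one continuous-training cycle: retrain a challenger model and
# promote it only if it beats the champion in production. Stubs throughout.
import random

def train_challenger() -> float:
    """Stub: retrain on the newest data and return its validation accuracy."""
    return round(random.uniform(0.85, 0.95), 3)

def production_accuracy() -> float:
    """Stub: read the current production model's accuracy from the metadata store."""
    return 0.90

def continuous_training_cycle() -> None:
    challenger = train_challenger()
    champion = production_accuracy()
    if challenger > champion:
        print(f"Promoting challenger ({challenger} > {champion}); pushing to model registry.")
    else:
        print(f"Keeping champion ({champion} >= {challenger}).")

if __name__ == "__main__":
    continuous_training_cycle()
```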
Model Registry: In this phase, ML specialists share models and collaborate with Ops specialists to improve model management.
When the right model for production is found, it is pushed to a model registry: a centralized hub capturing all metadata for published models, such as
- Identifier,
- Name,
- Version,
- The date this version was added,
- The remote path to the serialized model,
- The model’s stage of deployment (development, production, archived, etc.),
- Information on datasets used for training,
- Runtime metrics,
- Governance data for auditing goals in highly regulated industries (like healthcare or finance), and
- Other additional metadata depending on the requirements of your system and business.
The registry acts as a communication layer between research and production environments, providing a deployed model with the information it needs at runtime.
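A registry entry for one model version could be modelled as a simple record like the sketch below. The field names mirror the metadata listed above, while the example values and the in-memory dictionary standing in for the registry service are hypothetical.

```python
# Sketch of a model registry record capturing the metadata listed above.
# A real setup would use a registry service rather than this in-memory dict.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RegisteredModel:
    identifier: str
    name: str
    version: int
    added_at: datetime
    artifact_path: str                 # remote path to the serialized model
    stage: str                         # "development", "production", "archived", ...
    training_datasets: list            # information on datasets used for training
    runtime_metrics: dict
    governance: dict = field(default_factory=dict)   # auditing info for regulated industries

# Example entry pushed to the registry after a successful pipeline run.
registry = {}
entry = RegisteredModel(
    identifier="churn-model-7f3a",
    name="customer-churn-classifier",
    version=4,
    added_at=datetime.utcnow(),
    artifact_path="s3://models/churn/v4/model.pkl",
    stage="development",
    training_datasets=["customers_2021_q2"],
    runtime_metrics={"accuracy": 0.91, "latency_ms": 35},
)
registry[(entry.name, entry.version)] = entry
```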
Model Serving: Ops specialists control model deployment, while ML specialists initiate testing in production. The latest approach, called Model-as-a-Service, is currently the most popular of all, as it simplifies deployment by separating the machine learning part from the application code, so we can update a model version without redeploying the application.
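A bare-bones Model-as-a-Service setup can be as small as the Flask sketch below: the model is loaded from a serialized artifact and exposed behind a /predict endpoint, so rolling out a new version only requires redeploying this service, not the calling application. The file name, payload shape, and response format are assumptions made for the sketch.

```python
# Minimal Model-as-a-Service sketch: a small web service wraps the model
# behind a /predict endpoint, decoupled from the application code.
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

with open("model.pkl", "rb") as f:     # hypothetical serialized model from the registry
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```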
Conclusion: Implementing ML in a production environment doesn't end with deploying our model as an API for prediction; it continues with deploying an ML pipeline that can automate the retraining and deployment of new models. A CI/CD system enables us to automatically test and deploy new ML pipeline implementations, and lets us cope with rapid changes in our data and business environment.