Ant Colony Optimization Algorithm
τ_ij ← (1 − ρ) τ_ij + Σ_k Δτ_ij^k
Where:
τ_ij is the pheromone level on edge (i, j),
ρ is the pheromone evaporation rate,
Δτ_ij^k is the pheromone deposited by ant k (typically 1/L_k for a tour of length L_k).
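As a concrete illustration, here is a minimal ACO sketch for a toy travelling-salesman instance. The distance matrix, parameter values, and the function name `aco_tsp` are illustrative choices, not part of any particular library:

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Return the best tour found and its length for a symmetric TSP."""
    random.seed(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone levels
    eta = [[0 if i == j else 1.0 / dist[i][j] for j in range(n)]
           for i in range(n)]            # heuristic desirability (1/distance)
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Transition weights combine pheromone and heuristic terms
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in cand]
                tour.append(random.choices(cand, weights=weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then deposit pheromone proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

On a four-city instance whose optimal tour is the perimeter, the colony converges quickly because pheromone accumulates on the shortest edges.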
Hardware considerations:
Software considerations:
Service-Oriented Architecture (SOA) is a design pattern in which services are provided to other components through a communication protocol over a network.
Microservices are an architectural style that structures an application as a collection of small, autonomous services. Each microservice is self-contained and implements a business capability.
The concept of "Everything as a Service" (XaaS) extends the principles of SOA and microservices by offering comprehensive services over the internet. XaaS encompasses a wide range of services, including infrastructure, platforms, and software.
AI as a Service (AIaaS) enables us to access and expose AI capabilities over the internet. We can integrate AI tools such as machine learning models, natural language processing, and computer vision into our applications by leveraging SOA and microservices principles.
from flask import Flask, request, jsonify

app = Flask(__name__)

class SentimentAnalysisService:
    def __init__(self, model):
        self.model = model

    def analyze_sentiment(self, text):
        # The model returns a numeric score; thresholds map it to a label
        sentiment_score = self.model.predict(text)
        if sentiment_score > 0.5:
            return "Positive"
        elif sentiment_score < -0.5:
            return "Negative"
        else:
            return "Neutral"

...

# service is assumed to be a SentimentAnalysisService created at start-up
@app.route('/analyze', methods=['POST'])
def analyze():
    data = request.get_json()
    text_to_analyze = data.get('text', '')
    sentiment = service.analyze_sentiment(text_to_analyze)
    return jsonify({'sentiment': sentiment})

...
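To see the service in action without deploying it, the endpoint can be exercised in-process with Flask's test client. The `StubModel` below is a hypothetical stand-in for a trained sentiment model; everything else mirrors the service sketched above:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

class StubModel:
    """Hypothetical stand-in for a trained sentiment model."""
    def predict(self, text):
        return 0.9 if "great" in text.lower() else -0.9

class SentimentAnalysisService:
    def __init__(self, model):
        self.model = model

    def analyze_sentiment(self, text):
        score = self.model.predict(text)
        if score > 0.5:
            return "Positive"
        elif score < -0.5:
            return "Negative"
        return "Neutral"

service = SentimentAnalysisService(StubModel())

@app.route('/analyze', methods=['POST'])
def analyze():
    data = request.get_json()
    return jsonify({'sentiment': service.analyze_sentiment(data.get('text', ''))})

# Exercise the endpoint in-process, without starting a real server
client = app.test_client()
resp = client.post('/analyze', json={'text': 'This course is great!'})
print(resp.get_json()['sentiment'])  # Positive
```

In production the same request would arrive over HTTP (e.g. via `curl` or any client library), which is exactly what makes the AI capability consumable as a service.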
MLOps is a set of practices and tools that support deploying and maintaining ML models in production reliably and efficiently. The goal is to automate and streamline the ML pipeline. These practices and tools cover all pipeline stages, from data collection, model training, and deployment to monitoring and governance. We aim to ensure that ML models are robust, scalable, and continuously deliver value.
import mlflow
import logging
from datetime import datetime

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class ModelMonitor:
    def __init__(self, model_name):
        self.model_name = model_name
        mlflow.set_tracking_uri("http://localhost:5000")

    def log_prediction(self, input_data, prediction,
                       actual=None, model_version="1.0"):
        """Log model predictions for monitoring"""
        with mlflow.start_run():
            mlflow.log_params({
                "input_size": len(input_data),
                "model_version": model_version,
                "timestamp": datetime.now().isoformat()
            })
            mlflow.log_metric("prediction", prediction)
            if actual is not None:
                mlflow.log_metric("actual", actual)
                mlflow.log_metric("error", abs(prediction - actual))
            logger.info(f"Prediction logged: {prediction}")

    def monitor_drift(self, current_stats, baseline_stats):
        """Monitor for data drift"""
        # calculate_drift is assumed to be defined elsewhere, e.g. a
        # statistical distance between the two sets of feature statistics
        drift_score = self.calculate_drift(current_stats, baseline_stats)
        mlflow.log_metric("drift_score", drift_score)
        if drift_score > 0.1:  # Threshold
            logger.warning(f"Data drift detected: {drift_score}")
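The monitor delegates to a `calculate_drift` helper that the snippet leaves undefined. One common choice for such a score is the Population Stability Index (PSI); the sketch below is an illustration, where the bin count, the epsilon, and the conventional 0.1 alert threshold are assumptions rather than a fixed standard:

```python
import math

def psi(baseline, current, n_bins=10):
    """Population Stability Index between two samples of a numeric feature.

    Bin edges come from the baseline sample; a small epsilon avoids log(0).
    """
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * k / n_bins for k in range(1, n_bins)]

    def fractions(sample):
        counts = [0] * n_bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index via edge count
        return [(c + 1e-6) / (len(sample) + 1e-6 * n_bins) for c in counts]

    p, q = fractions(baseline), fractions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

same = [i / 100 for i in range(100)]
shifted = [x + 0.5 for x in same]
print(psi(same, same) < 0.1, psi(same, shifted) > 0.1)  # True True
```

A PSI near zero means the two distributions match; values above roughly 0.1 are often treated as a drift warning, which is consistent with the threshold used in `monitor_drift` above.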
Focus on Operations
The Data Dichotomy: “While data-driven systems are about exposing data, service-oriented architectures are about hiding data.” (Stopford, 2016)
We need to design systems prioritising data!
Data-Oriented Architectures
Data-First Systems
Data-Oriented Architectures
Prioritise Decentralisation
Data-Oriented Architectures
Openness
“It seems to me what is called for is an exquisite balance between two conflicting needs: the most skeptical scrutiny of all hypotheses that are served up to us and at the same time a great openness to new ideas. Obviously those two modes of thought are in some tension. But if you are able to exercise only one of these modes, whichever one it is, you’re in deep trouble.” (The Burden of Skepticism, Sagan, 1987)
A systems engineering approach is better equipped than current ML practice to facilitate the adoption of this technology, because it prioritises the problem and its context before any other aspect.