⏱️ Duration: 5 Hours
🎯 Learning Objectives
- Understand Azure AI Services ecosystem
- Learn OpenAI API basics and authentication
- Make API calls to AI services
- Understand AI integration in DevOps workflows
📚 Core Concepts (2 Hours)
Azure AI Services Overview
Azure AI Services:
┌──────────────────────────────────────────────┐
│ Azure OpenAI Service                         │
│ • GPT-4, GPT-3.5 models                      │
│ • DALL-E for image generation                │
│ • Enterprise security and compliance         │
├──────────────────────────────────────────────┤
│ Azure Cognitive Services                     │
│ • Vision: Image analysis, OCR                │
│ • Speech: Speech-to-text, text-to-speech     │
│ • Language: Translation, sentiment           │
│ • Decision: Personalizer, anomaly detection  │
├──────────────────────────────────────────────┤
│ Azure Machine Learning                       │
│ • Custom model training                      │
│ • MLOps pipelines                            │
│ • Model deployment                           │
└──────────────────────────────────────────────┘
OpenAI API Basics
# OpenAI API Structure
Base URL: https://api.openai.com/v1
Main Endpoints:
• /chat/completions - Chat models (GPT-4, GPT-3.5)
• /completions - Legacy text completion
• /embeddings - Text embeddings
• /images/generations - DALL-E image generation
Authentication:
Header: Authorization: Bearer YOUR_API_KEY
Rate Limits:
• Requests per minute (RPM)
• Tokens per minute (TPM)
• Varies by model and tier
API Authentication Setup
# Set API key as environment variable (NEVER hardcode!)
export OPENAI_API_KEY="sk-your-api-key-here"
# Or for Azure OpenAI:
export AZURE_OPENAI_API_KEY="your-azure-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
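Because of the rate limits listed earlier (RPM/TPM), scripted API calls should expect occasional HTTP 429 responses no matter how the key is supplied. A minimal retry-with-exponential-backoff sketch — the `with_backoff` helper and the delay schedule are illustrative, not part of the openai SDK (in real code you would catch `openai.RateLimitError` rather than a bare `Exception`):

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Call `call()`, retrying on failures with exponential backoff.

    Delays grow as base_delay * 2**attempt plus a little jitter,
    the usual pattern for handling 429 rate-limit responses.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # illustrative; catch openai.RateLimitError in real code
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)

# Demo with a stand-in callable that fails twice, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # → ok
```

Wrapping each `client.chat.completions.create(...)` call this way keeps lab scripts from dying on a transient 429.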
# In Python (openai >= 1.0 client style, matching the examples below)
import os
from openai import OpenAI
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
# For Azure OpenAI, use the dedicated client
from openai import AzureOpenAI
azure_client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_version="2024-02-15-preview",
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
)
Chat Completions API
# Python example using openai library
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful DevOps assistant."},
        {"role": "user", "content": "Explain what a Dockerfile is in one sentence."}
    ],
    max_tokens=100,
    temperature=0.7
)
print(response.choices[0].message.content)
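The call above returns more than the message text — the usage block is what you are billed against. A hypothetical `summarize_response` helper that pulls the reply and token counts out of a response decoded as a plain dict (field names follow the API's JSON response shape):

```python
def summarize_response(resp: dict) -> dict:
    """Extract the assistant text and token usage from a decoded
    chat.completions JSON response."""
    choice = resp["choices"][0]
    return {
        "content": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": resp["usage"]["total_tokens"],
    }

# Sample response dict matching the documented structure:
sample = {
    "choices": [{
        "message": {"role": "assistant", "content": "A Dockerfile is a text file..."},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 25, "completion_tokens": 30, "total_tokens": 55},
}
print(summarize_response(sample))
```

Logging `total_tokens` per call is the simplest way to keep an eye on costs during the labs.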
# Response structure:
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "A Dockerfile is a text file..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 30,
    "total_tokens": 55
  }
}
cURL Examples
# Chat completion with cURL
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "What is CI/CD?"}
    ],
    "max_tokens": 150
  }'
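In scripts you usually want just the reply text, not the whole JSON envelope. One portable way to extract it — avoiding a jq dependency — is to save the response and parse it with python3. The sample file below is an illustrative stand-in for real curl output:

```shell
# Save a response (a trimmed sample standing in for real curl -o output)
cat > response.json << 'EOF'
{"choices": [{"message": {"role": "assistant", "content": "CI/CD automates build, test, and deploy."}}]}
EOF

# Print only the assistant's text
python3 -c "import json; print(json.load(open('response.json'))['choices'][0]['message']['content'])"
```

With real calls, add `-o response.json` to the curl command above and run the same one-liner.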
# List available models
curl https://api.openai.com/v1/models \
-H "Authorization: Bearer $OPENAI_API_KEY"
# Azure OpenAI example
curl "$AZURE_OPENAI_ENDPOINT/openai/deployments/gpt-35-turbo/chat/completions?api-version=2024-02-15-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
AI in DevOps Use Cases
DevOps + AI Integration:
┌──────────────────────────────────────────────┐
│ Code Review                                  │
│ • AI-powered code suggestions                │
│ • Security vulnerability detection           │
│ • Best practice recommendations              │
├──────────────────────────────────────────────┤
│ Documentation                                │
│ • Auto-generate README files                 │
│ • Create API documentation                   │
│ • Write commit messages                      │
├──────────────────────────────────────────────┤
│ Incident Response                            │
│ • Log analysis and anomaly detection         │
│ • Root cause analysis                        │
│ • Suggested remediation steps                │
├──────────────────────────────────────────────┤
│ Infrastructure                               │
│ • Generate Terraform/CloudFormation          │
│ • Optimize resource allocation               │
│ • Cost analysis and recommendations          │
└──────────────────────────────────────────────┘
🔬 Hands-on Lab (2.5 Hours)
Lab 1: Set Up OpenAI Environment
# Create project directory
mkdir -p ~/ai-labs/openai-intro
cd ~/ai-labs/openai-intro
# Create virtual environment
python3 -m venv venv
source venv/bin/activate
# Install OpenAI library
pip install openai python-dotenv
# Create .env file (add to .gitignore!)
cat > .env << 'EOF'
OPENAI_API_KEY=your-api-key-here
EOF
# Add to .gitignore
echo ".env" >> .gitignore
echo "venv/" >> .gitignore
Lab 2: Basic Chat Completion Script
# Create chat.py
cat > chat.py << 'EOF'
import os
from dotenv import load_dotenv
from openai import OpenAI
# Load environment variables
load_dotenv()
# Initialize client
client = OpenAI()
def chat_with_ai(prompt, system_message="You are a helpful assistant."):
    """Send a prompt to OpenAI and get a response."""
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": prompt}
            ],
            max_tokens=500,
            temperature=0.7
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

if __name__ == "__main__":
    # Example: DevOps assistant
    system = "You are a DevOps expert. Give concise, practical answers."
    questions = [
        "What are the key differences between Docker and Kubernetes?",
        "How do I set up a basic CI/CD pipeline with GitHub Actions?",
        "Explain blue-green deployment in 3 sentences."
    ]
    for question in questions:
        print(f"\n❓ Question: {question}")
        print(f"🤖 Answer: {chat_with_ai(question, system)}")
        print("-" * 50)
EOF
# Run the script
python chat.py
Lab 3: DevOps Helper Tool
# Create devops_helper.py
cat > devops_helper.py << 'EOF'
import os
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
client = OpenAI()
def generate_dockerfile(app_type, requirements):
    """Generate a Dockerfile based on app type."""
    prompt = f"""Generate a production-ready Dockerfile for a {app_type} application.
Requirements: {requirements}
Include best practices like multi-stage builds, non-root user, and health checks."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a Docker expert. Output only the Dockerfile content."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=800
    )
    return response.choices[0].message.content

def explain_error(error_message):
    """Explain an error and suggest fixes."""
    prompt = f"""Explain this error and provide a solution:
{error_message}"""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a DevOps troubleshooting expert. Be concise and practical."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=400
    )
    return response.choices[0].message.content

def generate_terraform(resource_description):
    """Generate Terraform code for a resource."""
    prompt = f"""Generate Terraform code for: {resource_description}
Include variables, outputs, and follow best practices."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a Terraform expert. Output only valid HCL code with comments."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=800
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("=" * 50)
    print("🐳 DOCKERFILE GENERATOR")
    print("=" * 50)
    dockerfile = generate_dockerfile("Python Flask", "Python 3.11, pip requirements.txt, runs on port 5000")
    print(dockerfile)
    print("\n" + "=" * 50)
    print("🔧 TERRAFORM GENERATOR")
    print("=" * 50)
    terraform = generate_terraform("AWS S3 bucket with versioning and encryption")
    print(terraform)
    print("\n" + "=" * 50)
    print("🔍 ERROR EXPLAINER")
    print("=" * 50)
    error = "Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name is not available"
    explanation = explain_error(error)
    print(explanation)
EOF
# Run the helper
python devops_helper.py
Lab 4: Interactive CLI Tool
# Create interactive_cli.py
cat > interactive_cli.py << 'EOF'
import os
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
client = OpenAI()
def devops_chat():
    """Interactive DevOps assistant."""
    print("🤖 DevOps AI Assistant")
    print("Type 'quit' to exit, 'clear' to reset conversation")
    print("-" * 50)
    messages = [
        {"role": "system", "content": """You are an expert DevOps engineer assistant.
You help with Docker, Kubernetes, Terraform, CI/CD, AWS, Azure, and Linux.
Provide practical, concise answers with code examples when appropriate."""}
    ]
    while True:
        user_input = input("\n👤 You: ").strip()
        if user_input.lower() == 'quit':
            print("Goodbye! 👋")
            break
        elif user_input.lower() == 'clear':
            messages = messages[:1]  # Keep system message
            print("Conversation cleared.")
            continue
        elif not user_input:
            continue
        messages.append({"role": "user", "content": user_input})
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=messages,
                max_tokens=800,
                temperature=0.7
            )
            assistant_message = response.choices[0].message.content
            messages.append({"role": "assistant", "content": assistant_message})
            print(f"\n🤖 Assistant: {assistant_message}")
        except Exception as e:
            print(f"\n❌ Error: {str(e)}")

if __name__ == "__main__":
    devops_chat()
EOF
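One gap in interactive_cli.py: `messages` grows without bound, so a long session will eventually exceed the model's context window (and burn tokens on stale history). A sketch of capping the history while always keeping the system message — the `trim_history` helper and the turn limit are illustrative choices, not part of any SDK:

```python
def trim_history(messages, max_turns=10):
    """Keep the system message plus only the most recent turns.

    messages[0] is assumed to be the system prompt; everything after it
    is alternating user/assistant turns, of which the last max_turns
    are kept.
    """
    if len(messages) <= 1 + max_turns:
        return messages
    return messages[:1] + messages[-max_turns:]

# Build a long fake conversation: 1 system message + 15 Q/A pairs
history = [{"role": "system", "content": "You are a DevOps assistant."}]
for i in range(15):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=6)
print(len(trimmed))  # → 7 (system message + last 6 turns)
```

In the chat loop you would call `messages = trim_history(messages)` right before each `client.chat.completions.create(...)`.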
# Run interactive CLI
python interactive_cli.py
✅ Day 5 Checklist
- Understand Azure AI Services ecosystem
- Know OpenAI API authentication and structure
- Can make chat completion API calls
- Understand AI use cases in DevOps
- Created DevOps helper tool with AI
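A quick way to verify the setup items on this checklist before running any lab: a hypothetical preflight script that only inspects local configuration and makes no API calls (the checks mirror the environment variables used in this section):

```python
import os

def preflight() -> list:
    """Return a list of setup problems; an empty list means ready to go."""
    problems = []
    if not (os.getenv("OPENAI_API_KEY") or os.getenv("AZURE_OPENAI_API_KEY")):
        problems.append("No OPENAI_API_KEY or AZURE_OPENAI_API_KEY set")
    if os.getenv("AZURE_OPENAI_API_KEY") and not os.getenv("AZURE_OPENAI_ENDPOINT"):
        problems.append("AZURE_OPENAI_API_KEY set but AZURE_OPENAI_ENDPOINT missing")
    return problems

for line in preflight() or ["Environment looks ready."]:
    print(line)
```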