Administration Guide
Welcome to the AtlasML Administration Guide! This guide provides comprehensive documentation for deploying, configuring, and maintaining AtlasML in production environments.
About AtlasML
AtlasML is a FastAPI-based microservice that provides AI-powered competency management features for the Artemis learning platform. It depends on a centralized Weaviate vector database and on Azure OpenAI for embedding generation, and it is deployed exclusively via Docker Compose for production workloads.
Prerequisites
Before deploying AtlasML, ensure you have:
- Docker and Docker Compose installed on your server
- A centralized Weaviate instance (see Weaviate Setup Guide)
- Azure OpenAI API credentials for embedding generation
- API keys for securing AtlasML endpoints
- Basic knowledge of Docker, environment variables, and reverse proxies
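The Docker prerequisites can be verified from a shell before you start. The commands below are standard Docker CLI calls (Compose v2 ships as the `docker compose` plugin):

```shell
# Verify Docker Engine is installed and on the PATH
docker --version || echo "Docker is not installed"

# Verify the Compose v2 plugin is available
docker compose version || echo "Docker Compose plugin is not installed"
```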
Quick Start
Follow this checklist to deploy AtlasML to production:
Deployment Checklist:
- Set up the centralized Weaviate instance with Traefik and API key authentication (required prerequisite)
- Follow the Installation Guide to deploy AtlasML with Docker Compose
- Configure all required environment variables using the Configuration Reference
- Review Deployment Best Practices for production hardening and CI/CD setup
- Set up Monitoring with health checks, logging, and optional Sentry integration
- Test your deployment and refer to Troubleshooting if issues arise
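As a rough sketch, the checklist above typically maps to a command sequence like the following. The service directory and the `.env.example` file name are assumptions, not confirmed names; follow the Installation Guide for the authoritative steps:

```shell
# Hypothetical deployment flow -- adjust paths and file names to your setup
git clone https://github.com/ls1intum/edutelligence.git
cd edutelligence/atlasml    # service directory name is an assumption
cp .env.example .env        # then fill in Weaviate, Azure OpenAI, and API key values
docker compose up -d        # start the service in the background
docker compose ps           # confirm the container is up
```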
Documentation Sections
Installation
Step-by-step guide to deploy AtlasML using Docker Compose, including Weaviate setup, environment configuration, and initial deployment.
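For orientation, a minimal Compose sketch might look like the following. The image name, port, and file layout shown here are assumptions, not values confirmed by this guide; take the authoritative compose file from the Installation page:

```yaml
# docker-compose.yml -- illustrative only; image name and port are assumptions
services:
  atlasml:
    image: ghcr.io/ls1intum/edutelligence/atlasml:latest  # assumed image name
    restart: unless-stopped
    ports:
      - "8000:8000"    # common FastAPI port; confirm against your deployment
    env_file:
      - .env           # Weaviate, Azure OpenAI, and API key settings
```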
Configuration
Complete reference for all environment variables, including Weaviate connection settings, Azure OpenAI credentials, API keys, and optional Sentry integration.
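For illustration, an environment file could group the settings this reference covers. Every variable name below is hypothetical and must be checked against the actual Configuration Reference before use:

```shell
# .env -- hypothetical variable names for illustration only
WEAVIATE_HOST=weaviate.example.org           # centralized Weaviate instance
WEAVIATE_API_KEY=<your-weaviate-key>
AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com
AZURE_OPENAI_API_KEY=<your-azure-key>
ATLASML_API_KEY=<key-artemis-uses>           # secures AtlasML endpoints
SENTRY_DSN=                                  # optional; empty disables Sentry
```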
Deployment
Production best practices, CI/CD workflows with GitHub Actions, secrets management, and deployment strategies.
Monitoring
Health check endpoints, log management, container monitoring, and Sentry error tracking for production observability.
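A basic liveness probe can poll the service over HTTP. The `/health` path below is an assumption, so substitute the actual health endpoint documented for AtlasML; `curl -f` makes non-2xx responses count as failures:

```shell
# Poll the (assumed) health endpoint; no response or non-2xx indicates a problem
curl -fsS http://localhost:8000/health || echo "AtlasML health check failed"

# One-shot snapshot of container CPU and memory usage
docker stats --no-stream
```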
Troubleshooting
Common issues and solutions for startup failures, Weaviate connection problems, API errors, and performance issues.
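When diagnosing startup or connection failures, the standard Docker Compose commands are a sensible first step (the service name `atlasml` is an assumption; use the name from your compose file):

```shell
docker compose ps                 # is the container running, restarting, or exited?
docker compose logs -f atlasml    # follow logs for stack traces and connection errors
docker compose config             # check that environment variables resolved as expected
docker compose restart atlasml    # restart after fixing the configuration
```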
Architecture Overview
AtlasML follows a microservice architecture:
- AtlasML Service: FastAPI application serving REST endpoints
- Centralized Weaviate: Shared vector database with HTTPS and API key authentication
- Azure OpenAI: Embedding generation service
- Artemis: Primary client consuming AtlasML's competency management features
Communication is unidirectional: Artemis calls AtlasML, and AtlasML never initiates requests back to Artemis.
Support
- GitHub Repository: ls1intum/edutelligence
- Issues: Report bugs and request features on GitHub Issues
- Weaviate Setup: Weaviate README