Train anywhere. Ship with discipline.
8 weeks. 48 hours live. A Databricks-native MLOps curriculum covering MLflow experiment tracking, Unity Catalog model registry, Feature Store and Feature Serving, Model Serving architectures, Databricks Asset Bundles, CI/CD, Inference Tables, and Lakehouse Monitoring. You leave with a deployed, governed, monitored ML system you can walk through in an interview.
✓ Live sessions Saturdays & Sundays · 8:00 PM – 11:00 PM WAT · Cohort 1 · Enrolling
Training a model is the easy part. Shipping it, governing it, and knowing when it breaks is where data science teams get stuck.
Models live in notebooks. When it's time to deploy, everything breaks — environment drift, missing features, untracked data. The gap from notebook to production eats months.
Nobody can say which data trained the prod model. Which features, which hyperparameters, which commit. Reproducibility disappears and audits become nightmares.
A model ships, works well for 6 weeks, then quietly degrades. Without inference logging and drift detection, you hear about it from stakeholders — or not at all.
From notebook-bound data scientist to platform engineer shipping governed ML systems.
Track, govern, and version everything
Set up your Databricks workspace, codify MLOps principles, wire up MLflow for experiment tracking, register models in Unity Catalog, build reusable feature pipelines, and tune models at scale.
Serve, deploy, and monitor at scale
Register production models, stand up serving endpoints with feature lookups, define your stack as Asset Bundles, wire CI/CD, and watch everything with Inference Tables and Lakehouse Monitoring.
Each week pairs a clear operational concept with a build that makes it real. Click any week to see what you'll own.
Understand what MLOps actually requires (reproducibility, lineage, deployability), provision a Databricks workspace, configure clusters, and ship a fully logged training run tied to a git commit.
Track every run with MLflow, wrap non-standard models with PyFunc, log artifacts and signatures cleanly, and register the winner in Unity Catalog with full lineage.
Build a Databricks Feature Store, wire automatic feature lookups into training and serving, and eliminate the training/serving skew that kills real ML systems.
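The idea behind skew elimination can be sketched in plain Python: define each feature computation once and call the same function from both the training pipeline and the serving path. This is our own illustration of the pattern (names like `compute_features` and the transaction fields are invented); the real Feature Store performs the lookup automatically at serving time.

```python
# Illustrative sketch: one feature definition shared by training and serving.
# All names here are hypothetical, not Databricks APIs.

def compute_features(txn: dict) -> dict:
    """Single source of truth for feature logic."""
    return {
        "amount_bucket": min(int(txn["amount"]) // 100, 9),
        "is_weekend": 1 if txn["day_of_week"] in (5, 6) else 0,
    }

# Training path: build the feature matrix from historical rows.
history = [
    {"amount": 250, "day_of_week": 5, "label": 1},
    {"amount": 40, "day_of_week": 2, "label": 0},
]
train_rows = [{**compute_features(t), "label": t["label"]} for t in history]

# Serving path: the SAME function runs on the live request,
# so training-time and serving-time features cannot drift apart.
request = {"amount": 250, "day_of_week": 5}
online_features = compute_features(request)

print(online_features)  # {'amount_bucket': 2, 'is_weekend': 1}
```

The point is structural: skew appears when feature logic is duplicated; the Feature Store makes the single-definition version the default.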
Move beyond sklearn-on-a-laptop. Run distributed hyperparameter tuning on Databricks, track every trial in MLflow, and pick winners with defensible experiment math.
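The shape of a tuning loop can be shown without any cluster at all. This pure-Python sketch stands in for a Hyperopt/Optuna run: random search over a space, one tracked record per trial (where MLflow would log a run), and early stopping after a stretch of no improvement. The objective function and search space are invented for illustration.

```python
import random

def objective(lr: float, depth: int) -> float:
    # Hypothetical validation loss: best around lr=0.1, depth=6.
    return (lr - 0.1) ** 2 + 0.01 * abs(depth - 6)

def random_search(n_trials: int = 50, patience: int = 10, seed: int = 0):
    rng = random.Random(seed)
    trials, best, stale = [], None, 0
    for i in range(n_trials):
        params = {"lr": rng.uniform(0.001, 0.5), "depth": rng.randint(2, 12)}
        loss = objective(**params)
        # In the course, each trial becomes a nested MLflow run; here we
        # just keep the record so the winner is auditable.
        trials.append({"trial": i, **params, "loss": loss})
        if best is None or loss < best["loss"]:
            best, stale = trials[-1], 0
        else:
            stale += 1
            if stale >= patience:  # no improvement for `patience` trials: stop
                break
    return best, trials

best, trials = random_search()
print(best["trial"], round(best["loss"], 4))
```

On Databricks the same loop fans out across workers, but the bookkeeping — every trial recorded, winner chosen by metric, stopping rule stated up front — is what makes the experiment math defensible.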
Survey real-time vs. batch vs. streaming serving. Stand up a Databricks Model Serving endpoint with automatic feature lookups, configure scaling, and wire A/B traffic splits.
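The mechanics behind an A/B traffic split are worth seeing once. On Databricks Model Serving you declare split percentages on the endpoint config rather than writing routing code, but the underlying idea is a deterministic hash-based router, sketched here with invented variant names:

```python
import hashlib

# Sketch of hash-based A/B routing (our illustration only; Model Serving
# configures splits declaratively on the endpoint).
SPLITS = [("model_a", 90), ("model_b", 10)]  # percentages sum to 100

def route(request_id: str) -> str:
    """Deterministically map a request to a model variant."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for name, pct in SPLITS:
        cumulative += pct
        if bucket < cumulative:
            return name
    return SPLITS[-1][0]

# Same request always hits the same variant; aggregate traffic approximates 90/10.
counts = {"model_a": 0, "model_b": 0}
for i in range(10_000):
    counts[route(f"req-{i}")] += 1
print(counts)
```

Determinism matters: a given request (or user) sticks to one variant, so comparison metrics aren't contaminated by users bouncing between models.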
Define your whole pipeline as code with Databricks Asset Bundles. Promote through dev, staging, and prod with GitHub Actions, wire branching strategy, and make deploys boring.
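"Pipeline as code" concretely means a `databricks.yml` at the repo root. The sketch below shows the rough shape only — the bundle name, targets, and job details are placeholders, not the course repo's actual config:

```yaml
# Minimal databricks.yml sketch (shape only; names are placeholders).
bundle:
  name: mlops-bootcamp

targets:
  dev:
    mode: development
    default: true
  prod:
    mode: production

resources:
  jobs:
    train_model:
      name: train-model
      tasks:
        - task_key: train
          notebook_task:
            notebook_path: ./notebooks/train.py
```

With this in place, `databricks bundle deploy -t dev` (and later `-t prod` from CI) is the whole promotion story — which is what makes deploys boring.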
Turn every prediction into a durable, queryable record. Wire Inference Tables, build a lakehouse-native evaluation pipeline, and configure drift detection that actually pages someone.
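Once predictions land in an Inference Table, drift detection reduces to comparing a baseline feature distribution against the recent one. Population Stability Index (PSI) is one common score; this is a self-contained sketch, and the 0.2 alert threshold is a widely used convention, not a Databricks default.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(xs, a, b):
        n = sum(1 for x in xs if a <= x < b) or 1  # floor empty bins to avoid log(0)
        return n / len(xs)

    score = 0.0
    for a, b in zip(edges, edges[1:]):
        e, o = frac(expected, a, b), frac(actual, a, b)
        score += (o - e) * math.log(o / e)
    return score

baseline = [i / 100 for i in range(100)]        # uniform on [0, 1)
stable   = [i / 100 for i in range(100)]        # same distribution
shifted  = [0.5 + i / 200 for i in range(100)]  # mass moved to [0.5, 1)

print(psi(baseline, stable))   # 0.0: no drift
print(psi(baseline, shifted))  # well above 0.2: page someone
```

In the build, `expected` comes from the training set and `actual` from a windowed query over the Inference Table, with the alert wired through Lakehouse Monitoring.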
Combine everything — features, model, registry, serving, CI/CD, monitoring — into a single production system. Present it on Demo Day with real traffic, real drift charts, real numbers.
A Databricks-native stack for the full ML lifecycle, plus the open tooling that surrounds it.
Databricks Workspace, Unity Catalog, Delta Lake, Clusters & compute
MLflow Tracking, MLflow Models, PyFunc, Nested runs
Unity Catalog Model Registry, Lineage, Aliases, Governance
Databricks Feature Store, Feature Lookup, Feature Serving, Online stores
Hyperopt, Optuna, Distributed training, Early stopping
Databricks Model Serving, Feature lookup serving, A/B splits, Autoscaling
Databricks Asset Bundles, GitHub Actions, Branching strategy, Secrets
Inference Tables, Lakehouse Monitoring, Drift detection, Alerting
VS Code + Databricks Connect, Databricks Git folders, Notebook-to-code patterns
Live instruction, production builds, reference architecture, and an operator community. One price. Everything in.
Feedback from engineers and data scientists who ran MLOps playbooks inside real teams.
"The Feature Store week was the one. We'd been fighting training/serving skew for a year. Two sessions in and we had a working Feature Lookup pattern we now use across every model."
"Unity Catalog model registry changed how our team thinks about governance. Every production model now has proper lineage, aliases, and ownership. Audits are actually short now."
"Asset Bundles finally made deploys boring. No more click-ops. Every job, every endpoint, every model alias lives as code. Our staging env actually mirrors prod."
"I came in as a data scientist who'd never served a model. Week 5 walked me through Model Serving with feature lookup in one sitting. I deployed my first production model the following Monday."
"The Inference Tables + Lakehouse Monitoring module is what separates this from every other ML course. Watching drift in real time during demo day — that was the moment I knew this was legit."
"The best MLOps content I've seen. Structured, hands-on, and rooted in actual Databricks work — not theory. I'll keep using the templates for years."
Practising ML Platform Engineers
The MLOps Bootcamp is led by SoftBricks Academy professionals — engineers who operate ML platforms on Databricks for real clients every week. The curriculum is distilled from production engagements: feature stores we've built, model registries we've governed, drift incidents we've debugged, and CI/CD pipelines we maintain. Every module comes with the patterns, templates, and guardrails we use in our own work.
MLOps is an operator discipline. This cohort is for people ready to own the whole lifecycle.
No upsells. No locked modules. Everything you need to ship and operate ML systems on Databricks.
✓ Secure checkout · EMI available · Invoice on request
Enroll now and warm up your Databricks workspace before Cohort 1 kicks off — provision clusters, clone the repo, and walk into Week 1 already oriented.
Enroll Now — $787
What data scientists and ML engineers usually ask before joining.
Plan for 10–15 hours per week. That includes 6 hours of live sessions (two 3-hour sessions, one each on Saturday and Sunday) plus 4–9 hours on the weekly build and self-study. The builds are where operator instincts get wired in — don't skip them.
No. You need solid Python, basic git and CI/CD familiarity, and comfort with the classical ML workflow (train/eval/save). We teach the Ops discipline from Day 1 — no one walks in pre-qualified.
A free Databricks trial is enough to follow every build. We walk through workspace setup, compute, and permissions on day one. If your employer already has Databricks, you'll apply the patterns directly to your job.
All sessions are recorded and available within 24 hours. You also get unlimited re-attendance for future cohorts at no extra cost. Life happens — the program is designed for working engineers.
MLOps covers the classical ML lifecycle: features, training, hyperparameter tuning, model registry, serving, monitoring. LLMOps covers LLM-specific patterns: tracing, evaluation, prompt registry, agent deployment. Tooling overlaps (MLflow, Asset Bundles, Unity Catalog) but the discipline around each is distinct. Many graduates eventually take both.
Yes. EMI is available at checkout. Employer sponsorship and invoicing are also available — email academy@softbricks.ai and we'll send the paperwork.
Live sessions are every Saturday and Sunday, 8:00 PM – 11:00 PM WAT. All sessions are recorded for anyone who can't attend live. The community and async channels are active 24/7.
No bootcamp can promise a job. What you get here is what most data-science hires are missing: a deployed, governed, monitored ML system you can walk through live, plus operator-level fluency that shows up the moment you open your laptop in an interview. That combination is what gets senior MLOps and ML platform engineers hired.