Created by a Student, for Students

Learn to Build Brain Tumor Segmentation Models

A 12-month curriculum covering radiology, deep learning, nnU-Net, MONAI, and clinical deployment — everything you need to go from zero to competing in the BraTS Challenge at MICCAI.

Start the 12-Month Path
// 12-Month Curriculum

Your Year-Long Path to Medical AI

A structured, month-by-month curriculum designed for students. Each month builds on the last — starting from clinical context, through preprocessing and model building, all the way to deployment and advanced radiomics.

Q1 — Clinical & Preprocessing Foundations
MONTH 01 March Q1
Clinical uses of AI-based segmentation software in Neuro-Oncology practice. Understand why this work matters — how automated segmentation assists neurosurgeons and oncologists with diagnosis, surgical planning, and monitoring treatment response. Study real clinical workflows to see where AI tools fit in.
Neuro-oncology overview Clinical decision-making AI in radiology workflows Tumor grading (WHO) Treatment planning
MONTH 02 April Q1
Performing basic MRI preprocessing — skull-stripping and intensity normalization. Learn to prepare raw brain MRI data for analysis by removing non-brain tissue and standardizing signal intensity across patients and scanners. These steps are essential before any segmentation algorithm can work reliably.
NIfTI file format Skull-stripping (BET, HD-BET) Intensity normalization Co-registration nibabel & SimpleITK 3D Slicer / ITK-SNAP
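To make the normalization step concrete, here is a minimal sketch of z-score intensity normalization restricted to brain voxels — the idea behind standardizing signal intensity across patients and scanners. The toy volume and mask are illustrative; in practice the mask would come from a skull-stripping tool like HD-BET.

```python
import numpy as np

def zscore_normalize(volume: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Z-score normalize intensities using only voxels inside the brain mask."""
    brain = volume[mask > 0]
    normalized = (volume - brain.mean()) / (brain.std() + 1e-8)
    normalized[mask == 0] = 0  # keep background at zero
    return normalized

# Toy 3D volume: "brain" voxels around 100, background at 0
vol = np.zeros((4, 4, 4))
msk = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = np.linspace(80, 120, 8).reshape(2, 2, 2)
msk[1:3, 1:3, 1:3] = 1

out = zscore_normalize(vol, msk)
print(round(float(out[msk > 0].mean()), 6))  # ≈ 0.0 inside the mask
```

After normalization the in-brain intensities have zero mean and unit variance, so a model trained on one scanner's output transfers more reliably to another's.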
MONTH 03 May Q1
Introduction to open-source tools for building automated image segmentation algorithms — MONAI Core, MONAI Label & MONAI Auto3DSeg. Explore the MONAI ecosystem: a PyTorch-based framework purpose-built for medical imaging. MONAI Core provides transforms and dataloaders; MONAI Label enables interactive annotation; Auto3DSeg automates model selection and training.
MONAI Core MONAI Label MONAI Auto3DSeg PyTorch basics Medical image transforms Interactive annotation
Q2 — Deep Learning & nnU-Net Mastery
MONTH 04 June Q2
Fundamentals of convolutional neural networks. Introduction to nnU-Net. Build your deep learning foundations — understand convolution operations, pooling, activation functions, and the encoder-decoder architecture. Then meet nnU-Net: the self-configuring framework that dominates medical image segmentation benchmarks.
CNNs & convolutions U-Net architecture Encoder-decoder Skip connections nnU-Net overview Loss functions
MONTH 05 July Q2
In-depth understanding of nnU-Net. Go beyond the basics — study how nnU-Net’s fingerprinting system analyzes datasets, how it selects between 2D, 3D full-resolution, and cascade configurations, and how its automated preprocessing, augmentation, and post-processing pipelines work under the hood.
Dataset fingerprinting 2D / 3D / cascade configs Automated preprocessing Data augmentation pipeline 5-fold cross-validation Post-processing
MONTH 06 August Q2
Dataset preparation, folder structuring, patch sizes, interpreting logs. The hands-on month — learn to convert raw BraTS data into nnU-Net’s expected format, understand how patch sizes and batch sizes are determined, read training logs to diagnose problems, and run your first real training jobs.
nnU-Net folder structure dataset.json creation File renaming conventions Patch size selection Training log analysis GPU memory management
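As a sketch of the dataset.json step: the field names below follow nnU-Net v2's documented format (channel_names, labels, numTraining, file_ending), but the label names and case count are illustrative — check the official documentation for your BraTS release before relying on them.

```python
import json

# Minimal dataset.json for an nnU-Net v2 BraTS-style dataset.
# Label values here assume a common BraTS convention and are illustrative.
dataset = {
    "channel_names": {          # one entry per _0000.._0003 filename suffix
        "0": "T1",
        "1": "T1ce",
        "2": "T2",
        "3": "FLAIR",
    },
    "labels": {                 # values used in the label maps
        "background": 0,
        "necrotic_core": 1,
        "edema": 2,
        "enhancing_tumor": 3,
    },
    "numTraining": 1251,        # number of cases in imagesTr/
    "file_ending": ".nii.gz",
}

with open("dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)
```

nnU-Net reads this file during planning to learn how many input channels and output classes the dataset has.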
Q3 — Evaluation, Optimization & Integration
MONTH 07 September Q3
Understanding evaluation metrics for image segmentation algorithms. Learn to measure your model’s performance with the metrics that matter: Dice Similarity Coefficient, Hausdorff Distance (HD95), Normalized Surface Distance, sensitivity, specificity, and precision. Understand what each metric rewards and penalizes.
Dice Score (DSC) Hausdorff Distance (HD95) Normalized Surface Distance Sensitivity & specificity Per-region evaluation Statistical significance
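The Dice Similarity Coefficient is simple enough to compute by hand — a minimal NumPy sketch on two toy binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

pred = np.zeros((8, 8), dtype=bool)
truth = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True    # 16 predicted voxels
truth[3:7, 3:7] = True   # 16 true voxels, 9 overlapping

print(dice_score(pred, truth))  # 2*9 / (16+16) = 0.5625
```

Note what Dice rewards: overlap relative to total size. Two masks of equal size with just over half their voxels overlapping already score above 0.5, which is why boundary-sensitive metrics like HD95 are reported alongside it.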
MONTH 08 October Q3
Fine-tuning and improving an image segmentation model. Move beyond baselines — explore model ensembling, residual encoders, learning rate scheduling, test-time augmentation, custom post-processing rules, and strategies that top BraTS teams use to squeeze out every fraction of a Dice point.
Model ensembling Residual encoders Test-time augmentation Learning rate tuning Custom post-processing Error analysis
MONTH 09 November Q3
Integrating AI solutions into the radiology workflow: challenges and future perspectives. Step back from code to understand the bigger picture — regulatory hurdles (FDA clearance), clinician trust, explainability, bias in datasets, generalizability across institutions, and the gap between research performance and clinical utility.
FDA/CE regulatory pathways Clinical validation Explainability & trust Dataset bias Domain shift Ethics in medical AI
Q4 — Deployment & Advanced Applications
MONTH 10 December Q4
Deploying AI software into clinical practice using MONAI Deploy, Streamlit, BraTS Orchestrator, and other tools. Turn your trained model into something doctors can actually use — build inference pipelines, create web-based visualization apps, package models in Docker containers, and understand the MONAI Deploy App SDK.
MONAI Deploy Streamlit apps BraTS Orchestrator Docker containers Inference pipelines DICOM integration
MONTH 11 January Q4
Organizing and segmenting a brain MRI longitudinal dataset of pre- and post-treatment brain tumors for treatment response assessment. RANO criteria. Work with real-world complexity — longitudinal data where the same patient has multiple scans over time. Learn the RANO (Response Assessment in Neuro-Oncology) criteria used by clinicians to judge whether tumors are responding, stable, or progressing.
Longitudinal datasets RANO criteria Treatment response Pre- vs post-treatment Volumetric tracking Clinical endpoints
MONTH 12 February Q4
Radiomics applications in neuro-oncological radiology. The frontier — extract quantitative features from medical images (shape, texture, intensity histograms) that go beyond what the human eye can see. Learn how radiomics pipelines extract hundreds of features from segmented tumors and how these features correlate with genetics, treatment outcomes, and prognosis.
Radiomics fundamentals PyRadiomics Feature extraction Texture analysis Radiogenomics Prognostic modeling
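To give a flavor of feature extraction: the sketch below computes a few first-order intensity features from a segmented region. This is a toy illustration only — PyRadiomics computes over a hundred standardized features (including shape and texture), and the random "tumor" here stands in for a real segmentation.

```python
import numpy as np

def first_order_features(volume: np.ndarray, mask: np.ndarray) -> dict:
    """A few first-order radiomics-style features from the segmented region."""
    roi = volume[mask > 0].astype(float)
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "min": float(roi.min()),
        "max": float(roi.max()),
        "energy": float((roi ** 2).sum()),
        "volume_voxels": int(roi.size),
    }

rng = np.random.default_rng(0)
vol = rng.normal(100, 10, size=(16, 16, 16))
msk = np.zeros((16, 16, 16), dtype=int)
msk[4:12, 4:12, 4:12] = 1   # a cubic "tumor" region

feats = first_order_features(vol, msk)
print(feats["volume_voxels"])  # 512
```

Each feature becomes one column in a table of patients, which is then fed to statistical or machine learning models for prognostic analysis.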
// Months 01–02 Deep Dive

Radiology & MRI for the Absolute Beginner

You’re not training to be a radiologist — you’re training a model to behave like one. But you need to understand what the data means.

🧠
What is MRI and Why Does It Matter?

Magnetic Resonance Imaging (MRI) uses powerful magnets and radio waves to create detailed images of soft tissues — particularly the brain. Unlike CT scans or X-rays, MRI doesn’t use ionizing radiation, making it safer for repeated imaging. It produces images with excellent contrast between different types of brain tissue, ideal for spotting tumors.

An MRI scan is not a single 2D image — it’s a 3D volume made up of many 2D “slices.” Think of it like a loaf of bread: each slice is a cross-section of the brain. Clinical scanners output DICOM, but research datasets like BraTS distribute volumes in the NIfTI format (.nii or .nii.gz), which stores the 3D array along with metadata about voxel spacing and orientation.
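One piece of that metadata worth understanding early: voxel spacing, which is encoded in the affine matrix every NIfTI file carries. The norm of each of the first three columns gives the physical size (in mm) of one voxel step along that axis. The affine values below are illustrative; nibabel exposes the same numbers via img.affine and img.header.get_zooms().

```python
import numpy as np

# An example NIfTI affine: rotation/scaling in the top-left 3x3 block,
# world-space origin in the last column.
affine = np.array([
    [1.0, 0.0, 0.0,  -90.0],
    [0.0, 1.0, 0.0, -126.0],
    [0.0, 0.0, 1.0,  -72.0],  # BraTS volumes are resampled to 1 mm isotropic
    [0.0, 0.0, 0.0,    1.0],
])

# Voxel spacing = length of each column of the 3x3 block
spacing = np.linalg.norm(affine[:3, :3], axis=0)
print(spacing)  # [1. 1. 1.]
```

Getting spacing wrong silently breaks distance-based metrics like HD95, so always check it before preprocessing.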

📡
The Four MRI Modalities Used in BraTS

We use four different “modalities” that each highlight different tissue properties — like looking at the same scene with different camera filters.

T1
Native T1-weighted. Shows brain anatomy clearly. CSF appears dark, white matter bright, gray matter intermediate.
T1ce
T1 with contrast agent (gadolinium). Makes actively growing tumor regions “light up” where the blood-brain barrier is disrupted.
T2
T2-weighted. Fluid appears bright, making edema (swelling) and CSF highly visible.
FLAIR
Fluid-Attenuated Inversion Recovery. Like T2 but CSF is suppressed (dark), making peritumoral edema stand out clearly.
🏷️
Understanding Tumor Sub-regions (Segmentation Labels)

Brain tumors have distinct sub-regions. BraTS asks you to segment three nested regions:

Enhancing Tumor (ET)
The actively growing, vascularized core that “enhances” on T1ce. The most aggressive part.
Tumor Core (TC)
Enhancing tumor + necrotic and non-enhancing core. The bulk of solid tumor mass.
Whole Tumor (WT)
Everything: ET + TC + peritumoral edema. The total extent of visible disease.
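The nesting (ET ⊆ TC ⊆ WT) is easy to see in code. The sketch below assumes the classic BraTS label convention — 1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor — but note that some releases use 3 for the enhancing label, so verify against your dataset.

```python
import numpy as np

def brats_regions(label_map: np.ndarray):
    """Derive the three nested BraTS evaluation regions from a label map.
    Assumes labels: 1 = necrotic core, 2 = edema, 4 = enhancing tumor."""
    et = label_map == 4                    # Enhancing Tumor
    tc = np.isin(label_map, (1, 4))        # Tumor Core = ET + necrotic core
    wt = np.isin(label_map, (1, 2, 4))     # Whole Tumor = TC + edema
    return et, tc, wt

seg = np.zeros((6, 6), dtype=int)
seg[1:5, 1:5] = 2   # edema
seg[2:4, 2:4] = 1   # necrotic core inside it
seg[2:4, 3] = 4     # enhancing rim inside the core

et, tc, wt = brats_regions(seg)
print(et.sum(), tc.sum(), wt.sum())  # 2 4 16
```

BraTS evaluates Dice on these three derived regions, not on the raw labels, which is why every tutorial includes a conversion step like this.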

Radiology Learning Resources

Course
Free lecture series covering conventional radiography, CT, ultrasound, and MRI with real clinical cases. Ideal starting point with zero background.
Video series
Free
Course
Deep dive into MRI physics: magnetic resonance, relaxation, RF pulses, imaging sequences (spin echo, gradient echo, EPI), and contrast mechanisms.
4 weeks
Free audit
Site
Award-winning interactive course with animations and experiments. Won awards from the European and North American radiology societies.
16 chapters
Free
Site
The Wikipedia of radiology. 25,000+ cases with expert annotations. Search for “glioblastoma” or “brain metastasis” to build visual intuition.
25,000+ cases
Free
Site
Free e-learning with scrollable MRI image stacks and illustrated pathology examples for medical students.
Interactive
Free
Site
Structured tutorials on X-ray, CT, and MRI interpretation with quizzes and gallery cases.
Tutorials
Free
// Months 04–06 Deep Dive

Machine Learning & Deep Learning Fundamentals

From “what is a neural network” to confidently working with CNNs and U-Net.

🔗
Key Concepts

Supervised learning: Learning from labeled examples (MRI + expert tumor boundaries).

CNNs: Convolution filters detect edges, textures, and patterns in images.

U-Net: Encoder-decoder with skip connections — the architecture behind nnU-Net.

Loss functions: Dice loss and cross-entropy — how the model measures segmentation mistakes.

Backpropagation: How the model learns through gradient descent.
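The concepts above meet in the loss function. A minimal sketch of the soft Dice loss — Dice computed on probabilities instead of hard masks, so it stays differentiable for gradient descent (shown in NumPy for clarity; real training uses the PyTorch equivalent):

```python
import numpy as np

def soft_dice_loss(probs: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """1 - soft Dice: low when predicted probabilities overlap the target mask."""
    intersection = (probs * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)

target = np.array([0.0, 0.0, 1.0, 1.0])   # ground-truth mask for 4 voxels
perfect = np.array([0.0, 0.0, 1.0, 1.0])  # confident, correct prediction
poor = np.array([0.9, 0.9, 0.1, 0.1])     # confident, wrong prediction

print(round(soft_dice_loss(perfect, target), 4))  # 0.0
print(round(soft_dice_loss(poor, target), 4))     # 0.9
```

Dice loss handles the extreme class imbalance of tumor segmentation (tumor voxels are a tiny fraction of the brain) better than plain cross-entropy, which is why the two are usually combined.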

📐
The Math You Actually Need

Linear algebra: Matrices and multiplication (tensor operations). 3Blue1Brown’s series is the gold standard.

Basic calculus: Derivatives and the chain rule for backpropagation.

Probability: Softmax, sigmoid, distributions. Your model outputs per-voxel class probabilities.
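Softmax is worth seeing once in plain NumPy — it converts a voxel's raw per-class scores into probabilities that sum to 1 (the class names in the comment are illustrative):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Turn raw scores into a probability distribution over classes.
    Subtracting the max first keeps the exponentials numerically stable."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# One voxel's scores for 4 classes: background, necrotic core, edema, ET
logits = np.array([2.0, 0.5, 1.0, 0.1])
probs = softmax(logits)
print(probs.argmax())        # 0 — background is the most likely class
print(round(float(probs.sum()), 6))  # 1.0
```

A segmentation network applies this at every voxel; taking the argmax over classes produces the final label map.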

ML & Deep Learning Resources

Video
The best visual introduction to neural networks. Explains neurons, gradient descent, and backpropagation with beautiful animations. Watch this first.
4 videos
~1 hour
Video
Visual, geometric intuition for vectors, matrices, and transformations. Makes linear algebra feel intuitive.
16 videos
~3.5 hours
Course
Jeremy Howard’s legendary free course. Top-down approach: build working models from day one, then learn theory. Uses PyTorch. The single most impactful ML course for self-learners.
~7 weeks
Free
Course
Five courses covering neural networks, optimization, CNNs, sequence models, and ML projects. More theoretical than fast.ai — great complement.
5 courses
Free audit
Tutorial
Learn the framework nnU-Net is built on. Start with “Learn the Basics,” then the CNN tutorial.
Self-paced
Free
Video
Detailed walkthrough of the original U-Net paper: encoder-decoder, skip connections, and why it works for medical segmentation.
30 min
Free
Video
Hands-on PyTorch U-Net implementation. Builds each component step-by-step. Great prep before using nnU-Net.
~30 min
Free
Tutorial
Official MONAI quickstart. Medical imaging-specific PyTorch framework. Covers transforms, datasets, and building your first pipeline. Aligns with months 3 and 10.
Self-paced
Free
// Months 05–08 Deep Dive

nnU-Net: The Framework That Wins Challenges

nnU-Net automatically adapts to any dataset. It dominated the Medical Segmentation Decathlon and is the baseline for most BraTS competitors.

⚙️
What Makes nnU-Net Special

The name “no-new-Net” is the whole point — it squeezes maximum performance from the classic U-Net by optimizing everything else: preprocessing, augmentation, training schedule, patch size, network depth, post-processing, and ensembling. It analyzes your dataset’s “fingerprint” and auto-configures the pipeline.

It generates up to three configurations: 2D U-Net, 3D full-resolution, and 3D cascade. It trains each with 5-fold cross-validation, then picks the best or ensembles them. Nine of the ten MICCAI 2020 segmentation challenge winners built on nnU-Net.

🔧
Setting Up nnU-Net v2

Requirements: NVIDIA GPU (8GB+ VRAM), CUDA, PyTorch, Python 3.9+.

# Create environment
conda create -n nnunet python=3.10
conda activate nnunet

# Install PyTorch + nnU-Net
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install nnunetv2

# Set environment variables
export nnUNet_raw="/path/to/nnUNet_raw"
export nnUNet_preprocessed="/path/to/nnUNet_preprocessed"
export nnUNet_results="/path/to/nnUNet_results"
📂
Preparing BraTS Data

nnU-Net v2 expects this folder structure:

Dataset001_BraTS/
├── imagesTr/
│ ├── BraTS_001_0000.nii.gz # T1
│ ├── BraTS_001_0001.nii.gz # T1ce
│ ├── BraTS_001_0002.nii.gz # T2
│ ├── BraTS_001_0003.nii.gz # FLAIR
├── labelsTr/
│ ├── BraTS_001.nii.gz
└── dataset.json
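Getting raw files into this naming scheme is usually a small script. The sketch below assumes a source layout of CASE_t1.nii.gz, CASE_t1ce.nii.gz, etc. — a common BraTS pattern, but it varies by release, so adjust the MODALITIES mapping to match your download.

```python
import os
import shutil
import tempfile

# Channel suffixes follow nnU-Net v2's _0000.._0003 convention;
# the source suffixes (t1, t1ce, ...) are an assumed BraTS layout.
MODALITIES = {"t1": "0000", "t1ce": "0001", "t2": "0002", "flair": "0003"}

def convert_case(src_dir: str, case: str, out_dir: str) -> list:
    """Copy one case's four modality files into nnU-Net naming."""
    os.makedirs(out_dir, exist_ok=True)
    written = []
    for suffix, channel in MODALITIES.items():
        src = os.path.join(src_dir, f"{case}_{suffix}.nii.gz")
        dst = os.path.join(out_dir, f"{case}_{channel}.nii.gz")
        shutil.copy(src, dst)  # copy, don't move — keep the raw data intact
        written.append(os.path.basename(dst))
    return written

# Demo with empty placeholder files
with tempfile.TemporaryDirectory() as tmp:
    for s in MODALITIES:
        open(os.path.join(tmp, f"BraTS_001_{s}.nii.gz"), "w").close()
    names = convert_case(tmp, "BraTS_001", os.path.join(tmp, "imagesTr"))

print(sorted(names))
```

Running nnUNetv2_plan_and_preprocess with --verify_dataset_integrity afterwards will catch any case that ended up with a missing or misnamed channel.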
🚀
The Training Pipeline

Three commands:

# 1. Analyze & preprocess
nnUNetv2_plan_and_preprocess -d 001 --verify_dataset_integrity

# 2. Train (3D full-res, fold 0)
nnUNetv2_train 001 3d_fullres 0

# 3. Predict
nnUNetv2_predict -i INPUT -o OUTPUT -d 001 -c 3d_fullres -f 0

Train all 5 folds for best results. Each fold: 12–48+ hours on a single GPU.

nnU-Net Resources

Repo
Official source code and docs. Start with the README, then documentation/how_to_use_nnunet.md.
Official
Free
Tutorial
Most comprehensive written tutorial. Covers theory and practice: fingerprinting, three variants, optimization strategies.
Long read
Free
Tutorial
Includes a Google Colab notebook for the full pipeline. Great for first experiments without GPU setup.
Colab
Free
Tutorial
Practical walkthrough: installation, formatting, training, inference. Helpful on v2 naming conventions.
Step-by-step
Free
Tutorial
Beginner guide with BraTS-Metastases data conversion code. Covers v1 and v2 differences.
Aug 2024
Free
Paper
The original paper. Explains design philosophy, fingerprinting, and evaluation across 23 datasets.
Nature Methods
Open Access
// The Challenge

The BraTS Challenge at MICCAI

Running since 2012, BraTS is the premier benchmark for brain tumor segmentation.

🏆
What is BraTS?

BraTS is an annual challenge at MICCAI, the top conference in medical image analysis. Organized by CBICA (UPenn) with RSNA, ASNR, ESNR, NIH, and FDA. Since 2023, it expanded into a “Cluster of Challenges” with 12+ tasks.

📋
BraTS 2025 Lighthouse — Tasks

Task 1 — Adult Glioma: Pre- and post-treatment glioma segmentation.

Tasks 2–3 — Meningioma: Pre-treatment and pre-RT meningioma segmentation.

Task 4 — Brain Metastases: 1,475 cases, 4-label system, pre- and post-treatment.

Task 5 — BraTS-Africa: Addressing bias from underrepresentation of sub-Saharan populations.

Task 6 — Pediatric. Task 7 — GOAT (generalizability across tumor types).

Tasks 8–10: Synthesis, inpainting, and histopathology.

Evaluation Metrics

Dice Score (DSC)
2|A∩B| / (|A|+|B|)
Overlap between prediction and ground truth. Primary ranking metric.
Hausdorff Distance (HD95)
95th percentile
Maximum boundary error. Lower is better.
Normalized Surface Dist.
NSD@τ
Surface accuracy at clinical tolerance.
💡
How to access BraTS data: Hosted on Synapse. Create a free account, search “BraTS 2025,” register, accept the data use agreement. Training data with annotations is free.

BraTS Resources

Site
Central hub for all past and current editions, datasets, leaderboards, and proceedings.
Official
Free
Site
Data hosting and prediction submission platform. Also hosts many other medical imaging challenges.
Platform
Free
Tutorial
Complete walkthrough with BraTS 2020: NIfTI handling, modalities, U-Net training, visualization. GitHub repo included.
Full tutorial
Free + Code
Paper
Official analysis: annotation pipeline, 1,475 cases, 4-label system, evaluation methodology.
2025
Open Access
Paper
How 56 medical students improved neuroimaging skills through BraTS annotation. Proof this path works.
2025
Open Access
// Beyond BraTS

Other Segmentation Competitions

Apply your skills elsewhere and build your portfolio.

Competition | Target | Modality | Details | Link
Medical Segmentation Decathlon | 10 organs/tumors | CT & MRI | Generalist benchmark. Rolling leaderboard still active. | medicaldecathlon.com
AMOS | 15 abdominal organs | CT & MRI | 500+ cases. Won by nnU-Net in 2022. | grand-challenge.org
KiTS | Kidneys & tumors | CT | 300+ cases. Simpler — good stepping stone. | kits-challenge.org
HECKTOR | Head & neck tumors | PET/CT & MRI | Multi-modal for radiation therapy planning. | grand-challenge.org
ISLES | Stroke lesions | MRI | Brain MRI like BraTS but for stroke. | isles-challenge.org
Grand-Challenge.org | Dozens active | Various | Largest biomedical challenge platform. | grand-challenge.org
Kaggle Medical Imaging | Various | Various | Beginner-friendly. Often has prize money. | kaggle.com
// Reference

Essential Tools & Additional Resources

Tools you’ll use throughout the curriculum and resources for going deeper.

Tool
Standard open-source medical image viewer. Load NIfTI, view all planes, overlay segmentations.
Desktop
Free
Tool
Lighter-weight 3D viewer with semi-automatic segmentation. Quick NIfTI checks.
Desktop
Free
Tool
Python library for NIfTI files. nib.load('scan.nii.gz').get_fdata() gives you a NumPy array.
pip install
Free
Tool
Medical image processing: resampling, registration, filtering. Used in nnU-Net’s preprocessing.
pip install
Free
Tool
PyTorch framework for healthcare imaging. Core, Label, Auto3DSeg, Deploy. Central to months 3, 10.
pip install
Free
Tool
Radiomics feature extraction: shape, texture, intensity. Central to month 12.
pip install
Free
Dataset
All BraTS datasets across years. The 2021 set (2,040 cases) is widely used for baselines.
Multi-year
Free
Dataset
10 diverse segmentation datasets. CC-BY-SA licensed.
10 datasets
CC-BY-SA
Video
“Neural Networks: Zero to Hero” — builds NNs from raw Python. Deep intuition.
Series
Free
Course
Legendary CV course. Free notes and assignments. Advanced — after your first model.
Full semester
Free
Course
AI applied to healthcare: diagnosis, prognosis, treatment. Class imbalance, clinical metrics.
3 courses
Free audit
🖥️
GPU Access for Students: Free options: Google Colab, Kaggle Notebooks (30 hrs/week GPU), cloud student credits ($100–300). Ask your school’s CS department about shared resources. Lambda Labs and Paperspace offer affordable GPU rentals.