Created by a Student, for Students

Learn to Build Brain Tumor Segmentation Models

A 12-week curriculum covering radiology, deep learning, nnU-Net, MONAI, and clinical deployment — everything you need to go from zero to competing in the BraTS Challenge at MICCAI.

Start the 12-Week Path
About This Project → MICCAI Conference → BraTS 2025 Challenge →
// 12-Week Curriculum

Your 12-Week Path to Medical AI

A structured, week-by-week curriculum designed for students. Each week builds on the last — starting from clinical context, through preprocessing and model building, all the way to deployment and advanced radiomics.

Weeks 1–3 — Clinical & Preprocessing Foundations
WEEK 01 Q1
Clinical uses of AI-based segmentation software in Neuro-Oncology practice. Understand why this work matters — how automated segmentation assists neurosurgeons and oncologists with diagnosis, surgical planning, and monitoring treatment response. Study real clinical workflows to see where AI tools fit in.
Neuro-oncology overview Clinical decision-making AI in radiology workflows Tumor grading (WHO) Treatment planning
View lesson →
WEEK 02 Q1
Performing basic MRI preprocessing — skull-stripping and intensity normalization. Learn to prepare raw brain MRI data for analysis by removing non-brain tissue and standardizing signal intensity across patients and scanners. These steps are essential before any segmentation algorithm can work reliably.
NIfTI file format Skull-stripping (BET, HD-BET) Intensity normalization Co-registration nibabel & SimpleITK 3D Slicer / ITK-SNAP
View lesson →
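To make the normalization step concrete, here is a minimal sketch of z-score intensity normalization computed only over brain voxels — the function name and the all-ones test mask are illustrative, not part of any specific course pipeline:

```python
import numpy as np

def zscore_normalize(volume, brain_mask):
    """Z-score normalize MRI intensities using only voxels inside the brain mask."""
    brain_voxels = volume[brain_mask > 0]
    mu = brain_voxels.mean()
    sigma = brain_voxels.std()
    normalized = (volume - mu) / sigma
    normalized[brain_mask == 0] = 0.0  # zero out non-brain background
    return normalized

# In practice the volume comes from a NIfTI file, e.g.:
#   import nibabel as nib
#   volume = nib.load("t1.nii.gz").get_fdata()
```

Computing the statistics inside the mask matters: background air voxels would otherwise dominate the mean and compress the brain's intensity range.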
WEEK 03 Q1
Introduction to open-source tools for creating automated image segmentation algorithms — MONAI Core, MONAI Label & MONAI Auto3DSeg. Explore the MONAI ecosystem: a PyTorch-based framework purpose-built for medical imaging. MONAI Core provides transforms and dataloaders; MONAI Label enables interactive annotation; Auto3DSeg automates model selection and training.
MONAI Core MONAI Label MONAI Auto3DSeg PyTorch basics Medical image transforms Interactive annotation
View lesson →
Weeks 4–6 — Deep Learning & nnU-Net Mastery
WEEK 04 Q2
Fundamentals of convolutional neural networks. Introduction to nnU-Net. Build your deep learning foundations — understand convolution operations, pooling, activation functions, and the encoder-decoder architecture. Then meet nnU-Net: the self-configuring framework that dominates medical image segmentation benchmarks.
CNNs & convolutions U-Net architecture Encoder-decoder Skip connections nnU-Net overview Loss functions
View lesson →
WEEK 05 Q2
In-depth understanding of nnU-Net. Go beyond the basics — study how nnU-Net’s fingerprinting system analyzes datasets, how it selects between 2D, 3D full-resolution, and cascade configurations, and how its automated preprocessing, augmentation, and post-processing pipelines work under the hood.
Dataset fingerprinting 2D / 3D / cascade configs Automated preprocessing Data augmentation pipeline 5-fold cross-validation Post-processing
View lesson →
WEEK 06 Q2
Dataset preparation, folder structuring, patch sizes, interpreting logs. The hands-on week — learn to convert raw BraTS data into nnU-Net’s expected format, understand how patch sizes and batch sizes are determined, read training logs to diagnose problems, and run your first real training jobs.
nnU-Net folder structure dataset.json creation File renaming conventions Patch size selection Training log analysis GPU memory management
View lesson →
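A sketch of the dataset.json step, assuming the nnU-Net v2 field names (channel_names, labels, numTraining, file_ending) — the channel order, label scheme, and case count below are illustrative, so check the nnU-Net documentation and your BraTS release for the exact values:

```python
import json

# Illustrative nnU-Net v2-style dataset.json for a BraTS-like dataset.
dataset = {
    "channel_names": {"0": "T1", "1": "T1ce", "2": "T2", "3": "FLAIR"},
    "labels": {"background": 0, "edema": 1, "non_enhancing": 2, "enhancing": 3},
    "numTraining": 1251,
    "file_ending": ".nii.gz",
}

def write_dataset_json(path):
    """Write dataset.json into the dataset folder, e.g. nnUNet_raw/DatasetXXX_BraTS."""
    with open(path, "w") as f:
        json.dump(dataset, f, indent=2)
```

nnU-Net reads this file to discover your modalities and labels, so a typo here (a wrong file_ending, a missing channel) is a common cause of cryptic errors in the training logs.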
Weeks 7–9 — Evaluation, Optimization & Integration
WEEK 07 Q3
Understanding evaluation metrics for image segmentation algorithms. Learn to measure your model’s performance with the metrics that matter: Dice Similarity Coefficient, Hausdorff Distance (HD95), Normalized Surface Distance, sensitivity, specificity, and precision. Understand what each metric rewards and penalizes.
Dice Score (DSC) Hausdorff Distance (HD95) Normalized Surface Distance Sensitivity & specificity Per-region evaluation Statistical significance
View lesson →
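The overlap metrics are short enough to implement yourself before reaching for a library — a minimal sketch for binary masks (function names are illustrative; the empty-mask convention of returning 1.0 varies between challenge evaluation scripts):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as a perfect score
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def sensitivity(pred, gt):
    """Fraction of ground-truth tumor voxels the model found (recall)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / max(gt.sum(), 1)
```

Note what Dice rewards: it is insensitive to where the boundary errors occur, which is exactly why boundary-aware metrics like HD95 are reported alongside it.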
WEEK 08 Q3
Fine-tuning and improving an image segmentation model. Move beyond baselines — explore model ensembling, residual encoders, learning rate scheduling, test-time augmentation, custom post-processing rules, and strategies that top BraTS teams use to squeeze out every fraction of a Dice point.
Model ensembling Residual encoders Test-time augmentation Learning rate tuning Custom post-processing Error analysis
View lesson →
WEEK 09 Q3
Integrating AI solutions into the radiology workflow: challenges and future perspectives. Step back from code to understand the bigger picture — regulatory hurdles (FDA clearance), clinician trust, explainability, bias in datasets, generalizability across institutions, and the gap between research performance and clinical utility.
FDA/CE regulatory pathways Clinical validation Explainability & trust Dataset bias Domain shift Ethics in medical AI
View lesson →
Weeks 10–12 — Deployment & Advanced Applications
WEEK 10 Q4
Deploying AI software into clinical practice using MONAI Deploy, Streamlit, BraTS Orchestrator, and other tools. Turn your trained model into something doctors can actually use — build inference pipelines, create web-based visualization apps, package models in Docker containers, and understand the MONAI Deploy App SDK.
MONAI Deploy Streamlit apps BraTS Orchestrator Docker containers Inference pipelines DICOM integration
View lesson →
WEEK 11 Q4
Organizing and segmenting a brain MRI longitudinal dataset of pre- and post-treatment brain tumors for treatment response assessment. RANO criteria. Work with real-world complexity — longitudinal data where the same patient has multiple scans over time. Learn the RANO (Response Assessment in Neuro-Oncology) criteria used by clinicians to judge whether tumors are responding, stable, or progressing.
Longitudinal datasets RANO criteria Treatment response Pre- vs post-treatment Volumetric tracking Clinical endpoints
View lesson →
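Volumetric tracking reduces to simple arithmetic once you have segmentations for each timepoint — a sketch with illustrative function names (RANO itself is defined on 2D diameter products, so this is a complement to, not a replacement for, the clinical criteria):

```python
import numpy as np

def tumor_volume_ml(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Tumor volume in millilitres from a binary mask and voxel spacing in mm."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1000 mm^3 = 1 mL

def percent_change(baseline_ml, followup_ml):
    """Signed volume change relative to baseline, in percent."""
    return 100.0 * (followup_ml - baseline_ml) / baseline_ml
```

Voxel spacing comes from the NIfTI header, so two scans of the same patient can have different spacings — always convert to physical units before comparing timepoints.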
WEEK 12 Q4
Radiomics applications in neuro-oncological radiology. The frontier — extract quantitative features from medical images (shape, texture, intensity histograms) that go beyond what the human eye can see. Learn how radiomics pipelines extract hundreds of features from segmented tumors and how these features correlate with genetics, treatment outcomes, and prognosis.
Radiomics fundamentals PyRadiomics Feature extraction Texture analysis Radiogenomics Prognostic modeling
View lesson →
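To show what "quantitative features" means in practice, here is a hand-rolled sketch of a few first-order features computed inside the tumor mask — illustrative only, since PyRadiomics computes 100+ standardized features (shape, texture, wavelet) and is what you would use in a real study:

```python
import numpy as np

def first_order_features(volume, mask, bins=32):
    """A few first-order radiomics features over the masked region."""
    roi = volume[mask > 0].astype(float)
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logs
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "energy": float((roi ** 2).sum()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

Each feature becomes one column in a patient-by-feature table, which is then fed to classical statistics or machine learning models for the genetics and prognosis correlations described above.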
// The Challenge

The BraTS Challenge at MICCAI

Running since 2012, BraTS is the premier benchmark for brain tumor segmentation.

🏆
What is BraTS?

BraTS is an annual challenge at MICCAI, the top conference in medical image analysis. Organized by CBICA (UPenn) with RSNA, ASNR, ESNR, NIH, and FDA. Since 2023, it expanded into a “Cluster of Challenges” with 12+ tasks.

📋
BraTS 2025 Lighthouse — Tasks

Task 1 — Adult Glioma: Pre- and post-treatment glioma segmentation.

Tasks 2–3 — Meningioma: Pre-treatment and pre-RT meningioma segmentation.

Task 4 — Brain Metastases: 1,475 cases, 4-label system, pre- and post-treatment.

Task 5 — BraTS-Africa: Addressing bias from underrepresentation of sub-Saharan populations.

Task 6 — Pediatric. Task 7 — GOAT (generalizability across tumor types).

Tasks 8–10: Synthesis, inpainting, and histopathology.

Evaluation Metrics

Dice Score (DSC)
2|A∩B| / (|A|+|B|)
Overlap between prediction and ground truth. Primary ranking metric.
Hausdorff Distance (HD95)
95th percentile
Maximum boundary error. Lower is better.
Normalized Surface Dist.
NSD@τ
Surface accuracy at clinical tolerance.
💡
How to access BraTS data: Hosted on Synapse. Create a free account, search “BraTS 2025,” register, accept the data use agreement. Training data with annotations is free.

BraTS Resources

Site
The official website for the MICCAI conference — the premier international venue for medical image analysis research. BraTS is held here annually.
Conference
Official
Site
The official BraTS 2025 challenge page on Synapse. Register here, download training data, and submit your predictions for evaluation on the leaderboard.
Challenge
Free
Site
Central hub for all past and current editions, datasets, leaderboards, and proceedings.
Official
Free
Site
Data hosting and prediction submission platform. Also hosts many other medical imaging challenges.
Platform
Free
Tutorial
Complete walkthrough with BraTS 2020: NIfTI handling, modalities, U-Net training, visualization. GitHub repo included.
Full tutorial
Free + Code
Paper
Official analysis: annotation pipeline, 1,475 cases, 4-label system, evaluation methodology.
2025
Open Access
Paper
How 56 medical students improved neuroimaging skills through BraTS annotation. Proof this path works.
2025
Open Access
// Beyond BraTS

Other Segmentation Competitions

Apply your skills elsewhere and build your portfolio.

| Competition | Target | Modality | Details | Link |
| --- | --- | --- | --- | --- |
| Medical Segmentation Decathlon | 10 organs/tumors | CT & MRI | Generalist benchmark. Rolling leaderboard still active. | medicaldecathlon.com |
| AMOS | 15 abdominal organs | CT & MRI | 500+ cases. Won by nnU-Net in 2022. | grand-challenge.org |
| KiTS | Kidneys & tumors | CT | 300+ cases. Simpler — good stepping stone. | kits-challenge.org |
| HECKTOR | Head & neck tumors | PET/CT & MRI | Multi-modal for radiation therapy planning. | grand-challenge.org |
| ISLES | Stroke lesions | MRI | Brain MRI like BraTS but for stroke. | isles-challenge.org |
| Grand-Challenge.org | Dozens active | Various | Largest biomedical challenge platform. | grand-challenge.org |
| Kaggle Medical Imaging | Various | Various | Beginner-friendly. Often has prize money. | kaggle.com |
// Reference

Essential Tools & Additional Resources

Tools you’ll use throughout the curriculum and resources for going deeper.

Tool
Standard open-source medical image viewer. Load NIfTI, view all planes, overlay segmentations.
Desktop
Free
Tool
Lighter-weight 3D viewer with semi-automatic segmentation. Quick NIfTI checks.
Desktop
Free
Tool
Python library for NIfTI files. nib.load('scan.nii.gz').get_fdata() gives you a NumPy array.
pip install
Free
Tool
Medical image processing: resampling, registration, filtering. Used in nnU-Net’s preprocessing.
pip install
Free
Tool
PyTorch framework for healthcare imaging. Core, Label, Auto3DSeg, Deploy. Central to weeks 3, 10.
pip install
Free
Tool
Radiomics feature extraction: shape, texture, intensity. Central to week 12.
pip install
Free
Dataset
All BraTS datasets across years. The 2021 set (2,040 cases) is widely used for baselines.
Multi-year
Free
Dataset
10 diverse segmentation datasets. CC-BY-SA licensed.
10 datasets
CC-BY-SA
Video
“Neural Networks: Zero to Hero” — builds NNs from raw Python. Deep intuition.
Series
Free
Course
Legendary CV course. Free notes and assignments. Advanced — after your first model.
Full semester
Free
Course
AI applied to healthcare: diagnosis, prognosis, treatment. Class imbalance, clinical metrics.
3 courses
Free audit
🖥️
GPU Access for Students: Free options: Google Colab, Kaggle Notebooks (30 hrs/week GPU), cloud student credits ($100–300). Ask your school’s CS department about shared resources. Lambda Labs and Paperspace offer affordable GPU rentals.
// About

The Story Behind MedVision Academy

🥈
BraTS 2025 Challenge
2nd Place
MICCAI Lighthouse

Hi, I’m Wes Krikorian (Horace Mann Class of 2027). I started MedVision Academy after competing in the 2025 BraTS Challenge at MICCAI.

When I first dove into this, I was starting from scratch with no real background in medicine, radiology, or AI. I spent that summer pretty much living in academic papers, YouTube videos, and GitHub repos, trying to piece everything together through a lot of trial and error. Eventually, I managed to build a MONAI pipeline using three different architectures that ended up taking 2nd place in the competition.

The hardest part of the whole experience wasn’t just the coding — it was how scattered the information was. There wasn’t one place that walked you through the journey from “what is an MRI?” to “here is how you submit predictions.”

I built this site to be that resource. It’s a collection of everything I learned along the way, designed to demystify the BraTS competition and make medical AI research a bit more accessible to any student who’s motivated to dive in.

I hope this website will also be useful for other medical AI competitions and hackathons. Please reach out with comments and content suggestions!

Acknowledgements

This project would not have been possible without the guidance and support of Dr. Mariam Aboian, Crystal Chukwurah, Dr. Nikolay Yordanov, and Raisa Amiruddin — whose mentorship, educational resources, and work with the BraTS Challenge made both the competition experience and this website possible.

// Get in Touch

Contact

Have a question about the curriculum, the BraTS challenge, or getting started with medical AI research? Found a broken link or want to suggest a resource? Want to share your own experience competing?

Fill out the form and I’ll get back to you as soon as I can.