Frequently Asked Questions

How should I submit and what should I include in my submission?
This is explained here: Submit

How should 3-class segmentation methods interpret the labels?
Algorithms that segment only gray matter, white matter and cerebrospinal fluid should merge labels 1 and 2 (GM), 3 and 4 (WM), and 5 and 6 (CSF). The cerebellum and brain stem (labels 7 and 8) will in that case be excluded from the evaluation. The output should be labeled 0 (background), 1 (GM), 2 (WM) or 3 (CSF).
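As a sketch, this merging can be done with a small look-up table in NumPy. Mapping the excluded labels 7 and 8 to background (0) is an assumption here, made only so that the output stays within the 0-3 label range:

```python
import numpy as np

# Hypothetical 8-class label volume (flattened here for illustration).
seg8 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])

# Look-up table: 1,2 -> 1 (GM); 3,4 -> 2 (WM); 5,6 -> 3 (CSF).
# Labels 7,8 (cerebellum, brain stem) are excluded from evaluation;
# mapping them to 0 (background) is an assumption for this sketch.
lut = np.array([0, 1, 1, 2, 2, 3, 3, 0, 0])

seg3 = lut[seg8]
# seg3 is now [0, 1, 1, 2, 2, 3, 3, 0, 0]
```

The same indexing works unchanged on a full 3D label array, since NumPy applies the look-up table element-wise.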

How will 3-class segmentation methods be ranked?
There will be two ranking tables; the 3-class segmentations (i.e. the output from 3-class methods and the merged output from 8-class methods) will be ranked separately from the 8-class segmentations.

Will there be a leaderboard before 16 September?
No, there will be no preliminary leaderboard. All results will be kept secret until 16 September.

Can you explain the input paths [TEST-ORIG]:/input/orig and [TEST-ORIG]:/input/pre?
The Docker images will be run on every test case, which means that the /input and /output folders will point to different physical locations for each run. For example:

docker run --network none -dit -v /path/to/test/1/orig:/input/orig:ro -v /path/to/test/1/pre:/input/pre:ro -v /output mrbrains18/[TEAM-NAME]
<copy result.nii.gz for test case 1 and stop container>
docker run --network none -dit -v /path/to/test/2/orig:/input/orig:ro -v /path/to/test/2/pre:/input/pre:ro -v /output mrbrains18/[TEAM-NAME]
<copy result.nii.gz for test case 2 and stop container>
<and so on...>

In other words: the script inside your Docker container should process the single case in /input/pre (or /input/orig), write result.nii.gz to /output, and then quit. It does not need to iterate over a series of test cases.
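A minimal sketch of such a container: the entrypoint runs one script that segments the mounted case and exits. The script name and base image below are illustrative assumptions, not part of the challenge requirements:

```dockerfile
FROM python:3.6-slim

# segment.py (illustrative name) reads the scans from /input/pre,
# writes /output/result.nii.gz, and then exits; it does not loop
# over multiple test cases.
COPY segment.py /app/segment.py

ENTRYPOINT ["python", "/app/segment.py"]
```

The organizers then run this image once per test case, remounting /input and /output each time, as shown in the docker run examples above.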

How can I create a GPU-enabled Docker image?
Docker images that are built upon (i.e. that inherit from) the nvidia/cuda image can use the CUDA toolkit for GPU computing. All tensorflow/tensorflow:<version>-gpu images, for example, inherit from nvidia/cuda. If you let us know that your Docker image needs a GPU, we will make sure it is run with the nvidia runtime, which exposes an NVIDIA Titan Xp GPU.
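As a sketch, a GPU-enabled image only needs to start from a CUDA-capable base image; the specific version tags below are assumptions and should match your framework's requirements:

```dockerfile
# Either inherit from nvidia/cuda directly...
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

# ...or use a framework image that already inherits from it, e.g.:
# FROM tensorflow/tensorflow:1.10.1-gpu
```

Everything else in the Dockerfile stays the same; the GPU becomes visible inside the container only when the organizers run the image with the nvidia runtime.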

My Docker image is rather large. How should I make it smaller?
To reduce the Docker image size, combine steps into fewer RUN instructions in your Dockerfile, since each RUN creates a new image layer. That is:

RUN step1 && step2 && step3

instead of:

RUN step1
RUN step2
RUN step3

Furthermore, when installing a new package through apt-get, add the --no-install-recommends flag. This flag skips packages that are recommended, but not strictly required.
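Putting both tips together, an install step could look like the following sketch (the package names are illustrative). Removing the apt cache in the same RUN keeps it out of the image layer:

```dockerfile
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
```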

Will you train my method again from scratch?
No. We expect a method that is already trained; i.e. a method that reads images and creates segmentations.