‘Grand-Challenge.org’ – An open-source platform for end-to-end development of machine learning solutions in biomedical imaging.

Grand Challenge is open source. It was largely written by research software engineers from the Diagnostic Image Analysis Group at Radboud University Medical Center, which is also the responsible entity for the website https://grand-challenge.org.

Grand Challenge was developed in 2010 to make it easy for challenge organizers to set up a website for a particular challenge and to bring together all information on challenges in the domain of biomedical image analysis in one place. In 2012 we switched to the Django web framework; hence 2012 is the initial year in the copyright statement at the bottom of each page.

Grand Challenge provides key functionality for running challenges, such as user management and role-based access control, a discussion forum for communicating with and between participants, the possibility for participants to form Teams, support for adding multiple phases to a challenge, each with its own leaderboard, and much more. Over 70,000 user accounts have been created on Grand Challenge from countries across the globe.

Maintaining a challenge, so that new submissions are processed quickly, is a lot of work. Typically, a junior researcher at some institution is responsible for maintaining a challenge website, but at some point the researcher moves on and the site is no longer kept up to date. We have therefore set up a system where challenge organizers can upload the code that computes the score for a submission in the form of a Docker container. This system has been operational since 2017 and has been used by almost 200 challenges. It now processes around 2,000 submissions per month. In total, over 50,000 submissions have been evaluated and placed on the leaderboard.
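To make this concrete, here is a minimal sketch of what such a scoring container's entrypoint could look like. The paths, file format, and metric are assumptions made for illustration (we assume the platform mounts the submission under /input and reads the scores back from /output/metrics.json); the actual contract is defined by the evaluation system's documentation.

```python
import json
from pathlib import Path

# Assumed layout: the platform mounts the participant's predictions under
# /input, the hidden ground truth ships inside the container image, and the
# leaderboard is populated from /output/metrics.json.
INPUT_DIR = Path("/input")
GROUND_TRUTH_DIR = Path("/opt/ground-truth")
OUTPUT_FILE = Path("/output/metrics.json")


def dice(pred: set, truth: set) -> float:
    """Dice overlap between two sets of foreground voxel indices."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))


def main() -> None:
    scores = []
    # Assumed format: one JSON file per case, listing foreground voxel indices.
    for truth_file in sorted(GROUND_TRUTH_DIR.glob("*.json")):
        truth = set(json.loads(truth_file.read_text()))
        pred_file = INPUT_DIR / truth_file.name
        pred = set(json.loads(pred_file.read_text())) if pred_file.exists() else set()
        scores.append(dice(pred, truth))

    mean_dice = sum(scores) / len(scores) if scores else 0.0
    OUTPUT_FILE.write_text(json.dumps({"aggregates": {"mean_dice": mean_dice}}))


if __name__ == "__main__":
    main()
```

Because the scoring logic is packaged as a container, it keeps running unchanged even after the researcher who wrote it has moved on.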

In the last few years, container technology has become more widely used, and several challenges have been organized where participants are asked to upload a container with their algorithm. The challenge organizers then run the secret test data through the container. We have developed a principled solution by adding the possibility for any user to create Algorithms: containers that perform computational tasks on uploaded input, typically a scan, and produce output, say a set of detected objects with their locations, a classification, or a segmentation. There is a flexible and extensible system of inputs and outputs to choose from for your Algorithm, and you can specify the computational requirements (CPU, RAM, GPU) needed to run it. Algorithms can be submitted as solutions to Challenges. In this way, it is possible to organize challenges where the test data is never released to the participants and, more importantly, where the submitted solutions are directly available to the Grand Challenge community. Physicians or clinical researchers can upload their own data, have Algorithms process it, and download the results. Such usage can be automated using the Grand Challenge API, as sketched below.
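As a hedged illustration of such automation, the sketch below uses plain HTTP calls against the REST API. The endpoint paths, the algorithm slug, the interface name, and the payload fields are assumptions made for this example; consult the API documentation (or the official Python client) for the actual interface.

```python
import time
import requests

API_ROOT = "https://grand-challenge.org/api/v1"        # REST API base URL
HEADERS = {"Authorization": "BEARER YOUR-API-TOKEN"}   # token from your account

# 1. Look up the Algorithm by a hypothetical slug.
algorithms = requests.get(
    f"{API_ROOT}/algorithms/",
    params={"slug": "example-lung-nodule-detector"},   # assumed slug
    headers=HEADERS,
).json()["results"]
algorithm_url = algorithms[0]["api_url"]

# 2. Create a job that runs the Algorithm on a previously uploaded image.
job = requests.post(
    f"{API_ROOT}/algorithms/jobs/",
    json={
        "algorithm": algorithm_url,
        "inputs": [
            {
                "interface": "generic-medical-image",  # assumed interface name
                "image": f"{API_ROOT}/cases/images/<uploaded-image-pk>/",
            },
        ],
    },
    headers=HEADERS,
).json()

# 3. Poll until the job finishes, then inspect its outputs.
while job["status"] not in ("Succeeded", "Failed"):
    time.sleep(30)
    job = requests.get(job["api_url"], headers=HEADERS).json()

print(job["status"], job.get("outputs"))
```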

We have also extended the platform to support various medical viewers that run in the browser and the possibility to set up Reader Studies. In a Reader Study, a user is presented with images and a set of questions. Questions can require annotations, for example, segmenting the liver. The organizers of the Reader Study can download the results via the website or via the API, as sketched below. With Reader Studies, researchers can carry out observer studies or set up the annotation efforts that are usually needed to run a challenge. You can even set up training programs for physicians; it is possible to provide immediate feedback after a question has been answered.
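A similarly hedged sketch of pulling Reader Study results through the API; the endpoint, filter parameter, and field names are assumptions for illustration:

```python
import csv
import requests

API_ROOT = "https://grand-challenge.org/api/v1"
HEADERS = {"Authorization": "BEARER YOUR-API-TOKEN"}

# Fetch all answers given in one reader study (assumed endpoint and filter)
# and flatten them into a CSV for downstream analysis.
answers = requests.get(
    f"{API_ROOT}/reader-studies/answers/",
    params={"question__reader_study": "<reader-study-pk>"},
    headers=HEADERS,
).json()["results"]

with open("answers.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["creator", "question", "answer"])
    writer.writeheader()
    for a in answers:
        writer.writerow({key: a.get(key) for key in writer.fieldnames})
```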

Features

Manage Your Data

  • Upload medical imaging data easily and securely
  • Control who has access to the data
  • View data with our globally available browser-based workstations

Train Expert Annotators

  • Create sets of questions that users must answer about a dataset
  • Invite clinical experts to take part in the training
  • Deliver immediate feedback on performance

Gather Annotations

  • Create your own set of questions for your dataset
  • Customise the hanging protocols and overlays
  • Use our intuitive workstations to view and report the images

Benchmark Algorithms

  • Manage your annotated training, test and validation data sets
  • Gather machine learning solutions for your clinical question
  • Objectively assess the performance of algorithms

Deploy Your Algorithms

  • Upload algorithm container images
  • Manage access for clinical and non-clinical researchers
  • Upload data for execution by your algorithm on our infrastructure

Find Certified Solutions

  • Filter to easily find solutions to your clinical questions
  • Compare product specifications
  • Verify CE and FDA certification

https://grand-challenge.org/