News & Updates

Challenge Leaderboard Released (Updated on Jun 16, 2023)

Hi Participants,

We are pleased to announce that our challenge leaderboard has been released (validation, https://www.synapse.org/#!Synapse:syn51471091/wiki/622548), and we welcome everyone to participate in the challenge!
When preparing your data, you can refer to our data specifications in the website (https://www.synapse.org/#!Synapse:syn51471091/wiki/622411), as well as example code and toolkits provided on our GitHub repository (https://github.com/CmrxRecon/CMRxRecon).

We look forward to seeing your excellent performance!

Best regards,
CMRxRecon Team

STACOM EquinOCS System Opens (Updated on 09, 2023)

Hi Participants,

The STACOM EquinOCS system is now open for submissions. There are two types of papers: regular and CMRxRecon. When submitting a challenge paper to the system, please remember to select CMRxRecon. For details on STACOM paper submission, please refer to the website: https://cmrxrecon.github.io/STACOM.html

Best Regards,
CMRxRecon Team

Supplementary Data (Updated on 08, 2023)

Hi Participants,

We have uploaded the manually segmented labels corresponding to the training dataset (please refer to our website for label explanations).
The two zip files are:
SingleCoil-Cine-SegmentROI.zip
SingleCoil-Mapping-SegmentROI.zip
In addition, we have uploaded the TI time tables corresponding to several T1 mapping cases that were missing from the previous dataset.
The zip file is:
supplement.zip
All the files are included in the folder 'CMRxRecon Supplement'.
The method to download the supplementary data is the same as before (guidance is attached to the email).
Please download the 'Supplement Downloadlink.txt' file to acquire the link and password. Feel free to contact us if you have any questions.

Best Regards,
CMRxRecon Team

Download Dataset via Google Drive (Updated on Jun 08, 2023)

Hi Participants,

Due to the high demand for downloading the dataset via OneDrive, we have recently received several emails about technical problems. A download link via Google Drive is now available in the DownloadLink.txt file, which you have previously visited. Some notes are included as well; please read them carefully. The Guidance on Data Access is attached to the email that we sent. Feel free to contact us if you have any questions.

Best regards,
CMRxRecon Team

Three Updates (Updated on May 19, 2023)

Hi Participants,

Thanks again for your interest in the CMRxRecon Challenge. We would like to kindly remind you of three updates.
1. We have uploaded a zip version of the dataset to OneDrive. Unfortunately, due to OneDrive restrictions, we are not allowed to upload files larger than 200 GB. Therefore, we uploaded several zip files with the same structure, and participants need to reassemble the folders themselves. The download link and password are updated in the new DownloadLink.txt. (https://www.synapse.org/#!Synapse:syn51476561/files/)
2. To check whether the dataset has been downloaded completely and correctly, we provide some Python scripts on our GitHub (https://github.com/CmrxRecon/CMRxRecon/tree/main/Download_Dataset_Check). The 'CMRxRecon.xlsx' file generated from our complete dataset is uploaded on the website (https://www.synapse.org/#!Synapse:syn51476561/files/).
3. The FAQ has been updated! (https://cmrxrecon.github.io/FAQ.html)
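As a quick local sanity check before (or in addition to) running the official scripts, the downloaded files can be compared against a reference listing of relative paths and sizes. This is only an illustrative sketch; the check scripts on the challenge GitHub remain authoritative:

```python
import os

def list_files(root):
    """Collect the relative path and byte size of every file under `root`."""
    entries = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            entries[os.path.relpath(path, root)] = os.path.getsize(path)
    return entries

def find_missing_or_mismatched(reference, local):
    """Return files that are absent locally or whose sizes differ.

    `reference` would be built from a known-complete copy (e.g. the
    information in 'CMRxRecon.xlsx'); this sketch assumes a plain
    {relative_path: size} mapping.
    """
    problems = []
    for rel, size in reference.items():
        if rel not in local:
            problems.append((rel, "missing"))
        elif local[rel] != size:
            problems.append((rel, "size mismatch"))
    return problems
```

Files that appear in the report should be re-downloaded before unzipping.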

Feel free to contact us if you have any questions.

Best regards,
CMRxRecon Team

FAQ

What is the estimated time for approval of participation request after sending the signed challenge rule?

If the signed challenge rule document is filled out correctly, the participation request will be approved within 2-4 business days.

Is it possible to download the challenge dataset without participating in the challenge?

No, the dataset is only available to participants during the challenge to ensure enough submissions.

What is the maximum number of team members allowed?

Each team can consist of up to 6 people. The authors of the submitted paper should match the team member list.

Can other datasets or pre-trained models be used to develop the reconstruction algorithms?

No, only the challenge dataset is allowed for developing the reconstruction algorithms. Data augmentation based on the training dataset is, however, allowed.

Is manual annotation of unlabeled images allowed to increase the training set? 

No, manual interventions of any kind are not allowed, including manual annotation of unlabeled images.

Can high-quality pseudo labels for the unlabeled images be manually selected? 

No, manual interventions of any kind are not allowed, including manual selection of high-quality pseudo labels for unlabeled images.

Does the validation submission affect the final ranking? 

No, the validation submission does not affect the final ranking.

Can modifications be made to the methods and the paper during the testing phase?

Yes, modifications can be made before the testing submission, but not after.

How should we evaluate multi-channel and single-channel data? Do we need to run both?

You can choose to calculate and save results for both multi-channel and single-channel data in their respective folders, or you can run only one of them. Our final evaluation score will be based on the higher of the two values.


How should we aggregate evaluations for different acceleration factors?

Your model should output results for all acceleration factors (4x, 8x, and 10x). We will evaluate the results for each acceleration factor and use the average value as the final result.
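The stated aggregation (evaluate each acceleration factor separately, then average) can be sketched as follows. The function and the example scores are illustrative only; the organizers' evaluation code is authoritative:

```python
def aggregate_score(scores_by_factor):
    """Average per-factor metric values into one final score.

    `scores_by_factor` maps each acceleration factor to the metric
    already computed over all of that factor's cases,
    e.g. {4: 0.91, 8: 0.85, 10: 0.80}.
    """
    factors = (4, 8, 10)
    missing = [f for f in factors if f not in scores_by_factor]
    if missing:
        raise ValueError(f"missing results for acceleration factors: {missing}")
    return sum(scores_by_factor[f] for f in factors) / len(factors)
```

Note that results for all three factors are required; a submission missing one factor cannot be scored this way.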


How can we perform model computation using the sensitivity map?

Our original data does not provide a sensitivity map, but the central 24 lines of our k-space are fully sampled and can be used as calibration lines for GRAPPA reconstruction. Alternatively, you can use the "myESP_SENSEmap.m" code provided at https://github.com/CmrxRecon/CMRxRecon/tree/main/CMRxReconDemo .
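Extracting that fully sampled central region (the auto-calibration signal) is straightforward with NumPy. The sketch below assumes the phase-encoding dimension is the second-to-last axis; check the challenge data specification for the actual dimension order:

```python
import numpy as np

def extract_acs(kspace, num_lines=24, axis=-2):
    """Slice out the fully sampled central lines of k-space.

    These lines can serve as the calibration region for GRAPPA or for
    ESPIRiT-style sensitivity map estimation (cf. myESP_SENSEmap.m on
    the challenge GitHub).
    """
    n = kspace.shape[axis]
    start = n // 2 - num_lines // 2
    idx = [slice(None)] * kspace.ndim
    idx[axis] = slice(start, start + num_lines)
    return kspace[tuple(idx)]
```

The resulting array keeps all other dimensions (coils, frames) intact, so it can be fed directly to a calibration routine.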


What should we do if the k-space sizes are not uniform?

We recommend zero-padding the k-space data to a consistent size for reconstruction, and then cropping the results back to their original dimensions.
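A minimal sketch of that pad-then-crop round trip, assuming the two trailing axes are the spatial k-space axes:

```python
import numpy as np

def pad_to(kspace, target_ky, target_kx):
    """Symmetrically zero-pad the last two dims to a common size."""
    ky, kx = kspace.shape[-2:]
    pad_y, pad_x = target_ky - ky, target_kx - kx
    pad = [(0, 0)] * (kspace.ndim - 2) + [
        (pad_y // 2, pad_y - pad_y // 2),
        (pad_x // 2, pad_x - pad_x // 2),
    ]
    return np.pad(kspace, pad)

def crop_to(image, orig_ky, orig_kx):
    """Crop the last two dims back to the original size."""
    ky, kx = image.shape[-2:]
    y0, x0 = (ky - orig_ky) // 2, (kx - orig_kx) // 2
    return image[..., y0:y0 + orig_ky, x0:x0 + orig_kx]
```

Keeping the padding symmetric preserves the k-space center, which matters for the fully sampled calibration lines.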


Should we compute the evaluations after cropping the images? Specifically, should we evaluate the entire image or only the cardiac region?

For cine, the evaluations should be performed on the entire image. For mapping, we will provide segmentation labels for the myocardium, and only the mapping parameter values within the segmented region should be used to calculate the RMSE as the ranking metric.
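The masked RMSE for the mapping task can be computed as below. This mirrors the stated ranking metric, but the organizers' evaluation code is authoritative:

```python
import numpy as np

def masked_rmse(pred_map, ref_map, myo_mask):
    """RMSE of mapping parameter values restricted to the myocardium.

    `myo_mask` is a boolean array derived from the provided
    segmentation labels; only voxels inside it contribute.
    """
    diff = pred_map[myo_mask] - ref_map[myo_mask]
    return float(np.sqrt(np.mean(diff ** 2)))
```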


How should the final data be saved?

Please follow the Submission example we provided and save the results in .mat format, with the data dimensions consistent with the original data (without the coil dimension). During the validation phase, to reduce file size, please use the "run4Ranking.m" code provided to crop the image data and save it with the same directory structure. All file names should match the original data names. For more details, please refer to the SubmissionFormat.txt file.
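In Python, saving a coil-combined result to .mat can be done with SciPy. The variable name 'img4ranking' below is an example only; the exact variable name, file names, and directory layout must follow SubmissionFormat.txt and the provided run4Ranking.m:

```python
import numpy as np
from scipy.io import savemat

def save_result(recon, out_path):
    """Save a coil-combined reconstruction (no coil dimension) to .mat.

    float32 keeps file sizes small; the authoritative precision and
    naming conventions are in SubmissionFormat.txt.
    """
    savemat(out_path, {"img4ranking": np.asarray(recon, dtype=np.float32)})
```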


Do we need to upload the training code?

We do not require teams to upload the training code. Only a Docker image of the inference code is needed for participation.


Should different undersampling factors use the same reconstruction model?

You can use different models, but the code should automatically determine which model to use. During the testing phase, the code is only given the root directory path once.


Should different views use the same reconstruction model?

You can use different models, but the code should automatically determine which model to use. During the testing phase, the code is only given the root directory path once.


Do we need to use the same reconstruction model for T1mapping and T2mapping?

You can use different models, but the code should automatically determine which model to use. During the testing phase, the code is only given the root directory path once.
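The dispatch pattern described in the answers above (one entry point, multiple models selected from the input path) can be sketched as follows. The 'AccFactor04'-style folder names are an assumption about the data layout and should be checked against the data specification:

```python
import os

def pick_model(case_dir, models):
    """Choose a reconstruction model from the case's directory name.

    `models` maps an acceleration factor (e.g. 4, 8, 10) to a model;
    folder names are assumed to end in the zero-padded factor,
    e.g. 'AccFactor04', 'AccFactor08', 'AccFactor10'.
    """
    name = os.path.basename(os.path.normpath(case_dir))
    for factor, model in models.items():
        if name.endswith(f"{factor:02d}"):
            return model
    raise ValueError(f"cannot infer acceleration factor from {case_dir!r}")
```

The same idea extends to selecting by view or by T1/T2 mapping, keyed on other path components, so the whole pipeline still needs only the root directory as input.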


Can we preprocess the data?

We do not restrict data preprocessing in any way; we only evaluate the final reconstruction quality. However, the total running time for all data in each task must not exceed 4 hours.


Is motion correction between different frames necessary?

No, it can be done but is not mandatory.


Is it necessary to introduce data fidelity terms? 

It can be done but is not mandatory. We only evaluate the test results and do not impose specific requirements on the process.


Do we need to upload the fitted parameter maps? 

No, there is no need to upload them. We will perform the fitting calculations during the evaluation.

Rules

1. All individuals who wish to participate in this challenge are required to register using their real name, affiliation details (including department and full university/institute/company name, and country), and affiliation e-mail addresses. Incomplete and repetitive registrations will be removed without any prior notice. Each team is permitted to have a maximum of six members.

2. During the validation and training phase, all participants must submit a complete solution to this challenge, which includes a Docker image (in tar file format) and a qualified methodology paper (of at least 8 pages, in LNCS format).

3. All participants must agree that the short papers they submit can be made publicly available on the challenge website, and that organizers can use information provided by participants, including scores, predicted labels, and papers.

4. Participants are not allowed to register multiple teams or accounts. The CMRxRecon Organizers reserve the right to disqualify such participants.

5. Redistribution or transfer of data or data links is prohibited. Participants must use the data solely for their own purposes. 

6. Participants should develop fully automated methods based solely on the training set, and no manual interventions (such as manual annotation of cases) are allowed.