15 Comments

  1. I have just sent my predictions. My algorithm processes the sequences in the order of the input test volumes and then creates a CSV file, but the patient names are ordered differently from the CSV file provided with the test volumes. Is the validation algorithm able to match on the patient’s MRI ID in the CSV being evaluated?

    Thanks.

    • Hi Nick

      I have already answered this question to you via email but I think this question will help other participants.

      Our algorithm re-sorts your CSV file when we receive it, so it does not really matter what order your rows are in, as long as all the correct rows are somewhere in your submission file.

      Thanks

      Zhaohan
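
The re-sorting Zhaohan describes can be sketched in a few lines. This is a minimal illustration, not the organizers' actual validation code; the column name `patient_id` and the sample IDs are assumptions for the example.

```python
import csv
import io

# Hypothetical submission file with rows out of order; the header
# "patient_id" is an assumed column name, not the challenge's actual one.
submission = io.StringIO(
    "patient_id,score\n"
    "P03,0.91\n"
    "P01,0.88\n"
    "P02,0.95\n"
)

# Re-sort rows by patient ID so row order in the submission is irrelevant.
rows = sorted(csv.DictReader(submission), key=lambda r: r["patient_id"])
print([r["patient_id"] for r in rows])  # → ['P01', 'P02', 'P03']
```

Because the evaluator keys on the ID rather than the row position, any ordering that contains all the required rows is accepted.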

  2. Dear Organizers,

    Can you help me figure out where I can upload my abstract?

    Sincerely,
    LuoLuyang

  3. Hello there,

    I could not find any information on which template to use for the workshop paper. Should we use the Springer one? Is there a page limit, e.g. 8 pages?

    Also, is there a max word count on the preliminary abstract that needs to be submitted? Where should the abstract be uploaded?

    Kind regards,
    Rashed

  4. Hi, I was just wondering: there are 100 volumes for training and 25 volumes for testing, which have not been released yet. However, the supporting file ‘pre post ablation recrods.csv’ lists 154 studies in total. How can we utilize this information?

    • Hi Luyang, thanks for your interest in the challenge.

      The supporting file simply contains information on all of our data (which is much larger than 100) as to whether a particular scan is pre-ablation or post-ablation. You can simply match the IDs of the data in the training set with those of the same ID in the .csv file to obtain the ablation status of the 100 training data. This file is more a piece of “additional supporting information” and is not really useful for this challenge anyway; however, other participants insisted we include it, so we have uploaded it. That being said, we still encourage you to develop an approach based purely on the LGE-MRIs with no additional information, as the result will be a more robust method.

      Thanks
      Zhaohan

      • Thanks a lot for your reply. Our team initially thought the multi-modal data would help increase performance. Since the additional data were not acquired for every patient, we agree that purely using LGE-MRIs would be a more robust and meaningful approach.

        • Hi Luyang. Thanks for your feedback. All the best for the rest of the challenge.

          Thanks
          Zhaohan
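
The ID matching described in the reply above can be sketched as a simple dictionary lookup. This is only an illustration; the column names `study_id` and `ablation_status` and the sample IDs are assumptions, not the actual headers of the supporting file.

```python
import csv
import io

# Hypothetical supporting file; real headers and IDs will differ.
supporting_csv = io.StringIO(
    "study_id,ablation_status\n"
    "A001,pre\n"
    "A002,post\n"
    "A003,pre\n"
)

# Build an ID -> status lookup from the supporting file.
status = {row["study_id"]: row["ablation_status"]
          for row in csv.DictReader(supporting_csv)}

# IDs of the LGE-MRIs actually present in the training set.
training_ids = ["A002", "A003"]
training_status = {i: status[i] for i in training_ids}
print(training_status)  # → {'A002': 'post', 'A003': 'pre'}
```

Rows in the supporting file whose IDs do not appear in the training set are simply ignored, which is why the 154 listed studies are compatible with only 100 released training volumes.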

    • Hi Shamane, thanks for your interest in the challenge.

      The accuracy will be evaluated using the DICE score, as shown on the “Evaluation” page, computed for each 3D mask corresponding to each 3D LGE-MRI in the test set (which will be released later on). The DICE score measures the effectiveness of segmentation at a pixel-wise level, and you can easily find implementations for it online in many different languages. For example, if the test set had ten 3D LGE-MRIs, you would be required to submit ten 3D masks, each of which will be compared against the ground truth in our own database. Your overall score will be the average of the individual scores for each test volume.

      Thanks
      Zhaohan
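
The pixel-wise DICE score described above can be sketched as follows. This is a generic reference implementation, not the organizers' exact evaluation code, and the toy masks are placeholders for full LGE-MRI-sized volumes.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary 3D masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2x2x2 masks; real masks would match the LGE-MRI dimensions.
pred = np.zeros((2, 2, 2), dtype=bool)
pred[0] = True            # predicted mask: 4 voxels
gt = np.zeros((2, 2, 2), dtype=bool)
gt[:, 0] = True           # ground-truth mask: 4 voxels, 2 overlapping
print(dice_score(pred, gt))  # → 0.5

# The overall challenge score is the mean Dice over all test volumes:
# np.mean([dice_score(p, g) for p, g in zip(pred_masks, gt_masks)])
```

A score of 1.0 means perfect overlap with the ground truth, and 0.0 means no overlap at all.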

  5. Hi, I can’t open the website for the Data Access Agreement Form and the training data. Is there an alternative website?

    • Dear participant. Thanks for your interest in the challenge. The data agreement form should be accessible, and you are required to fill it in before gaining access to the training data. You will need a Google account to do so. Please let us know if you run into any other issues.

      Thanks

      Zhaohan

Comments are closed.