Linxi Chen’s STA399 Journey

Hi everyone! My name is Linxi Chen and I’m finishing my third year at the University of Toronto, pursuing a statistics specialist and a mathematics major. I did an STA399 research opportunity program with Professor Pascal Tyrrell from May 2021 to August 2021, and I am very grateful to have had this opportunity. This project gave me the chance to really understand machine learning and scientific research, and I would love to share my experience with you all.

Initially, I had no experience with machine learning or convolutional neural networks, and integrating machine learning with medical imaging was a brand-new area for me. Therefore, at the beginning of the project, I read widely about machine learning to gain a general picture of the field. My first assignment was to make a slide deck on machine learning in medical imaging, and extracting and simplifying the information I had gathered helped me understand the area more deeply.

My research project was to find an objective metric for heterogeneity and to explore how dataset heterogeneity affects CNN training, as measured on the features extracted from the training images, across different sample sizes. At first, pinning down the term “heterogeneity” was a big challenge, since there are many competing definitions online and very little information directly related to my project. By comparing sources and talking with Professor Tyrrell in our weekly meetings, I managed to define “between-group heterogeneity” as the extent to which the measurements of each group vary within a dataset, taking into account the mean of each subgroup and the grand mean of the population.

Designing the experimental setup was also challenging, because I had to ensure every step was both feasible and explainable. The dataset was separated into groups according to the label of each image, and we introduced new groups into the dataset while keeping the total sample size the same in each case. There were four cases in total, and between-group heterogeneity was measured using Cochran’s Q, a test statistic that follows a chi-squared distribution. The setup was modified several times, because problems came up from time to time. For example, I originally planned to use a multi-class CNN model, but it turned out that the model would need a different number of output neurons in each case, which made the results incomparable. Professor Tyrrell and Mauro suggested a binary classification model with pseudo-labels, which solved this problem. Luckily, I found some code online, and with Mauro’s help I adapted it into something usable. Then came the hardest part I encountered: although my expectations made sense on paper, the results I got were not what I expected. After modifying the model and the sample size several times, I finally managed to get the expected results.
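
To give a concrete sense of the metric, below is a minimal sketch of how between-group heterogeneity can be scored with Cochran’s Q. This is an illustration rather than the code we actually used: it assumes each group is summarized by the mean of a single image feature, and the group means, variances, and sizes are all hypothetical.

```python
# Sketch: scoring between-group heterogeneity with Cochran's Q.
import numpy as np
from scipy import stats

def cochrans_q(group_means, group_vars, group_sizes):
    """Cochran's Q for k group means against the weighted grand mean.

    Under homogeneity, Q follows a chi-squared distribution with
    k - 1 degrees of freedom.
    """
    means = np.asarray(group_means, dtype=float)
    # Each group is weighted by the inverse variance of its mean (n / s^2).
    weights = np.asarray(group_sizes, dtype=float) / np.asarray(group_vars, dtype=float)
    grand_mean = np.sum(weights * means) / np.sum(weights)
    q = np.sum(weights * (means - grand_mean) ** 2)
    p_value = stats.chi2.sf(q, df=len(means) - 1)
    return q, p_value

# Hypothetical case: four equal-sized groups, one summary feature per group.
q, p = cochrans_q(group_means=[0.42, 0.47, 0.55, 0.61],
                  group_vars=[0.010, 0.012, 0.009, 0.011],
                  group_sizes=[250, 250, 250, 250])
print(f"Q = {q:.2f}, p = {p:.4f}")
```

A large Q (small p-value) relative to the chi-squared reference signals that the group means differ more than sampling variation alone would explain, i.e. between-group heterogeneity.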

Overall, I learned a lot from this ROP program. With the guidance of Professor Tyrrell and the help of the students in the lab, I gained an in-depth understanding of machine learning, neural networks and the process of scientific research, and I became much more familiar with writing a formal scientific research paper. The most valuable things I took away from this experience are problem-solving skills and the ability not to get frustrated when things go wrong. I would like to thank Professor Tyrrell for giving me this opportunity to learn about scientific research and for helping me overcome all the challenges I encountered along the way. I’m very grateful for the many valuable skills I gained in this project. I would also like to thank Mauro and all the members of the lab for giving me so much help with my project.

Linxi Chen

Parinita Edke’s ROP experience in the Tyrrell Lab!

Hi! My name is Parinita Edke and I’m finishing my third year at UofT, specializing in Computer Science with a minor in Statistics. I did an STA399Y research project with Professor Tyrrell from September 2020 – April 2021 and I am excited to share my experience in the lab!

I have always been interested in medicine and in applying Computer Science and Statistics to solve problems in the medical field. I was looking out for research opportunities at this intersection and was excited when I saw Professor Tyrrell’s ROP posting. I applied prior to the second-round deadline and waited to hear back. Almost two weeks after the deadline, I still had not heard anything, so I decided to follow up on the status of my application. I quickly received a reply from Professor Tyrrell saying that he had already picked his students before receiving my application. While this was extremely disappointing, I thanked Professor Tyrrell for his time, expressed that I was still interested in working with him during the year, and attached my application package to the email. I was not really expecting anything to come of this, so I was extremely happy when I received an invitation to interview! After a quick chat with Professor Tyrrell about my goals and fit for the lab, I was accepted as an ROP student!

Soon after being accepted, I joined my first lab meeting where I was quickly lost in the technical machine learning terms, the statistical concepts and the medical imaging terminology used. I ended the meeting determined to really begin understanding what machine learning was all about!

This marked the beginning of a long and challenging journey through my project. When I decided on my project, it seemed interesting because solving the problem would answer some cool questions. The task was to detect the presence of blood in ultrasound images of the knee joint; my project was to determine whether the Fourier Transform could be used to generate features that perform this task well. It seemed quite straightforward at first: simply generate Fourier-transformed features and run a classification model to get the outputs, right? Having completed the project, I am here to tell you that it was far from straightforward; progress was more of a zigzag.

The first challenge I faced was understanding the theory behind the Fourier Transform and how it applies to the task at hand. This took me quite some time to fully grasp and was definitely one of the more challenging parts of the project. The next challenge was figuring out the steps and the pieces I would need. Rajshree, a previous lab member, had done some initial work using a CNN+SVM model, so I first tried to replicate her results to create a baseline to compare my approach against. It took me some time to understand what each line of Rajshree’s code did, but when I finally got it to work, I felt amazing! Reading through her code also gave me experience with the common Python libraries used in machine learning, so building my own model went much quicker. When I ran my model for the first time, I felt incredible! The process was incredibly frustrating at times, but when I saw results for the first time, all the struggle felt worth it. Throughout this process of figuring out the project steps and building the model, Mauro was always there to help, enthusiastically answering any questions I had and encouraging me to keep going.
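
To make “Fourier-transformed features” concrete, here is a minimal sketch of the kind of pipeline I mean; it is not my actual code, and the images, labels, and model settings below are hypothetical stand-ins. It computes the centred log-magnitude spectrum of each grayscale image and feeds the flattened spectra to an SVM.

```python
# Sketch: Fourier-transformed features -> SVM classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fourier_features(images):
    """Flatten the centred log-magnitude spectrum of each grayscale image."""
    feats = []
    for img in images:
        spectrum = np.fft.fftshift(np.fft.fft2(img))  # low frequencies at centre
        log_mag = np.log1p(np.abs(spectrum))          # compress the dynamic range
        feats.append(log_mag.ravel())
    return np.array(feats)

# Hypothetical stand-in data: 200 grayscale "ultrasound" images with
# binary labels (blood present / absent).
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

X = fourier_features(images)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=5).mean())
```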

Throughout the process, Professor Tyrrell was always there as well: during our weekly ROP meetings, he always reminded us to think about the big picture of our projects and the objectives we were trying to accomplish. I definitely veered off in the wrong direction at times, but Professor Tyrrell was quick to pull me back and point me the right way. Without this guidance, I would not have been able to execute and finish the project in a way that I am proud of.

Looking back at the year, I am astonished at how much I have learned and how much I have grown. Everything that I learned, not only about machine learning, but about writing a research paper, learning from others and my own mistakes, collaborating with others, learning from even more of my own mistakes, and persevering when things get tough, will stay with me throughout the rest of my undergraduate studies and my professional career.

Thank you, Professor Tyrrell, for taking a chance on me. You could have simply passed on my application, but the fact that you took a chance and accepted me into the course led to an invaluable experience that I truly appreciate. The experiences and the connections I have made in this lab have been a highlight of my year, and I hope to keep contributing to the lab in the future!

Parinita Edke

Stanley Hua in ROP299: Joining the Tyrrell Lab during a Pandemic

My name is Stanley Hua, and I’ve just finished my 2nd year in the bioinformatics program. I have also just wrapped up my ROP299 with Professor Pascal. Though I have yet to see his face outside of my monitor screen, I cannot begin to express how grateful I am for the time I’ve been spending at the lab. I remember very clearly the first question he asked me during my interview: “Why should I even listen to you?” Frankly, I had no good answer, and I thought that the meeting didn’t go as well as I’d hoped. Nevertheless, he gave me a chance, and everything began from there.

Initially, I got involved with quality assessment of Multiple Sclerosis and Vasculitis 3D MRI images along with Jason and Amar. Here, I was introduced to the many things Dmitrii can complain about when it comes to brain MRI images. Factors such as scanner bias, artifacts, imaging modality and disease prevalence all play a role in how we can leverage these medical images to train predictive models.

My actual ROP, however, revolved around a niche topic in Mauro and Amar’s project, which sought to understand the effect of dataset heterogeneity on training Convolutional Neural Networks (CNNs) through cluster analysis of CNN-extracted image features. After extracting image features with a trained CNN, we end up with a high-dimensional vector representing each image. As a preprocessing step, the dimensionality of these features is reduced via Principal Component Analysis, keeping some number of principal components (PCs), e.g. 10. The question must then be asked: how many principal components should we use in this methodology? Though it’s a very simple question, I took way too many detours to answer it. I looked at standardization vs. no standardization before PCA, nonlinear dimensionality reduction techniques (e.g. autoencoders) and comparisons of neural network image representations (via SVCCA), among other things. Finally, I proposed an equally simple method for choosing the number of PCs in this context: the minimum number of PCs that yields the most frequent resulting value from the original methodology.
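
To illustrate the heuristic, here is a toy sketch, not the lab’s actual code. It assumes the “resulting value” of the original methodology is the number of clusters chosen on the reduced features (here picked by silhouette score, purely as a stand-in), sweeps the number of PCs, and keeps the smallest number whose result matches the most frequent one. The data and parameter ranges are hypothetical.

```python
# Sketch: pick the smallest number of PCs whose downstream result
# matches the most frequent result across all settings.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def chosen_num_clusters(X, k_range=range(2, 7), seed=0):
    """Stand-in for the original methodology: choose k by silhouette score."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get)

def pick_num_pcs(features, pc_range=range(2, 21)):
    X = StandardScaler().fit_transform(features)  # standardize before PCA
    results = {n: chosen_num_clusters(PCA(n_components=n).fit_transform(X))
               for n in pc_range}
    modal = Counter(results.values()).most_common(1)[0][0]  # most frequent result
    return min(n for n, r in results.items() if r == modal)  # smallest such n

# Hypothetical CNN-extracted features: 300 images x 512 dimensions.
feats = np.random.default_rng(0).random((300, 512))
print(pick_num_pcs(feats))
```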

Regardless of the difficulty of the question I sought to answer, I learned more about practices in research, and I even learned about how research and industry intermingle. I have only Professor Pascal to thank for always explaining things in a way that a dummy such as me would understand. Moreover, Professor Pascal always focused on impact: is what you’re doing meaningful, and what are its applications?

I believe that the time I spent with the lab has been worthwhile. It was also here that I discovered that my passion for data science trumps my passion for medical school (big thanks to Jason, Indranil and Amar for breaking my dreams). Currently, I look towards a future where I can drive impact with data, maybe even in the field of personalized medicine or computational biology. Whoever is reading this, feel free to reach out! Hopefully, I’ll be the next Elon Musk by then…

Transiently signing out,

Stanley Bryan Z. Hua