Yan Qing Lee’s ROP299 Journey

Hi! I’m Yan Qing Lee, an incoming third-year undergraduate double-majoring in Computer Science and Psychology. This past summer, I was given the opportunity to embark on my first research project in the field of artificial intelligence, and I’m excited to share my experience.

My research investigated whether individuals who receive a false-positive mammogram result from an AI model have a higher risk of receiving a breast cancer diagnosis later on. Past studies have found that receiving a false-positive mammogram result from radiologists is associated with a higher risk of future breast cancer, but no studies have yet investigated whether this holds true for AI breast cancer detection models. In this project, I used a longitudinal dataset of breast cancer mammograms and ran a trained AI breast cancer classifier, an ensemble of four ConvNeXt-Small models, to obtain false-positive and true-negative results. Cox proportional hazards models were then used to estimate the hazard ratio associated with receiving a false-positive result, both from the AI model and from radiologists.

As a student who entered the Computer Science major out-of-stream, I started the ROP feeling really out of place. Although I’d long known I wanted to pursue AI, I had no real experience in either AI or medical imaging, and I wondered if I was too under-qualified for this experience. Still, I was determined to put in as many hours as I needed to succeed.

I first began by familiarizing myself with ML terms and choosing an area of interest (breast cancer mammography) around which to formulate a research question. As I’m sure other ROP students would agree, this process was extremely challenging; as the weeks passed, I found that my research questions were always either over-ambitious or not feasible. Over time, however, I realized that my difficulty stemmed from my limited knowledge of exactly how ML models work, and of the existing literature and gaps in breast cancer mammography. As I dug deeper into the literature, one interesting finding regarding radiologists’ false-positives caught my eye, and this finally led me to my research question.

Once I began working on my project, the many challenges of research revealed themselves to me. These included the difficulties of downloading and parsing a large dataset; of installing packages and working around incompatible library versions to set up a working environment; and, worst of all, of discovering that the AI breast cancer detection model I had originally centered my project around was not as replicable as I had assumed. Even though I had deliberately kept my research question relatively simple, the process of setting up the environment, debugging preprocessing code, training and running an AI breast cancer classification model, and working through undesirable training results was nothing short of complicated. Still, with the weekly lab meetings keeping me on track, and the support of Dr. Tyrrell, Mauro and the other students in the lab, I slowly but surely overcame every obstacle, and learned immense amounts every week to successfully complete my project. Even though I had to find a new AI model near the end and redo my experiments, I found that with my experience from the previous model, I could independently set up and run the new one much more efficiently than before. It was proof of how much I’d learned, and I’m glad to be able to look back and be proud of how much I accomplished in the span of a few months.

At the end of it all, I have to thank Dr. Tyrrell for fostering my passion towards AI and its applications in fields as impactful and important as breast cancer mammography. This experience only made me more excited to delve into the applications of AI in other fields in the future, and I can’t thank the MiData lab enough for this experience.

MiWord of the Day is… Region of Interest!

Look! You’ve finally made it to Canada! You gloriously take in the view of Lake Ontario when your friend beside you exclaims, “Look, they have beaver tails!” You excitedly scan the lake, asking, “Where?” 

“There!”

“Where?”

“There!”

You see no movement from the lake. It isn’t until your friend pulls you to the front of a storefront that says “BeaverTails”, with a picture of delicious pastries, that you realize they didn’t mean actual beavers’ tails. It turns out you were looking in the wrong place the whole time!

Oftentimes, it’s easy for us to quickly identify objects because we know the context of where things should be. These are the kinds of things we take for granted, until it’s time to hand the same tasks over to machines.

In medical imaging, experts label what are called Regions of Interest (ROIs): specific areas of a medical image that contain pathology, such as the area of a lesion. Labelled ROIs are important because they prevent time from being wasted analyzing irrelevant areas of an image, especially since medical images contain complex structures that take time to interpret. In machine learning (ML) for medical imaging, labelled ROIs are also useful for training models that classify whether a medical image contains a pathology: with ROIs identified, images can be cropped during preprocessing so that only the relevant areas are compared, helping the model learn the differences between positive and negative images faster.
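
That cropping step might look something like this (a hypothetical sketch; the image, the ROI coordinates, and the margin are all invented for illustration):

```python
import numpy as np

def crop_roi(image, roi, margin=0):
    """Crop a 2D image to a labelled ROI bounding box, with an optional margin."""
    x_min, y_min, x_max, y_max = roi
    h, w = image.shape
    # Clamp the (optionally padded) box to the image bounds.
    x0, y0 = max(0, x_min - margin), max(0, y_min - margin)
    x1, y1 = min(w, x_max + margin), min(h, y_max + margin)
    return image[y0:y1, x0:x1]

# A toy 100x100 "mammogram" with a bright 10x10 lesion at (40, 40).
img = np.zeros((100, 100))
img[40:50, 40:50] = 1.0
patch = crop_roi(img, roi=(40, 40, 50, 50), margin=5)
print(patch.shape)  # (20, 20): the lesion plus a small border of context
```

The model then trains on small, lesion-centred patches instead of full-resolution images, which is exactly the speed-up described above.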

In fact, ROIs are so important that an entire field of artificial intelligence works on finding them automatically: computer vision. Computer vision focuses on automating the extraction of regions of interest from images or videos, which plays a critical role in mechanizing tasks like object detection and tracking for applications such as self-driving cars. In object detection, for example, ROI Pooling can be used: each candidate ROI is divided into a fixed grid of bins over the network’s feature maps, and the maximum value in each bin is kept, producing a fixed-size summary of the features inside that region. This makes it possible to identify many objects at once – extremely useful once you’re on the road and there are 10 other cars around you!
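
Here is a simplified, NumPy-only sketch of that pooling idea (real detectors apply it to CNN feature maps inside frameworks like PyTorch; the sizes here are toy values):

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_size=(2, 2)):
    """Max-pool one ROI of a 2D feature map into a fixed-size grid."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1]
    oh, ow = out_size
    # Split the region into an oh x ow grid of roughly equal bins.
    ys = np.linspace(0, region.shape[0], oh + 1).astype(int)
    xs = np.linspace(0, region.shape[1], ow + 1).astype(int)
    out = np.empty(out_size)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out

fmap = np.arange(64, dtype=float).reshape(8, 8)
pooled = roi_max_pool(fmap, roi=(0, 0, 4, 4))
print(pooled)  # every ROI, whatever its size, becomes a fixed 2x2 summary
```

Because every ROI comes out the same size, the detector can score many candidate regions with one shared classifier head.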

Now, the fun part: using Region of Interest in a sentence!

Serious: The coordinates of ROIs are given for the positive mammogram images in the dataset I’m using. Maybe I could use Grad-CAM to see if the ML breast cancer classification model I’m using uses the same regions of the image to arrive at its classification decision; this way, I can see if its decision making aligns with the decision making of radiologists.

Less serious: I forced my friend to watch my favorite movie with me, but I can’t lie – I think the attractive male lead was her only region of interest!

See you in the blogosphere,

Yan Qing Lee

Yuxi Zhu’s ROP Journey

Hi, I am Yuxi Zhu, a Bioinformatics and Computational Biology Specialist and Molecular Genetics Major who just finished my second year. Like many students, I was embarking on my first formal research experience. Professor Tyrrell warned me from the start that I would need to be independent in this lab, but my genuine interest in ML and its applications gave me the confidence to take on the challenge. Overall, this summer’s ROP journey in the MiDATA lab was filled with both excitement and challenges.

The first challenge was finding a research question. I’m incredibly grateful to Daniel, a volunteer and former ROP student, who introduced me to the concept of “adversarial examples” and helped me formulate my research question from the start. During the first two months of literature review, I often found myself diving too deeply into theoretical aspects that were less applicable to medical imaging, or exploring questions that, while feasible, didn’t capture my interest. Luckily, I was able to settle on investigating the differential effects of random perturbations (like random noise and loss of resolution) and non-random adversarial perturbations on the model.

As the project progressed, I encountered a series of obstacles and bugs that required constant problem-solving and debugging. For example, my initial findings showed very low performance, all under 50%. Professor Tyrrell pointed out that the accuracy of a binary classifier should never drop below 50%, as that would mean it’s performing worse than a random model. I quickly realized there were bugs in my code and implementation. Additionally, after obtaining results, I thought interpreting them would be straightforward. However, when Professor Tyrrell asked me why adversarial perturbations led to accuracies below 50% while the others didn’t, I found myself at a loss for words. In the end, with Professor Tyrrell’s guidance, I was able to interpret the results correctly and articulate them in my report.

Despite the stress I felt before presenting my findings at our weekly meetings, these sessions became invaluable learning experiences. Professor Tyrrell would scrutinize my work with questions and critiques, pushing me to think more deeply and critically about every aspect of my research. The other lab members also provided very helpful insights and shared their work. These meetings not only allowed me to understand what others were working on but also gave me the chance to get involved in or observe lively discussions that often took place. 

Looking back on the last few months, this experience has been invaluable. I am deeply thankful to Professor Tyrrell who offered me this wonderful opportunity in ML and guided me through my research project. I especially appreciate how we weren’t just taught to implement a given research project or conduct a specific experiment; we were taught how to find gaps and how to conduct research. I also want to express my gratitude to Daniel for his support and insights when I was in doubt, and to Atsuhiro for his helpful suggestions. Completing my first-ever research project was challenging yet rewarding, and I am grateful for all the guidance and help I received. I’m confident that what I have learned will stay with me in my future research and career.

Today’s MiWORD of the day is… Adversarial Example!

According to the dictionary, the term “adversarial” refers to a situation where two parties or sides oppose each other. But what about the “adversarial example”? Does it imply an example of two opposing sides? In a way, yes.

In machine learning, an example is one instance of the dataset. Adversarial examples are examples with carefully calculated, imperceptible perturbations that trick the model into a wrong prediction while looking unchanged to humans. So “adversarial”, in this case, indicates opposition between someone (a human attacker) and the model: adversarial examples are intentionally crafted to trick the model by exploiting its vulnerabilities.

How does it work? There are many ways to find weak spots and generate adversarial examples, but the Fast Gradient Sign Method (FGSM) is one classic approach; the goal is to make small changes to a picture so that the model outputs the wrong prediction. First, we feed the picture to the model. Assume the model outputs the correct prediction, so the loss function, which represents the difference between the prediction and the true label, will be low. Second, we compute the gradient of the loss function with respect to the input, which tells us whether to add or subtract a small value epsilon at each pixel to make the loss bigger. Epsilon is typically very small, resulting in a tiny change to each value. Now we have a picture that looks the same as the original but tricks the model into a wrong prediction!
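
The recipe above can be sketched end-to-end on a toy linear classifier rather than a real image model (the weights, the input, and epsilon are all invented for illustration; for this particular model the gradient of the cross-entropy loss with respect to the input works out to (p − y)·w):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny linear "image classifier": p(positive) = sigmoid(w . x)
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = w / np.linalg.norm(w)   # an input the model classifies confidently as positive
y = 1.0                     # its true label

p = sigmoid(w @ x)
# Gradient of the cross-entropy loss with respect to the INPUT pixels.
grad_x = (p - y) * w

# FGSM step: nudge every pixel by epsilon in the direction that increases the loss.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x), sigmoid(w @ x_adv))  # confidence collapses after the nudge
```

Each pixel moves by at most 0.2, yet the prediction flips, which is the whole trick: many tiny, coordinated changes add up in exactly the direction the model is most sensitive to.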

One exciting property of adversarial examples is their transferability. It is known that adversarial examples created for one model can also trick other unknown models. This might be due to inherent flaws in the pattern recognition mechanisms of all models and, sometimes, model similarities, allowing these adversarial examples to exploit common vulnerabilities and lead to incorrect predictions.

Now, use “adversarial example” in a sentence by the end of the day: 

Kinda Serious: “Oh I can’t believe my eyes. I am seeing a dog right here and the model says it’s a cupcake…So you’re saying it might be an adversarial image? What even is that? The model is just dumb.”

Less Serious: Apparently, the movie star has an adversarial relationship with the media, but which stars have a good relationship with the media nowadays?

See you in the blogosphere,

Yuxi Zhu

MiWord of the Day is… Learned Perceptual Image Patch Similarity (LPIPS)!

Imagine you’re trying to compare two images—not just any images, but complex medical images like MRIs or X-rays. You want to know how similar they are, but traditional methods like simply comparing pixel values don’t always capture the whole picture. This is where Learned Perceptual Image Patch Similarity, or LPIPS, comes into play.

Learned Perceptual Image Patch Similarity (LPIPS) is a cutting-edge metric for evaluating perceptual similarity between images. Unlike traditional methods like Structural Similarity Index (SSIM) or Peak Signal-to-Noise Ratio (PSNR), which rely on pixel-level analysis, LPIPS utilizes deep learning. It compares images by passing them through a pre-trained convolutional neural network (CNN) and analyzing the features extracted from various layers. This approach allows LPIPS to capture complex visual differences more closely aligned with human perception. It is especially useful in applications such as evaluating generative models, image restoration, and other tasks where perceptual accuracy is critical.
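
To see why comparing features can track perception better than comparing pixels, here is a deliberately crude caricature (the “network” below is just a gradient extractor, whereas real LPIPS uses learned weights over pretrained CNN activations, commonly via the lpips Python package): a brightness shift and additive noise are constructed to have roughly the same pixel-level MSE, yet only the noise disturbs the features.

```python
import numpy as np

def edge_features(img):
    """Stand-in for CNN features: horizontal and vertical gradient maps."""
    return np.stack([np.diff(img, axis=0)[:, :-1], np.diff(img, axis=1)[:-1, :]])

def pixel_mse(a, b):
    return float(((a - b) ** 2).mean())

def toy_perceptual_distance(a, b):
    # The LPIPS idea, minus the learning: compare feature maps, not raw pixels.
    return pixel_mse(edge_features(a), edge_features(b))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
brighter = img + 0.2                         # uniform brightness shift
noisy = img + rng.normal(0, 0.2, (32, 32))   # same MSE scale, texture destroyed

print(pixel_mse(img, brighter), pixel_mse(img, noisy))              # both ~0.04
print(toy_perceptual_distance(img, brighter),
      toy_perceptual_distance(img, noisy))   # ~0 vs clearly nonzero
```

Pixel MSE rates the two degradations as equally bad, while the feature-space distance correctly treats the brightness shift as near-identical and the noise as a real change, which is the behaviour LPIPS is built to capture.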

Why is this important? In medical imaging, where subtle differences can be crucial for diagnosis, LPIPS provides a more accurate assessment of image quality, especially when images have undergone various types of degradation, such as noise, blurring, or compression.

Now, let’s use LPIPS in sentences!

Serious: When evaluating the effectiveness of a new medical imaging technique, LPIPS was used to compare the generated images to the original scans, showing that it was more sensitive to perceptual differences than traditional metrics.

Less Serious: I used LPIPS to compare my childhood photos with recent ones. According to the metric, I’ve definitely “degraded” over time!

See you in the blogosphere!

Jingwen (Lisa) Zhong

Jingwen (Lisa) Zhong’s ROP299 Journey

Hi all! My name is Jingwen (Lisa) Zhong. I’m a Data Science Specialist and Actuarial Science Major at UofT, graduating in 2026. I’m really happy and honored to have joined Prof. Tyrrell’s lab in the summer of 2024 as an ROP299 student. This was my first research project, and it has truly exercised many of my research and scientific skills, such as literature review, critical thinking, and the ability to get familiar with a brand-new field.

Coming into the lab, I had no research experience and no prior knowledge of medical imaging. As a student just finishing my second year of study, I felt curious about machine learning and artificial intelligence because these topics are so widely discussed. However, I still can’t forget how uneasy I felt during the first few weeks as I tried to think of a research question related to medical images and machine learning. I’m incredibly thankful to Prof. Tyrrell, who ‘relentlessly’ pointed out issues during each lab meeting, and to the lab volunteers, Daniel and Atsuhiro, who were always willing to help and guide me through the process. I couldn’t have gotten my project ready for implementation without their support. After a month of struggle, I finally settled on my research topic: investigating whether LPIPS is a better metric for assessing the similarity of medical images compared to PSNR and SSIM under various degradation conditions.

Having a research question is just the beginning; implementing it is another huge mountain to climb. I remember how excited I was when my research question was finally approved. I worked hard that week to implement almost all the code for my project. If I could go back, I would approach this differently. Instead of diving straight into coding, I would first take the time to design the entire study process—splitting the dataset, testing the code on a smaller dataset, figuring out how to use the GPU, then applying the code to the full dataset, and finally choosing the appropriate statistical analysis. I say this because I stumbled at each of these steps. After completing my code, I found that it ran so slowly that it would take several days to get results. So, I began the process of figuring out how to set up the environment to run on the lab’s GPU. This process took me almost two weeks, but with the help of other ROP students, I finally got the code running on the GPU.

Once the GPU problem was solved, my results came in much faster. However, the next obstacle was interpreting these results. As a Data Science student, it’s hard to admit, but I hadn’t yet learned ANOVA. Initially, I turned to ChatGPT for help, but the results weren’t ideal. Prof. Tyrrell suggested that I use SAS to perform ANOVA, which provided me with ideal and comprehensive results. So, I learned how to use SAS—a very powerful statistical analysis tool compared to Python.

Through this ROP experience, I learned the importance of communication and teamwork. Although we worked on different projects, the weekly lab meetings were incredibly helpful. It was a place where everyone’s intelligence came together, and I always left with new insights and a clear plan in mind.

Overall, this journey has been a steep learning curve but an immensely rewarding one. I am grateful for the opportunity to work with such a supportive team, and I know that the skills and lessons I’ve learned will continue to guide me in my future research endeavors.

MiWord of the Day Is… Volume Rendering!

Volumetric rendering stands at the forefront of visual simulation technology. It intricately models how light interacts with myriad tiny particles to produce stunningly realistic visual effects such as smoke, fog, fire, and other atmospheric phenomena. This technique diverges significantly from traditional rendering methods that predominantly utilize geometric shapes (such as polygons in 3D models). Instead, volumetric rendering approaches these phenomena as if they are composed of an immense number of particles. Each particle within this cloud-like structure has the capability to absorb, scatter, and emit light, contributing to the overall visual realism of the scene. 

This is not solely useful for generating lifelike visual effects in movies and video games; it also serves an essential function in various scientific domains. Volumetric rendering enables the visualization of intricate three-dimensional data crucial for applications such as medical imaging, where it helps in the detailed analysis of body scans, and in fluid dynamics simulations, where it assists in studying the behavior of gases and liquids in motion. This technology, thus, bridges the gap between digital imagery and realistic visual representation, enhancing both our understanding and our ability to depict complex phenomena in a more intuitive and visually engaging manner. 

How does this work? 

Let’s start by talking about direct volume rendering. Instead of trying to create a surface for every object, this technique directly translates data (like a 3D array of samples, representing our volumetric space) into images. Each point in the volume, or voxel, contains data that dictates how it should appear based on how it interacts with light.

For example, when visualizing a CT scan, certain data points might represent bone, while others might signify soft tissue. By applying a transfer function—a kind of filter—different values are mapped to specific colors and opacities. This way, bones might be made to appear white and opaque, while softer tissues might be semi-transparent. 
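
As a sketch, a transfer function can be a simple piecewise lookup on the raw value (the thresholds and colors below are invented for illustration, not calibrated Hounsfield ranges):

```python
def transfer_function(value):
    """Map a raw CT sample to (r, g, b, opacity) with an invented piecewise table."""
    if value < 100:        # air / background: fully transparent
        return (0.0, 0.0, 0.0, 0.0)
    elif value < 300:      # soft tissue: reddish and semi-transparent
        return (0.8, 0.4, 0.3, 0.2)
    else:                  # bone: white and nearly opaque
        return (1.0, 1.0, 1.0, 0.9)

print(transfer_function(50))   # background disappears entirely
print(transfer_function(500))  # bone renders white and solid
```

Changing only this table, not the data, is how the same scan can be re-rendered to emphasize bone one moment and soft tissue the next.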

The real trick lies in the sampling process. The renderer calculates how light accumulates along lines of sight through the volume, adding up the contributions of each voxel along the way. It’s a complex ballet of light and matter, with the final image emerging from the cumulative effect of thousands, if not millions, of tiny interactions. 

Let us make this a bit more concrete. First, we have transfer functions: a transfer function maps raw data values to visual properties like color and opacity. Let us represent the color assigned to a voxel value v as C(v) and the opacity as α(v). For each pixel in the final image, a ray is cast through the data volume from the viewer’s perspective, following the ray equation

P(t) = P₀ + t·d

where P(t) is a point along the ray at parameter t, P₀ is the ray’s origin, and d is the normalized direction vector of the ray. As the ray passes through the volume, the renderer calculates the accumulated color and opacity along the ray. This is often done using compositing, where the color and opacity from each sampled voxel are accumulated to form the final pixel color.
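
A minimal sketch of that accumulation, using front-to-back alpha compositing along one ray (the transfer function and sample values are invented for illustration):

```python
def tissue_or_bone(v):
    """Invented transfer function: background, soft tissue, bone."""
    if v < 100:
        return (0.0, 0.0, 0.0, 0.0)
    if v < 300:
        return (0.8, 0.4, 0.3, 0.2)
    return (1.0, 1.0, 1.0, 0.9)

def composite_ray(samples, transfer):
    """Front-to-back alpha compositing of the samples along one ray."""
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for v in samples:
        r, g, b, a = transfer(v)
        weight = (1.0 - alpha) * a   # only the transparency left in front counts
        color = [c + weight * ch for c, ch in zip(color, (r, g, b))]
        alpha += weight
        if alpha >= 0.999:           # early ray termination: nothing behind shows
            break
    return color, alpha

# One ray crossing air, then soft tissue, then bone:
color, alpha = composite_ray([0, 150, 500], tissue_or_bone)
print(color, alpha)
```

The bone sample arrives last yet dominates the pixel, because the semi-transparent tissue in front of it only absorbed 20% of the ray; run this over every pixel and you have a direct volume renderer in miniature.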

You’ve probably used Volumetric Rendering

Volumetric rendering transforms CT and MRI scans into detailed 3D models, enabling doctors to examine the anatomy and function of organs non-invasively; most modern CT viewers rely on it. Volumetric rendering is also key to creating realistic simulations and environments. In many AR applications, it is used under the hood to overlay interactive, three-dimensional images on the user’s view of the real world, such as in educational tools that project anatomical models for medical students.

Now for the fun part: using volume rendering in a sentence by the end of the day:

Serious: The breakthrough in volumetric rendering technology has enabled scientists to create highly detailed 3D models of the human brain. 

Less Serious: I tried to use volumetric rendering to visualize my Netflix binge-watching habits, but all I got was a 3D model of a couch with a never-ending stream of pizza and snacks orbiting around it. 

…I’ll see you in the blogosphere. 

MiWord of the Day is… KL Divergence!

You might be thinking, “KL Divergence? Sounds exotic. Is it something to do with the Malaysian capital (Kuala Lumpur) or a measurement (kiloliter)?” Nope, and nope again! It stands for Kullback-Leibler Divergence, a fancy name for a metric to compare two probability distributions.

But why not just compare their means? After all, who needs these hard-to-pronounce names? Kullback… What was it again? That’s a good point! Here’s the catch: two distributions can have the same mean but look completely different. Imagine two Gaussian distributions, both centered at zero, but one is wide and flat, while the other is narrow and tall. Clearly, not similar!

So, maybe comparing the mean and variance would work? Excellent thinking! But what if the distributions aren’t both Gaussian? For example, a wide and flat Gaussian and a uniform distribution (totally flat) might look similar visually, but the uniform distribution is not parametrized by a mean or variance. So, what do we compare?


Enter KL Divergence!

KL Divergence returns a single number that tells us how similar two distributions are, regardless of their types. The smaller the number, the more similar the distributions. But how do we calculate it? To compare a distribution q against a distribution p, the formula is (don’t worry, you don’t have to memorize it!):

KL(q || p) = Σₓ q(x) log( q(x) / p(x) )

Notice, if the distribution q has probability mass where p doesn’t, KL(q || p) will be large. Good, that’s what we want! But if q has little mass where p has a lot, KL(q || p) will be small. Wait, that’s not what we want! No, it’s not, but luckily KL Divergence is asymmetric! KL(q || p) returns a different value than KL(p || q), so we can compute both! Why are they different? I’ll leave that up to you to figure out!

KL Divergence in Action
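
A few lines of code make both the definition and the asymmetry concrete (the distributions are arbitrary toy examples):

```python
import math

def kl(a, b):
    """KL(a || b) for two discrete distributions over the same outcomes."""
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

print(kl(p, q), kl(q, p))  # two different numbers: the directions disagree
print(kl(p, p))            # identical distributions give exactly 0.0
```

Swapping the arguments changes the answer, which is exactly why the post above suggests computing both directions.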

Now, the fun part: using KL Divergence in a sentence!

Serious: Professor, can we approximate one distribution with another by minimizing the KL Divergence between them? That’s a great question! You’ve just stumbled on the idea behind Variational Inference.

Less Serious: Ladies and gentlemen, the KL Divergence between London and Kuala Lumpur is large, and so our flight time today will be 7 hours and 30 minutes. Please remember to stow your hand luggage in the overhead bins above you, fold your tray tables, and fasten your seatbelts.

See you in the blogosphere,
Benedek Balla

Mason Hu’s ROP Journey

Hey! I am Mason Hu, a Data Science Specialist and Math Applications in Stats/Probabilities Specialist who just finished my second year. This summer’s ROP in the MiDATA lab was my first formal venture into the world of research, and an enlightening journey for me. Beyond gaining insight into the intricate technicalities of machine learning and medical imaging, I gleaned foundational lessons that shaped my understanding of the research process itself. My experience can be encapsulated in the following three points:

Research is a journey that begins with a wide scope and gradually narrows down to a focused point. When I was writing my project proposal, I had tons of ideas and planned to test multiple hypotheses in a row. Specifically, I envisioned myself investigating four different attention mechanisms of UNet and assessing all the possible combinations of them, which was already discouraged by Prof. Tyrrell in the first meeting. My aspirations proved to be overambitious, as the dynamic nature of research led me to focus on some unexpected yet incredible discoveries. One example of this would be my paradoxical discovery that attention maps in UNets with residual blocks have almost completely opposite weights to those without. Hence, for a long time, I delved into the gradient flows in residual blocks and tried to explain the phenomenon. Even when time is limited and not all ambitious goals can be reached, the pursuit of just one particular aspect can lead to spectacular insights.

Sometimes plotting the weights and visualizing them gives me the best sparks and intuitions, and this is not restricted to attention maps. The practice of printing out important statistics and milestones while training a model usually pays off. I once printed out each and every segmentation IoU in a validation data loader, and it surprised me that some of them were really close to zero. I tried to explain the anomaly as model inefficacy, but it just made no sense. Through an intensive debugging session, I came to realize it was actually a PyTorch bug specific to batch normalization when the batch size is one. As I went deeper into the research, I gained a better and better understanding of the technical aspects of machine learning and a clearer sense of my research objectives and purpose.
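
As a tiny illustration of that habit, here is a sketch that prints the per-sample IoU rather than only the average (the masks are flattened 0/1 lists invented for brevity):

```python
def iou(pred, target):
    """Intersection over union of two binary masks (flattened to 0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

preds   = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 1, 0]]
targets = [[1, 1, 0, 0], [1, 1, 1, 0], [1, 0, 1, 0]]
# Printing every sample, not just the mean, is what exposes the outlier:
for i, (p, t) in enumerate(zip(preds, targets)):
    print(f"sample {i}: IoU = {iou(p, t):.2f}")
```

The mean over these three samples looks respectable, but the print loop immediately surfaces the near-zero case, the kind of anomaly worth chasing into the framework internals.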

Making models reproducible is a really hard task, especially when configurations are complicated. In training a machine learning model, especially CNNs, we usually have a dozen tunable hyperparameters, sometimes more. The technicality of keeping track of them and changing them is already annoying, let alone reproducing them. Moreover, changing an implementation to an equivalent form might not always produce completely equivalent results. Two seemingly equivalent implementations of a function might have different implicit triggers of functionalities that are hooked to one but not the other. This can be especially pronounced in optimized libraries like PyTorch, where subtle differences in implementation can lead to significantly divergent outcomes. The complexity of research underscores the importance of meticulous tracking and understanding of every aspect of the model, affirming that reproducibility is a nuanced and demanding facet of machine learning research.

Reflecting on this summer’s research, I am struck by the depth and breadth of the learning that unfolded. I faced a delicate balance between pursuing big ideas and focusing on careful investigation, always keeping an eye on the small details that could lead to surprising insights. Most importantly, thanks to Prof. Tyrrell, Atsuhiro, Mauro, and Rosa for all the feedback and guidance. Together, they formed a comprehensive research experience for me. As I look to the future, I know that these lessons will continue to shape my thinking, guiding my ongoing work and keeping my curiosity alive.

MiWORD of the Day is… Residual!

Have you ever tried to assemble a Lego set and ended up with mysterious extra pieces? Or perhaps you have cleaned up after a big party and found some confetti hiding in the corners days later? Welcome to the world of “residuals”!

Residuals pop up everywhere. It’s an everyday term, but it’s fancier than just referring to the leftovers of a meal: it’s also used in regression models to describe the difference between observed and predicted values, and in finance to talk about what’s left of an asset’s value. However, none of that compares to the role residuals play in machine learning, and particularly in training deep neural networks.

When you learn an approximation of a function from an input space to an output space using backpropagation, the weights are updated based on the learning rate and on gradients calculated through the chain rule. As a neural network gets deeper, you have to multiply small values—usually much smaller than 1—together many times to pass a gradient back to the earliest layers, making the network excessively hard to optimize. This phenomenon, prevalent in deep learning, is called the vanishing gradient problem.
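
Two lines of code show how quickly that multiplication bites: if every layer contributes a local derivative of at most 0.25 (the maximum slope of a sigmoid), the gradient surviving to the first layer shrinks geometrically with depth:

```python
# The gradient reaching layer 1 of an n-layer chain is (roughly) a product
# of n local derivatives; with sigmoids each factor is at most 0.25.
local_grad = 0.25
for depth in (5, 20, 50):
    print(depth, local_grad ** depth)  # shrinks geometrically with depth
```

At 50 layers the surviving gradient is astronomically small, so the earliest layers effectively stop learning, which is the problem residual connections were introduced to fix.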

However, notice that the deep layers of a neural network are usually mappings close to the identity. This is exactly where residual connections work their magic! Suppose the true mapping from input to output is h(x), and let the forward pass of the block be f(x) + x. Then the mapping the layers actually have to learn is f(x) = h(x) - x, which is close to a zero function. This makes f(x) far easier to learn under the vanishing gradient problem, since a near-zero function demands much less sensitivity to each parameter than the identity function does.
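
A toy NumPy sketch makes the point visible: with the near-zero weights typical at initialization, a plain block crushes its input, while a residual block still passes it through almost unchanged (the sizes and weight scales are made up for illustration):

```python
import numpy as np

def plain_block(x, w1, w2):
    return w2 @ np.maximum(0.0, w1 @ x)          # two layers, no shortcut

def residual_block(x, w1, w2):
    return w2 @ np.maximum(0.0, w1 @ x) + x      # same layers, plus the identity

rng = np.random.default_rng(0)
x = rng.normal(size=8)
# Near-zero weights, as at initialization:
w1 = rng.normal(size=(8, 8)) * 1e-3
w2 = rng.normal(size=(8, 8)) * 1e-3

print(np.linalg.norm(plain_block(x, w1, w2)))         # tiny: the signal is crushed
print(np.linalg.norm(residual_block(x, w1, w2) - x))  # tiny: output stays close to x
```

The residual block starts out as (almost) the identity for free, so its layers only ever need to learn the small correction f(x), and gradients can flow back through the shortcut unattenuated.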

Now before we dive too deep into the wizardry of residuals, should we use residual in a sentence?

Serious: Neuroscientists wanted to explore whether CNNs perform similarly to the human brain in visual tasks, and to this end, they simulated grasp planning using a computational model called the generative residual convolutional neural network.

Less serious: Mom: “What happened?”
Me: “Sorry Mom, but after my attempt to bake chocolate cookies, the residuals were a smoke-filled kitchen and a cookie-shaped piece of charcoal that even the dog wouldn’t eat.”

See you in the blogosphere,
Mason Hu