MiWord of the Day Is… Fourier Transform!

Ok, a what Transform now??

In the early 1800s, Jean-Baptiste Joseph Fourier, a French mathematician and physicist, introduced the transform in his study of heat transfer. The idea seemed preposterous to many mathematicians at the time, but it has now become an important cornerstone in mathematics.

So, what exactly is the Fourier Transform? The Fourier Transform is a mathematical transform that decomposes a function into its sine and cosine components. It decomposes a function depending on space or time into a function depending on spatial or temporal frequency.

Before diving into the mathematical intricacies of the Fourier Transform, it is important to understand the intuition and the key idea behind it. The main idea of the Fourier Transform can be explained simply using the metaphor of creating a milkshake.

Imagine you have a milkshake. It is hard to look at a milkshake and understand it directly; questions such as “What gives this shake its nutty flavour?” or “What is the sugar content of this shake?” are difficult to answer when we are simply handed the milkshake. It is much easier to answer them by looking at the recipe and the individual ingredients that make up the shake. So, how exactly does the Fourier Transform fit in here? Given a milkshake, the Fourier Transform allows us to recover its recipe and determine how it was created; it presents the individual ingredients and the proportions in which they were combined to make the shake. This raises two questions: how does the Fourier Transform determine the milkshake “recipe”, and why would we even use this transform to get the “recipe”? To answer the first, we determine the recipe of the milkshake by running it through filters that extract each individual ingredient making up the shake. The reason we use the Fourier Transform to get the “recipe” is that recipes are much easier to analyze, compare, and modify than the actual milkshake itself; we can create new milkshakes by analyzing and modifying the recipe of an existing one. Finally, after deconstructing the milkshake into its recipe and ingredients and analyzing them, we can simply blend the ingredients back together to get the milkshake.

Extending this metaphor to signals, the Fourier Transform essentially takes a signal and finds the recipe that made it. It provides a specific viewpoint: “What if any signal could be represented as the sum of simple sine waves?”.

By providing a method to decompose a function into its sine and cosine components, we can analyze the function more easily and create modifications as needed for the task at hand.

A common application of the Fourier Transform is in sound editing. If sound waves can be separated into their “ingredients” (i.e., the bass and treble frequencies), we can modify the sound depending on our requirements. We can boost the frequencies we care about while hiding the frequencies that cause disturbances in the original sound. Similarly, there are many other applications of the Fourier Transform, such as image compression, communication, and image restoration.
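To make this concrete, here is a minimal sketch (using NumPy, with a made-up 440 Hz tone buried in noise standing in for the “ingredient” we care about) of how the Fourier Transform exposes a signal’s recipe and lets us suppress unwanted frequencies:

```python
import numpy as np

# Assumed toy signal: a 440 Hz tone (the "ingredient" we care about) plus noise.
fs = 8000                                    # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)              # one second of samples
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.random.randn(t.size)

# Fourier transform: decompose the signal into its frequency components.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# "Boost what we care about, hide the disturbances":
# keep only a band around 440 Hz and zero out everything else.
mask = (freqs > 400) & (freqs < 480)
filtered_spectrum = np.where(mask, spectrum, 0)

# Inverse transform: blend the remaining ingredients back into a signal.
clean_signal = np.fft.irfft(filtered_spectrum, n=t.size)
```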

This is incredible! An idea that the mathematics community was once skeptical of now underpins a variety of real-world applications.

Now, for the fun part, using Fourier Transform in a sentence by the end of the day:

Example 1:

Koby: “This 1000-piece puzzle is insanely difficult. How are we ever going to end up with the final puzzle picture?”

Eng: “Don’t worry! We can think of the puzzle pieces as being created by taking the ‘Fourier transform’ of the puzzle picture. All we have to do now is take the ‘inverse Fourier Transform’ and then we should be done!”

Koby: “Now when you put it that way…. Let’s do it!”

Example 2: 

Grace: “Hey Rohan! What’s the difference between a first-year and a fourth-year computer science student?”

Rohan: “… what?”

Grace: “A Fouri-y-e-a-r Transform”

Rohan: “…. (╯°□°)╯︵ ┻━┻ ”

I’ll see you in the blogosphere…

Parinita Edke

The MiDATA Word of the Day is…”clyster”

Holy mother of pearl! Do you remember when the first Pokémon games came out on the Game Boy? Never heard of Pokémon? Get up to speed by watching this short video. Or even better! Try out one of the games in the series, and let me know how that goes!

The name of the Pokémon in this picture is Cloyster. You may remember it from Pokémon Red or Blue. But! Cloyster, in fact, has nothing to do with clysters.

In olden days, clyster meant a bunch of persons, animals or things gathered in a close body. Now, it is better known as a cluster.

You yourself must identify with at least one group of people. The things that make you human, your roles, qualities, and actions, make you unique. But at the same time, you fall into a group of others with the same characteristics.

You yourself fall into multiple groups (or clusters). This could be your friend circle or perhaps people you connect with on a particular topic. At the end of the day, you belong to these groups. But is there a way we can determine that you, in fact, belong?

Take for example Jack and Rose from the Titanic. Did Jack and Rose belong together?

If you take a look at the plot to the right, Jack and Rose clearly fall into two separate groups (clusters) of people. Thus, they do not belong together. Case closed!

But perhaps it is a matter of perspective? Let’s take a step back…

Woah! Now you could say that they’re close enough that they might as well be together! Compared to the largest group, they are more similar than they are different. And so, they should be together!

One last time: we may have been looking at this completely wrong! From the very beginning, what are we measuring on the x-axis and on the y-axis of our graph?

Say it was muscle mass and height. That alone shouldn’t tell us if Rose and Jack belong together! And yet, that is exactly what we could have done. But if not those, then what..?
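Whatever features we do settle on, the grouping itself is something an algorithm can compute. Here is a minimal sketch (scikit-learn, with made-up numbers standing in for Jack, Rose, and the other passengers) showing how a clustering algorithm assigns people to groups, and why the answer depends entirely on which features we put on the axes:

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up passengers described by two features (say, muscle mass and height).
# The first two rows stand in for Jack and Rose.
passengers = np.array([
    [80, 180], [55, 165],             # "Jack", "Rose"
    [82, 178], [78, 183], [85, 181],  # one tight group
    [52, 160], [57, 168], [54, 163],  # another tight group
])

# Ask for two clusters and see who ends up together.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(passengers)
print(kmeans.labels_)  # e.g. [0 1 0 0 0 1 1 1]: Jack and Rose land in different clusters

# Measure different features (different axes) and the grouping can change entirely.
```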

Now for the fun part (see the rules here), using clyster in a sentence by the end of the day:

Serious: Did you see the huge star clysters last night? I heard each one contained anywhere from 10,000 to several million stars…

Less serious: *At a seafood restaurant by the beach* Excuse me, waiter! I’d like one of your freshest clysters, please. – “I’m sorry. We’re all out!”

…I’ll see you in the blogosphere.

Stanley Hua

Stanley Hua in ROP299: Joining the Tyrrell Lab during a Pandemic

My name is Stanley Hua, and I’ve just finished my 2nd year in the bioinformatics program. I have also just wrapped up my ROP299 with Professor Pascal. Though I have yet to see his face outside of my monitor screen, I cannot begin to express how grateful I am for the time I’ve been spending at the lab. I remember very clearly the first question he asked me during my interview: “Why should I even listen to you?” Frankly, I had no good answer, and I thought that the meeting didn’t go as well as I’d hoped. Nevertheless, he gave me a chance, and everything began from there.

Initially, I got involved with quality assessment of Multiple Sclerosis and Vasculitis 3D MRI images along with Jason and Amar. Here, I was introduced to the many things Dmitrii can complain about when it comes to taking brain MRI images. Things such as scanner bias, artifacts, types of imaging modalities and prevalence of disease play a role in how we can leverage these medical images in training predictive models.

My actual ROP, however, revolved around a niche topic in Mauro and Amar’s project. Their project sought to understand the effect of dataset heterogeneity in training Convolutional Neural Networks (CNNs) by cluster analysis of CNN-extracted image features. Upon extraction of image features using a trained CNN, we end up with high-dimensional vectors representing each image. As a preprocessing step, the dimensionality of the features is reduced by transformation via Principal Component Analysis, then selecting a number of principal components (PCs) to keep (e.g. 10 PCs). The question must then be asked: how many principal components should we use in their methodology? Though it’s a very simple question, I took way too many detours to answer it. I looked at the difference between standardization vs. no standardization before PCA, nonlinear dimensionality reduction techniques (e.g. autoencoders), and comparisons of neural network image representations (via SVCCA), among other things. Finally, I proposed an equally simple method for determining the number of PCs to use in this context: the minimum number of PCs that gives the most frequent resulting value (from the original methodology).
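As a rough illustration (not the method proposed above, just a common scikit-learn pattern with an assumed array of CNN-extracted features), this is roughly what reducing high-dimensional image features to a handful of principal components looks like:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Assumed input: one 512-dimensional CNN feature vector per image.
features = np.random.randn(200, 512)

# Optional standardization before PCA (one of the choices compared above).
features_std = StandardScaler().fit_transform(features)

# Keep a fixed number of principal components, e.g. 10 PCs...
pca_10 = PCA(n_components=10).fit(features_std)
reduced = pca_10.transform(features_std)           # shape (200, 10)

# ...or inspect how much variance each extra PC actually buys you.
pca_full = PCA().fit(features_std)
cumulative = np.cumsum(pca_full.explained_variance_ratio_)
n_for_90 = int(np.argmax(cumulative >= 0.90)) + 1  # smallest k explaining 90% of variance
print(n_for_90)
```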

Regardless of the difficulty of the question I sought to answer, I learned more about practices in research, and I even learned about how research and industry intermingle. I only have Professor Pascal to thank for always explaining things in a way that a dummy such as me would understand. Moreover, Professor Pascal always focused on impact; is what you’re doing meaningful and what are its applications?

 I believe that the time I spent with the lab has been worthwhile. It was also here that I discovered that my passion to pursue data science trumps my passion to pursue medical school (big thanks to Jason, Indranil and Amar for breaking my dreams). Currently, I look towards a future, where I can drive impact with data; maybe even in the field of personalized medicine or computational biology. Whoever is reading this, feel free to reach out! Hopefully, I’ll be the next Elon Musk by then…

Transiently signing out,

Stanley Bryan Z. Hua

Jessica Xu’s Journey in ROP299

Hello everyone! My name is Jessica Xu, and I’ve just completed my second year in Biochemistry and Statistics at the University of Toronto. This past school year, I’ve had the wonderful opportunity to do a ROP299 project with Dr. Pascal Tyrrell and I’d like to share my experience with you all!

A bit about myself first: in high school, I was always interested in life sciences. My favourite courses were biology and chemistry, and I was certain that I would go to medical school and become a doctor. But when I took my first stats course in first year, I really enjoyed it and I started to become interested in the role of statistics in life sciences. Thus, at the end of my first year, while I was looking through the various ROP courses, I felt that Dr. Tyrrell’s lab was the perfect opportunity to explore my budding interest in this area. I was very fortunate to have an interview with Dr. Tyrrell, and even more fortunate to be offered a position in his lab!

Though it may be obvious, doing a research project when you have no research experience is very challenging! Coming into this lab having taken a statistics course and a few computer science courses in first year, I felt I had a pretty good amount of background knowledge. But as I joined my first lab meeting, I realized I couldn’t have been more wrong! Almost every other word being said was a word I’d never heard before! And so, I realized that there was a lot I needed to learn before I could even begin my project.

I then began the journey of my project, which was looking at how two dimension reduction techniques, LASSO and SES, performed in an ill-posed problem. It was definitely no easy task! While I had learned a little bit about dimension reduction in my statistics class, I still had a lot to learn about the specific techniques, their applications in medical imaging, and ill-posed problems. I was also very inexperienced in coding, and had to learn a lot of R on my own and become familiar with the different packages that I would have to use. It was a very tumultuous journey, and I spent a lot of time just trying to get my code to work. Luckily, with help from Amar, I was able to figure out some of the errors and issues I was facing with the code.

I learned a lot about statistics and dimension reduction in this ROP, more than I have learned in any other course! But most importantly, I learned a lot about the scientific process and the experience of writing a research paper. If I can provide any advice based on my experience, it’s that sometimes it’s okay to feel lost! You’re not expected to have devised a perfect plan of execution for your research, especially when it’s your first time! There will be times when you’ll stray off course (as I often did), but the most valuable lesson that I learned in this ROP is how to get back on track. Sometimes you just need to take a step back, go back to the beginning, and think about the purpose of your project and what it is you’re trying to tell people. But it’s not always easy to realize this. Luckily, Dr. Tyrrell has always been there to guide us throughout our projects and to make sure we stay on track by reminding us of the goal of our research. I’m incredibly grateful for all the support, guidance, and time that Dr. Tyrrell has given this past year. It has been an absolute pleasure to work in this lab.

Now that I’ve taken my first step into the world of research, with all the new skills and lessons I’ve learned in my ROP, I look forward to all the opportunities and the journey ahead!

Jessica Xu

Today’s MiWORD of the day is… Lasso!

Wait… Lasso? Isn’t a lasso that lariat or loop-like rope that cowboys use? Or perhaps you may be thinking about that tool in Photoshop that’s used for selecting free-form segments!

Well… technically neither is wrong! However, in statistics and machine learning, Lasso stands for something completely different: least absolute shrinkage and selection operator. This term was coined in 1996 by Dr. Robert Tibshirani (who was a UofT professor at the time!).

Okay… that’s cool and all, but what the heck does that actually mean? And what does it do?

Lasso is a type of regression analysis method, meaning it tries to estimate the relationship between predictor variables and outcomes. It’s typically used to perform feature selection or regularization.

Regularization is a way of reducing overfitting of a model, i.e., it keeps the model from fitting the “noise” and randomness in the data. On the other hand, feature selection is a form of dimension reduction. Out of all the predictor variables in a dataset, it will select the few that contribute the most to the outcome variable to include in a predictive model.

Lasso works by applying a fixed upper bound to the sum of the absolute values of the coefficients of the predictors in a model. To ensure that this sum stays within the upper bound, the algorithm shrinks some of the coefficients, in particular those of predictors that are less important to the outcome. The predictors whose coefficients are shrunk to zero are not included at all in the final predictive model.
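Here is a minimal sketch (scikit-learn, with a made-up dataset) of Lasso doing exactly this: shrinking unimportant coefficients, some of them all the way to zero:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Made-up data: 100 samples, 10 predictors, but only the first 2 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

# alpha controls how tight the bound on the sum of absolute coefficients is.
model = Lasso(alpha=0.1).fit(X, y)

print(model.coef_)
# Coefficients of the 8 irrelevant predictors are shrunk to (or near) zero,
# so they effectively drop out of the final predictive model.
selected = np.flatnonzero(model.coef_ != 0)
print(selected)
```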

Lasso has applications in a variety of different fields! It’s used in finance, economics, physics, mathematics, and if you haven’t guessed already… medical imaging! As a state-of-the-art feature selection technique, Lasso is used a lot in turning large radiomic datasets into easily interpretable predictive models that help researchers study, treat, and diagnose diseases.

Now onto the fun part, using Lasso in a sentence by the end of the day! (see rules here)

Serious: This predictive model I got using Lasso has amazing accuracy for detecting the presence of a tumour!

Less serious: I went to my professor’s office hours for some help on how to use Lasso, but out of nowhere he pulled out a rope!

See you in the blogosphere!

Jessica Xu

Jacky Wang’s ROP399 Journey

My name is Jacky Wang, and I am just finishing my third year at the University of Toronto, pursuing a computer science specialist. Looking back on this challenging but incredible year, I was honoured to have the opportunity to work inside Dr. Tyrrell’s lab as part of the ROP399 course. I would love to share my experience studying and working inside the lab.

Looking back, I realize one of the most challenging tasks is getting on board. I felt a little lost at first when surrounded by loads of new information and technologies that I had little experience with before. Though I felt excited by the collision of ideas during each meeting, having too many choices could sometimes be overwhelming. Luckily, after doing more literature review and with the help of the brilliant researchers in the lab (a big thank you to Mauro, Dimitri, and of course, Dr. Tyrrell), I started to get a better view of the trajectory of each potential project and to determine what I wanted to get out of this experience. I did not choose the machine learning projects, though they looked shiny and promising as always (as a matter of fact, they turned out to be successful indeed). Instead, I was leaning more towards studying sample size determination methodology, especially the concept of ill-posed problems, which often occur when researchers draw conclusions from models trained on limited samples. It had always been a mystery to me why I would get different and even contradictory results when replicating someone else’s work on smaller sample sizes. From there, I settled on the research topic and moved on to the implementation details.

This year the ROP students came from statistics, computer science, biology, and other programs. I am grateful that Dr. Tyrrell is willing to give anyone who has the determination to study in his lab a chance, even though they may have little research experience and come from various backgrounds. As someone who studies computer science with a limited statistics background, the real challenge lay in understanding all the statistical concepts and designing the experiments. We decided to apply various dimension reduction techniques to study the effect of different sample sizes with many features. I designed experiments around the principal component analysis (PCA) technique, while another ROP student, Jessica, explored the LASSO and SES models in the meantime. It was for sure a long and memorable experience, with much debugging while implementing the code from scratch. But nothing was more rewarding than seeing the code complete successfully and the promising results come in.

I feel lucky and grateful that Dr. Tyrrell helped me complete my first research project. He broke down the long and challenging research task into clear and achievable subgoals within our reach. After completing each subgoal, I could hardly believe how close it brought us to the finish line. Taking an ROP course felt so different from attending regular lessons. For most university courses, the topics are already determined and the materials are almost spoon-fed to you. But sometimes I would lose the excitement of learning new topics, as I was driven not by curiosity or application needs but by the pressure of being tested. Taking the ROP course, however, gave me almost complete control of my study. For the ROP, I was the one who decided what topics to explore and how to design the experiments. I could immediately test my understanding and put everything I learned into real applications.

I am so proud of all the skills that I picked up in the online lab during this unique and special ROP experience. I would like to thank Dr. Tyrrell for giving me this incredible study experience in his lab. There are so many resources out there to draw on and so many excellent researchers to seek help from. I would also like to thank all members of the lab for patiently walking me through each challenge with their brilliant insights.

Jacky Wang

MiWord of the Day Is… dimensionality reduction!

Guess what?

You are looking at a real person, not a painting! This is one of the great works by the talented artist Alexa Meade, who paints on 3D objects to create the illusion of a 2D painting. Similarly, in the world of statistics and machine learning, dimensionality reduction means what it sounds like: reducing the problem to a lower dimension. Only this time, it is not an illusion.

Imagine a 1x1x1 block of data living inside a 2x2x2 feature space. If I ask you to calculate the data density, you will get ½ in 1D, ¼ in 2D, and ⅛ in 3D. This simple example illustrates that data points become sparser in higher-dimensional feature spaces. To address this problem, we need dimensionality reduction tools to eliminate the boring dimensions (dimensions that do not give much information about the characteristics of the data).

There are mainly two approaches when it comes to dimension reduction. One is to select a subset of features (feature selection), the other is to construct some new features to describe the data in fewer dimensions (feature extraction).

Let us consider an example to illustrate the difference. Suppose you are asked to come up with features to predict the university acceptance rate of your local high school.

You may discard the “grade in middle school” for its many missing values; discard “date of birth” and “student name” as they play little role in university admissions; discard “weight > 50kg” as everyone has the same value; and discard “grade in GPA” as it can be calculated from other features. If you have been through a similar process, congratulations! You just performed dimension reduction by feature selection.

What you have done is remove the features with many missing values, the least correlated features, the features with low variance, and one of each pair of highly correlated features. The idea behind feature selection is that the data might contain redundant or irrelevant features that can be removed without losing too much information.

Now, instead of selecting a subset of features, you might try to construct some new features from the old ones. For example, you might create a new feature named “school grade” based on the full history of the academic features. If you have been through a thought process like this, you just performed dimensionality reduction by feature extraction.

If you would like to do a linear combination, principal component analysis (PCA) is the tool for you. In PCA, variables are linearly combined into a new set of variables, known as the principal components. One way to do so is to take a weighted linear combination of “grade in school”, “grade in middle school” and “recommendation letter” …
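As a small sketch (scikit-learn, with an invented gradebook-style table; the feature names are just placeholders), each principal component is literally a set of weights over the original features:

```python
import numpy as np
from sklearn.decomposition import PCA

# Invented student records: three academic features per student.
feature_names = ["grade in school", "grade in middle school", "recommendation letter score"]
X = np.array([
    [85, 80, 4.0],
    [92, 88, 4.5],
    [70, 75, 3.0],
    [60, 68, 2.5],
    [88, 90, 4.2],
])

pca = PCA(n_components=1).fit(X)

# The first principal component is a weighted linear combination of the
# original features: a new "school grade"-like summary feature.
for name, weight in zip(feature_names, pca.components_[0]):
    print(f"{weight:+.2f} * {name}")

# Each student's value on this new feature:
print(pca.transform(X).ravel())
```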

Now let us use “dimensionality reduction” in a sentence.

Serious: There are too many features in this dataset, and the testing accuracy seems too low. Let us apply dimensionality reduction techniques to reduce the overfitting of our model…

Less serious:

Mom: “How was your trip to Tokyo?”

Me: “Great! Let me just send you a dimensionality reduction version of Tokyo.”

Mom: “A what Tokyo?”

Me: “Well, I mean … photos of Tokyo.”

I’ll see you in the blogosphere…

Jacky Wang

Yan Qing Lee’s ROP299 Journey

Hi! I’m Yan Qing Lee, an incoming 3rd-year Computer Science and Psychology double major undergraduate student. This past summer, I was given the opportunity to embark on my first research project in the field of artificial intelligence, and I’m excited to share my experience.

My research topic investigated whether individuals who receive a false-positive mammogram result from an AI model have a higher risk of receiving a breast cancer diagnosis later on. Past studies have found that receiving a false-positive mammogram result from radiologists is associated with a higher risk of future breast cancer, but no studies have yet investigated whether this holds true for AI breast cancer detection models. In this project, I used a longitudinal dataset of breast cancer mammograms and ran a trained AI breast cancer classifier, made up of an ensemble of four ConvNeXt-Small models, to obtain false-positive and true-negative results. Cox proportional hazards models were then used to estimate the hazard ratio associated with receiving a false-positive result, both from the AI model and from radiologists.
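For flavour, here is a minimal sketch of that last step (using the lifelines package, with an invented follow-up table; this is not the project’s actual code or data) showing how a Cox proportional hazards model yields the hazard ratio associated with a false-positive result:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented follow-up data: one row per person.
#   false_positive: 1 if the model flagged a cancer-free mammogram as positive
#   time:           years of follow-up until diagnosis or censoring
#   diagnosed:      1 if breast cancer was diagnosed during follow-up
df = pd.DataFrame({
    "false_positive": [1, 0, 1, 0, 0, 1, 0, 1, 0, 0],
    "time":           [3.2, 5.0, 1.8, 5.0, 4.6, 2.5, 5.0, 4.1, 5.0, 3.9],
    "diagnosed":      [1, 0, 1, 0, 0, 0, 0, 1, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="diagnosed")
cph.print_summary()  # exp(coef) for false_positive is the estimated hazard ratio
```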

As a student who entered the Computer Science major out-of-stream, I started the ROP feeling really out of place. Although I’ve known I wanted to pursue AI, I had no real experience in either AI or medical imaging, and I wondered if I was too under-qualified for this experience. Still, I was determined to put in as many hours as I needed to succeed.

I first began by familiarizing myself with ML terms and choosing an area of interest (breast cancer mammography) in which to formulate a research question. As I’m sure other ROP students would agree, this process was extremely challenging; as the weeks passed, I found that my research questions were always either over-ambitious or not feasible. Over time, however, I realized that my difficulty with creating a research question stemmed from my lack of knowledge of exactly how ML models work and of the existing literature and gaps within the field of breast cancer mammography. As I dug deeper into the existing literature, one interesting finding regarding radiologists’ false positives caught my eye, and this finally led me to my research question.

Once I began working on my project, the many challenges of research revealed themselves to me. These included the difficulties of downloading and parsing through a large dataset, of installing packages and working around incompatible versions of libraries to set up a working environment, and, worst of all, of finding out that the AI breast cancer detection model you originally centered your project around is not as replicable as you assumed it would be. Even though I made sure to keep my research question relatively simple, the process of setting up and debugging preprocessing code, training and running an AI breast cancer classification model, and obtaining undesirable training results was nothing short of complicated. Still, with the weekly lab meetings keeping me on track, and the support of Dr. Tyrrell, Mauro and the other students in the lab, I slowly but surely overcame every obstacle, and learned immense amounts every week to successfully complete my project. Even though I had to find a new AI model to use near the end and redo my experimentation, I found that with my experience with the previous AI model, I was able to independently set up and run the new model much more efficiently than before. It was proof of how much I’d learned, and I’m glad to now be able to look back and be proud of how much I’ve accomplished in the span of a few months.

At the end of it all, I have to thank Dr. Tyrrell for fostering my passion towards AI and its applications in fields as impactful and important as breast cancer mammography. This experience only made me more excited to delve into the applications of AI in other fields in the future, and I can’t thank the MiData lab enough for this experience.

MiWord of the Day is… Region of Interest!

Look! You’ve finally made it to Canada! You gloriously take in the view of Lake Ontario when your friend beside you exclaims, “Look, they have beaver tails!” You excitedly scan the lake, asking, “Where?” 

“There!”

“Where?”

“There!”

You see no movement from the lake. It isn’t until your friend pulls you to the front of a storefront that says “BeaverTails”, with a picture of delicious pastries, that you realize they didn’t mean actual beavers’ tails. It turns out you were looking in the wrong place the whole time!

Oftentimes, it’s easy for us to quickly identify objects because we know the context of where things should be. These are the kinds of things we take for granted, until it’s time to hand the same tasks over to machines.

In medical imaging, experts label what are called Regions of Interest (ROIs), which are specific areas of a medical image that contain pathology, such as the specific area of a lesion. Having labelled ROIs is important, as it helps prevent extra time from being wasted on analyzing non-relevant areas of an image, especially since medical images contain complex structures that take time to interpret. In the area of machine learning (ML) in medical imaging, having labelled ROIs is also useful because it can help with training ML models that classify whether a medical image contains a pathology or not; with ROIs identified, images can be cropped during preprocessing so that only the relevant areas are compared, letting the model learn the differences between positive and negative images faster.
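A minimal sketch of that preprocessing step (plain NumPy/PIL, with assumed ROI coordinates in pixel units; real datasets store ROIs in many different formats, such as masks, contours, or boxes) might look like this:

```python
import numpy as np
from PIL import Image

def crop_roi(image_path, roi):
    """Crop one labelled Region of Interest out of a medical image.

    roi is an assumed (x_min, y_min, x_max, y_max) box in pixel coordinates.
    """
    image = np.array(Image.open(image_path).convert("L"))  # load as grayscale
    x_min, y_min, x_max, y_max = roi
    return image[y_min:y_max, x_min:x_max]

# Hypothetical example: crop the lesion area before feeding it to a classifier.
patch = crop_roi("mammogram_0001.png", roi=(412, 530, 668, 790))
print(patch.shape)
```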

In fact, ROIs are so important that there is an entire field of artificial intelligence dedicated to them: computer vision. The field of computer vision focuses on automating the extraction of ROIs from images or videos, which plays a critical role in the mechanization of tasks like object detection and tracking for things like self-driving cars. In object detection, for example, techniques like ROI pooling can be utilized; this is where each of multiple ROIs is pooled into a fixed-size feature map by taking maximum values within the region, making it possible to identify many objects at once – this is extremely useful, especially once you’re on the road and there are 10 other cars around you!
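For the curious, torchvision ships an ROI pooling operator; here is a tiny sketch (random tensors standing in for a real feature map and real boxes) of pooling two ROIs into same-sized features:

```python
import torch
from torchvision.ops import roi_pool

# Assumed CNN feature map: batch of 1, 16 channels, 32x32 spatial grid.
feature_map = torch.randn(1, 16, 32, 32)

# Two ROIs in (batch_index, x1, y1, x2, y2) format, in feature-map coordinates.
boxes = torch.tensor([
    [0,  2.0,  2.0, 12.0, 12.0],
    [0, 15.0, 10.0, 30.0, 28.0],
])

# Each ROI is max-pooled into a fixed 7x7 grid per channel,
# so differently sized regions become same-sized features.
pooled = roi_pool(feature_map, boxes, output_size=(7, 7))
print(pooled.shape)  # torch.Size([2, 16, 7, 7])
```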

Now, the fun part: using Region of Interest in a sentence!

Serious: The coordinates of ROIs are given for the positive mammogram images in the dataset I’m using. Maybe I could use Grad-CAM to see if the ML breast cancer classification model I’m using uses the same regions of the image to arrive at its classification decision; this way, I can see if its decision making aligns with the decision making of radiologists.

Less serious: I forced my friend to watch my favorite movie with me, but I can’t lie – I think the attractive male lead was her only region of interest!

See you in the blogosphere,

Yan Qing Lee

Yuxi Zhu’s ROP Journey

Hi, I am Yuxi Zhu, a Bioinformatics and Computational Biology specialist and Molecular Genetics major who just finished my second year. Like many others in the lab, this was my first formal research experience. Professor Tyrrell warned me from the start that I would need to be independent in this lab, but my genuine interest in ML and its applications gave me the confidence to take on the challenge. Overall, this summer’s ROP journey in the MiDATA lab was filled with both excitement and challenges.

The first challenge was finding a research question. I’m incredibly grateful to Daniel, a volunteer and former ROP student, who introduced me to the concept of “adversarial examples” and helped me formulate my research question from the start. During the first two months of the literature review, I often found myself diving too deeply into theoretical aspects that were less applicable to medical imaging, or exploring questions that, while feasible, didn’t capture my interest. Luckily, I was able to settle on understanding the differing effects of random perturbations (like random noise and loss of resolution) and non-random adversarial perturbations on the model.

As the project progressed, I encountered a series of obstacles and bugs that required constant problem-solving and debugging. For example, my initial findings showed very low performance, all under 50%. Professor Tyrrell pointed out that the accuracy of a binary classifier should never drop below 50%, as that would mean it’s performing worse than a random model. I quickly realized there were bugs in my code and implementation. Additionally, after obtaining results, I thought interpreting them would be straightforward. However, when Professor Tyrrell asked me why adversarial perturbations led to accuracies below 50% while the others didn’t, I found myself at a loss for words. In the end, with Professor Tyrrell’s guidance, I was able to interpret the results correctly and articulate them in my report.

Despite the stress I felt before presenting my findings at our weekly meetings, these sessions became invaluable learning experiences. Professor Tyrrell would scrutinize my work with questions and critiques, pushing me to think more deeply and critically about every aspect of my research. The other lab members also provided very helpful insights and shared their work. These meetings not only allowed me to understand what others were working on but also gave me the chance to get involved in or observe lively discussions that often took place. 

Looking back on the last few months, this experience has been invaluable. I am deeply thankful to Professor Tyrrell who offered me this wonderful opportunity in ML and guided me through my research project. I especially appreciate how we weren’t just taught to implement a given research project or conduct a specific experiment; we were taught how to find gaps and how to conduct research. I also want to express my gratitude to Daniel for his support and insights when I was in doubt, and to Atsuhiro for his helpful suggestions. Completing my first-ever research project was challenging yet rewarding, and I am grateful for all the guidance and help I received. I’m confident that what I have learned will stay with me in my future research and career.

Today’s MiWORD of the day is… Adversarial Example!

According to the dictionary, the term “adversarial” refers to a situation where two parties or sides oppose each other. But what about the “adversarial example”? Does it imply an example of two opposing sides? In a way, yes.

In machine learning, an example is one instance of the dataset. Adversarial examples are examples with calculated, imperceptible perturbations that trick the model into making a wrong prediction while looking the same to humans. So “adversarial”, in this case, indicates an opposition between someone (or something) and the model. Adversarial examples are intentionally crafted to trick the model by exploiting its vulnerabilities.

How does it work? There are many ways to find weak spots and generate adversarial examples, but the Fast Gradient Sign Method (FGSM) is a classic one; the goal is to make small changes to a picture such that the model outputs the wrong prediction. First, we feed the picture to the model. Assume the model outputs the correct prediction, so the loss function, which represents the difference between the prediction and the true label, will be low. Second, we compute the gradient of the loss function with respect to the input, which tells us whether we should add or subtract a small value epsilon to or from each pixel to make the loss bigger. Epsilon is typically very small, resulting in a tiny change to each pixel value. Now, we have a picture that looks the same as the original but will trick the model into the opposite prediction!
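Here is a minimal FGSM sketch in PyTorch (with placeholder `model`, `image`, and `label` tensors assumed to already exist) that follows exactly these steps:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Craft an adversarial example with the Fast Gradient Sign Method."""
    image = image.clone().detach().requires_grad_(True)

    # Step 1: feed the picture to the model and compute the loss.
    output = model(image)
    loss = F.cross_entropy(output, label)

    # Step 2: gradient of the loss with respect to each input pixel.
    loss.backward()

    # Step 3: nudge every pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values valid

# Hypothetical usage, assuming `model`, `image` (1xCxHxW), and `label` already exist:
# adv_image = fgsm_attack(model, image, label, epsilon=0.01)
# print(model(adv_image).argmax(dim=1))  # hopefully (for the attacker) the wrong class
```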

One exciting property of adversarial examples is their transferability. It is known that adversarial examples created for one model can also trick other unknown models. This might be due to inherent flaws in the pattern recognition mechanisms of all models and, sometimes, model similarities, allowing these adversarial examples to exploit common vulnerabilities and lead to incorrect predictions.

Now, use “adversarial example” in a sentence by the end of the day: 

Kinda Serious: “Oh I can’t believe my eyes. I am seeing a dog right here and the model says it’s a cupcake…So you’re saying it might be an adversarial image? What even is that? The model is just dumb.”

Less Serious: Apparently, the movie star has an adversarial relationship with the media, but which stars have a good relationship with the media nowadays?

See you in the blogosphere,

Yuxi Zhu

MiWord of the Day is… Learned Perceptual Image Patch Similarity (LPIPS)!

Imagine you’re trying to compare two images—not just any images, but complex medical images like MRIs or X-rays. You want to know how similar they are, but traditional methods like simply comparing pixel values don’t always capture the whole picture. This is where Learned Perceptual Image Patch Similarity, or LPIPS, comes into play.

Learned Perceptual Image Patch Similarity (LPIPS) is a cutting-edge metric for evaluating perceptual similarity between images. Unlike traditional methods like Structural Similarity Index (SSIM) or Peak Signal-to-Noise Ratio (PSNR), which rely on pixel-level analysis, LPIPS utilizes deep learning. It compares images by passing them through a pre-trained convolutional neural network (CNN) and analyzing the features extracted from various layers. This approach allows LPIPS to capture complex visual differences more closely aligned with human perception. It is especially useful in applications such as evaluating generative models, image restoration, and other tasks where perceptual accuracy is critical.
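If you want to try it, a reference implementation is distributed as the `lpips` Python package; a minimal sketch (random tensors standing in for two images, scaled to the [-1, 1] range the package expects) looks something like this:

```python
import torch
import lpips  # pip install lpips

# LPIPS backed by a pre-trained AlexNet feature extractor.
loss_fn = lpips.LPIPS(net="alex")

# Stand-ins for two RGB images, shape (1, 3, H, W), values scaled to [-1, 1].
img0 = torch.rand(1, 3, 256, 256) * 2 - 1
img1 = torch.rand(1, 3, 256, 256) * 2 - 1

distance = loss_fn(img0, img1)
print(distance.item())  # smaller = more perceptually similar
```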

Why is this important? In medical imaging, where subtle differences can be crucial for diagnosis, LPIPS provides a more accurate assessment of image quality, especially when images have undergone various types of degradation, such as noise, blurring, or compression.

Now, let’s use LPIPS in sentences!

Serious: When evaluating the effectiveness of a new medical imaging technique, LPIPS was used to compare the generated images to the original scans, showing that it was more sensitive to perceptual differences than traditional metrics.

Less Serious: I used LPIPS to compare my childhood photos with recent ones. According to the metric, I’ve definitely “degraded” over time!

See you in the blogosphere!

Jingwen (Lisa) Zhong

Jingwen (Lisa) Zhong’s ROP299 Journey

Hi all! My name is Jingwen (Lisa) Zhong. I’m a Data Science Specialist and Actuarial Science Major at UofT, graduating in 2026. I’m really happy and honored to have joined Prof. Tyrrell’s lab in the summer of 2024 as an ROP299 student. This was my first research project, and it has truly exercised many of my research and scientific skills, such as literature review, critical thinking, and the ability to get familiar with a brand-new field.

Coming into the lab, I had no research experience and no prior knowledge of medical imaging. As a student just finishing my second year of study, I felt curious about machine learning and artificial intelligence because these topics are so widely discussed. However, I still can’t forget how uneasy I felt during the first few weeks as I tried to think of a research question related to medical images and machine learning. I’m incredibly thankful to Prof. Tyrrell, who ‘relentlessly’ pointed out issues during each lab meeting, and to the lab volunteers, Daniel and Atsuhiro, who were always willing to help and guide me through the process. I couldn’t have gotten my project ready for implementation without their support. After a month of struggle, I finally settled on my research topic: investigating whether LPIPS is a better metric than PSNR and SSIM for assessing the similarity of medical images under various degradation conditions.

Having a research question is just the beginning; implementing it is another huge mountain to climb. I remember how excited I was when my research question was finally approved. I worked hard that week to implement almost all the code for my project. If I could go back, I would approach this differently. Instead of diving straight into coding, I would first take the time to design the entire study process—splitting the dataset, testing the code on a smaller dataset, figuring out how to use the GPU, then applying the code to the full dataset, and finally choosing the appropriate statistical analysis. I say this because I stumbled at each of these steps. After completing my code, I found that it ran so slowly that it would take several days to get results. So, I began the process of figuring out how to set up the environment to run on the lab’s GPU. This process took me almost two weeks, but with the help of other ROP students, I finally got the code running on the GPU.

Once the GPU problem was solved, my results came in much faster. However, the next obstacle was interpreting these results. As a Data Science student, it’s hard to admit, but I hadn’t yet learned ANOVA. Initially, I turned to ChatGPT for help, but the results weren’t ideal. Prof. Tyrrell suggested that I use SAS to perform ANOVA, which provided me with ideal and comprehensive results. So, I learned how to use SAS—a very powerful statistical analysis tool compared to Python.

Through this ROP experience, I learned the importance of communication and teamwork. Although we worked on different projects, the weekly lab meetings were incredibly helpful. It was a place where everyone’s intelligence came together, and I always left with new insights and a clear plan in mind.

Overall, this journey has been a steep learning curve but an immensely rewarding one. I am grateful for the opportunity to work with such a supportive team, and I know that the skills and lessons I’ve learned will continue to guide me in my future research endeavors.

AI in Drug & Biological Product Development: FDA & CTTI Workshop

AI holds great potential to transform how drugs are developed, manufactured, and utilized. As with any innovation, AI use in drug development creates new and unique challenges that require both careful management and a risk-based regulatory framework that is built on sound regulatory science approaches that support innovation. Join us as we explore guiding principles that are being applied by innovators and regulators to promote the responsible use of AI in the development of safe and effective drugs. Learn from experts about applying principles for good machine learning practices to ensure responsible use of AI in the development of drugs. Drawing on real case examples, we will discuss the rationale for particular approaches, how success was evaluated, what challenges were encountered, options for scaling and wider applicability, and considerations for moving forward.

On Tuesday August 6, 2024 from 10 AM to 5:30 PM EDT, FDA and the Clinical Trials Transformation Initiative are hosting a free hybrid public workshop on artificial intelligence (AI) in drug and biological product development.
Please join us as we explore the guiding principles being applied by innovators and regulators to ensure AI is used responsibly. AI holds great potential to transform how drugs are developed, manufactured, and used.
Participants may attend virtually or in-person in the FDA Great Room located at 10903 New Hampshire Avenue, Silver Spring, MD 20993.
Registration information can be found here.
The Small Business and Industry Assistance (SBIA) program in the Center for Drug Evaluation and Research provides guidance, education and updates for regulated industry.

 

MiWord of the Day Is… Volume Rendering!

Volumetric rendering stands at the forefront of visual simulation technology. It intricately models how light interacts with myriad tiny particles to produce stunningly realistic visual effects such as smoke, fog, fire, and other atmospheric phenomena. This technique diverges significantly from traditional rendering methods that predominantly utilize geometric shapes (such as polygons in 3D models). Instead, volumetric rendering approaches these phenomena as if they are composed of an immense number of particles. Each particle within this cloud-like structure has the capability to absorb, scatter, and emit light, contributing to the overall visual realism of the scene. 

This is not solely useful for generating lifelike visual effects in movies and video games; it also serves an essential function in various scientific domains. Volumetric rendering enables the visualization of intricate three-dimensional data crucial for applications such as medical imaging, where it helps in the detailed analysis of body scans, and in fluid dynamics simulations, where it assists in studying the behavior of gases and liquids in motion. This technology, thus, bridges the gap between digital imagery and realistic visual representation, enhancing both our understanding and our ability to depict complex phenomena in a more intuitive and visually engaging manner. 

How does this work? 

Let’s start by talking about direct volume rendering. Instead of trying to create a surface for every object, this technique directly translates data (like a 3D array of samples, representing our volumetric space) into images. Each point in the volume, or voxel, contains data that dictates how it should appear based on how it interacts with light.

For example, when visualizing a CT scan, certain data points might represent bone, while others might signify soft tissue. By applying a transfer function—a kind of filter—different values are mapped to specific colors and opacities. This way, bones might be made to appear white and opaque, while softer tissues might be semi-transparent. 

The real trick lies in the sampling process. The renderer calculates how light accumulates along lines of sight through the volume, adding up the contributions of each voxel along the way. It’s a complex ballet of light and matter, with the final image emerging from the cumulative effect of thousands, if not millions, of tiny interactions. 

Let us make this a bit more concrete. First, the transfer function: a transfer function maps raw data values to visual properties like color and opacity. Let us represent the color assigned to some voxel as C(v) and the opacity as α(v). For each pixel in the final image, a ray is cast through the data volume from the viewer’s perspective. For this we have a ray equation:

P(t) = P₀ + t · d

where P(t) is a point along the ray at parameter t, P₀ is the ray’s origin, and d is the normalized direction vector of the ray. As the ray passes through the volume, the renderer calculates the accumulated color and opacity along the ray. This is often done using compositing, where the color and opacity from each sampled voxel are accumulated to form the final pixel color.
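A toy front-to-back compositing loop (NumPy, with a synthetic density volume and an invented transfer function) captures the whole idea in a few lines:

```python
import numpy as np

# Synthetic volume: a 64^3 grid of densities standing in for CT values.
volume = np.random.rand(64, 64, 64)

def transfer_function(v):
    """Invented transfer function: map a raw density value to (color, opacity)."""
    color = np.array([v, v * 0.8, 1.0 - v])   # some colour ramp
    opacity = v ** 2                           # denser material is more opaque
    return color, opacity

def render_ray(p0, d, n_steps=64, step=1.0):
    """March one ray through the volume, compositing colour front to back."""
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for i in range(n_steps):
        p = p0 + i * step * d                  # P(t) = P0 + t * d
        idx = tuple(np.clip(p.astype(int), 0, 63))
        color, alpha = transfer_function(volume[idx])
        # Front-to-back "over" compositing: earlier samples occlude later ones.
        acc_color += (1.0 - acc_alpha) * alpha * color
        acc_alpha += (1.0 - acc_alpha) * alpha
        if acc_alpha > 0.99:                   # early termination: ray is nearly opaque
            break
    return acc_color

pixel = render_ray(p0=np.array([0.0, 32.0, 32.0]), d=np.array([1.0, 0.0, 0.0]))
print(pixel)
```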

You’ve probably used Volumetric Rendering

Volumetric rendering transforms CT and MRI scans into detailed 3D models, enabling doctors to examine the anatomy and function of organs in a non-invasive manner. Specific applications include most modern CT viewers. Volumetric rendering is also key to creating realistic simulations and environments: in most AR applications, it is used under the hood to overlay interactive, three-dimensional images on the user’s view of the real world, such as in educational tools that project anatomical models for medical students.

Now for the fun part (see the rules here), using volume rendering  in a sentence by the end of the day: 

Serious: The breakthrough in volumetric rendering technology has enabled scientists to create highly detailed 3D models of the human brain. 

Less Serious: I tried to use volumetric rendering to visualize my Netflix binge-watching habits, but all I got was a 3D model of a couch with a never-ending stream of pizza and snacks orbiting around it. 

…I’ll see you in the blogosphere. 

MiWord of the Day is… KL Divergence!

You might be thinking, “KL Divergence? Sounds exotic. Is it something to do with the Malaysian capital (Kuala Lumpur) or a measurement (kiloliter)?” Nope, and nope again! It stands for Kullback-Leibler Divergence, a fancy name for a metric to compare two probability distributions.

But why not just compare their means? After all, who needs these hard-to-pronounce names? Kullback… What was it again? That’s a good point! Here’s the catch: two distributions can have the same mean but look completely different. Imagine two Gaussian distributions, both centered at zero, but one is wide and flat, while the other is narrow and tall. Clearly, not similar!

So, maybe comparing the mean and variance would work? Excellent thinking! But what if the distributions aren’t both Gaussian? For example, a wide and flat Gaussian and a uniform distribution (totally flat) might look similar visually, but the uniform distribution is not parametrized by a mean or variance. So, what do we compare?


Enter KL Divergence!

KL Divergence returns a single number that tells us how similar two distributions are, regardless of their types. The smaller the number, the more similar the distributions. But how do we calculate it? Here’s the formula (don’t worry, you don’t have to memorize it!):

KL(q ‖ p) = Σₓ q(x) log( q(x) / p(x) )

Notice that if the distribution q has probability mass where p doesn’t, the KL Divergence will be large. Good, that’s what we want! But if q has little mass where p has a lot, the KL Divergence will be small. Wait, that’s not what we want! No, it’s not, but luckily KL Divergence is asymmetric! KL(q || p) returns a different value than KL(p || q), so we can compute both! Why are they different? I’ll leave that up to you to figure out!
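A quick numerical sketch (SciPy, with two made-up discrete distributions) shows both the comparison and the asymmetry:

```python
import numpy as np
from scipy.stats import entropy

# Two made-up discrete distributions over the same 4 outcomes.
p = np.array([0.70, 0.20, 0.05, 0.05])
q = np.array([0.25, 0.25, 0.25, 0.25])  # uniform

# scipy.stats.entropy(a, b) returns KL(a || b).
print(entropy(q, p))  # KL(q || p)
print(entropy(p, q))  # KL(p || q): a different number, because KL is asymmetric

# Identical distributions give 0: the smaller the number, the more similar.
print(entropy(p, p))
```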

KL Divergence in Action

Now, the fun part: using KL Divergence in a sentence!

Serious: Professor, can we approximate one distribution with another by minimizing the KL Divergence between them? That’s a great question! You’ve just stumbled on the idea behind Variational Inference.

Less Serious: Ladies and gentlemen, the KL Divergence between London and Kuala Lumpur is large, so our flight time today will be 7 hours and 30 minutes. Please remember to stow your hand luggage in the overhead bins above you, fold up your tray tables, and fasten your seatbelts.

See you in the blogosphere,
Benedek Balla

Mason Hu’s ROP Journey

Hey! I am Mason Hu, a Data Science Specialist and Math Applications in Stats/Probabilities Specialist who just finished my second year. This summer’s ROP in the MiDATA lab has been an enlightening journey for me, marking my first formal venture into the world of research. Beyond gaining insight into the intricate technicalities of machine learning and medical imaging, I’ve gleaned foundational lessons that shaped my understanding of the research process itself. My experience can be encapsulated in the following three points:

Research is a journey that begins with a wide scope and gradually narrows down to a focused point. When I was writing my project proposal, I had tons of ideas and planned to test multiple hypotheses in a row. Specifically, I envisioned myself investigating four different attention mechanisms of UNet and assessing all the possible combinations of them, which was already discouraged by Prof. Tyrrell in the first meeting. My aspirations proved to be overambitious, as the dynamic nature of research led me to focus on some unexpected yet incredible discoveries. One example of this would be my paradoxical discovery that attention maps in UNets with residual blocks have almost completely opposite weights to those without. Hence, for a long time, I delved into the gradient flows in residual blocks and tried to explain the phenomenon. Even when time is limited and not all ambitious goals can be reached, the pursuit of just one particular aspect can lead to spectacular insights.

Sometimes plotting out the weights and visualizing them gives me the best sparks and intuitions. This is not restricted to visualizing attention maps. The practice of printing out important statistics and milestones while training models often bears great fruit. I once printed out each and every one of the segmentation IoUs in a validation data loader, and it surprised me that some of them were really close to zero. I tried to explain this anomaly as model inefficacy, but it just made no sense. Through an intensive debugging session, I came to realize that it was actually a PyTorch bug specific to batch normalization when the batch size is one. As I went deeper and deeper into the research, I gained a better and better understanding of the technical aspects of machine learning and a clearer sense of my research objectives and purpose.

Making models reproducible is a really hard task, especially when configurations are complicated. In training a machine learning model, especially CNNs, we usually have a dozen tunable hyperparameters, sometimes more. The technicality of keeping track of them and changing them is already annoying, let alone reproducing them. Moreover, changing an implementation to an equivalent form might not always produce completely equivalent results. Two seemingly equivalent implementations of a function might have different implicit triggers of functionalities that are hooked to one but not the other. This can be especially pronounced in optimized libraries like PyTorch, where subtle differences in implementation can lead to significantly divergent outcomes. The complexity of research underscores the importance of meticulous tracking and understanding of every aspect of the model, affirming that reproducibility is a nuanced and demanding facet of machine learning research.

Reflecting on this summer’s research, I am struck by the depth and breadth of the learning that unfolded. I faced a delicate balance between pursuing big ideas and focusing on careful investigation, always keeping an eye on the small details that could lead to surprising insights. Most importantly, thanks to Prof. Tyrrell, Atsuhiro, Mauro, and Rosa for all the feedback and guidance. Together, they formed a comprehensive research experience for me. As I look to the future, I know that these lessons will continue to shape my thinking, guiding my ongoing work and keeping my curiosity alive.