MiWORD of the Day Is… Radiomics FM: Broadcasting the Hidden Stories in Medical Images

At first glance, radiomics sounds like the name of a futuristic radio station:
“Welcome back to Radiomics FM, where all your favorite tumors are top hits!”

But no, radiomics isn’t about DJs, airwaves, or tuning into late-night medical jams. Instead, it’s about something even cooler: finding hidden patterns buried deep inside medical images and letting ML models “listen” to what those patterns are trying to say.

Imagine staring at a blurry shadow on the wall. Is it a cat? A chair? A really bad haircut?

Medical images, like CT scans, MRIs, and ultrasounds, can feel just as mysterious to the naked eye. They’re full of shapes, textures, and intensity patterns that look like a mess… until you start digging deeper.

That’s where radiomics comes in. Radiomics acts like a detective with a magnifying glass, picking out tiny, subtle clues inside the fuzziness. It systematically extracts hundreds, sometimes even thousands, of quantitative features from images, including:

  • Texture features (like entropy, smoothness, or roughness)
  • Shape descriptors (capturing the size, compactness, or irregularity of objects)
  • First-order intensity statistics (how bright or dark different regions are)
  • Higher-order patterns (relationships between groups of pixels, captured by matrices such as the gray-level co-occurrence matrix (GLCM) and the gray-level run-length matrix (GLRLM))
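Concretely, here is what extracting such features can look like in code. This is a minimal sketch using the open-source pyradiomics library with its default settings; the file names are hypothetical placeholders for a scan and its region-of-interest mask.

    # A minimal sketch of feature extraction with the open-source pyradiomics
    # library (default settings). File names are hypothetical placeholders.
    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor()

    # One scan plus one region-of-interest mask in, hundreds of named numbers out.
    features = extractor.execute("scan.nii.gz", "tumor_mask.nii.gz")

    for name, value in features.items():
        if name.startswith("original_"):   # skip the diagnostics entries
            print(name, value)             # e.g., original_firstorder_Entropy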

Each of these features gets transformed into structured data: powerful numbers that machine learning models can analyze to predict clinical outcomes. Instead of relying only on human interpretation, radiomics opens a new window into understanding:

  • Will the tumor grow fast or stay slow?
  • Will the patient respond well to a certain treatment?
  • Could we detect early signs of disease long before symptoms appear?

Fun Fact: Radiomics can spot differences so subtle that even expert radiologists can’t always detect them. It’s like giving X-ray vision… to an already X-rayed image. By turning complex images into rich datasets, radiomics is revolutionizing how we approach personalized medicine. It allows researchers to build predictive models, identify biomarkers, and move toward earlier, more accurate diagnoses without the need for additional invasive biopsies or surgeries.

Radiomics reminds us that in science, and in life, what we see isn’t always the full truth. Sometimes, it’s the quiet, hidden patterns that matter most. So next time you see a grayscale ultrasound or a mysterious CT scan, remember: Behind those shadows, there’s a secret world of patterns and numbers just waiting to be uncovered.

Now, try using radiomics in a sentence by the end of the day!

Serious: “Radiomics enables earlier detection of subtle tumor changes that are invisible to the human eye.”

Not so serious: “I’m using radiomics to decode my friend’s emotions, because reading faces is harder than reading scans.”

See you next time in the blogosphere, and don’t forget to tune out Radiomics FM!

Phoebe (Shih-Hsin) Chuang

Phoebe (Shih-Hsin) Chuang’s ROP299 Journey 

Hi everyone! My name is Phoebe (Shih-Hsin) Chuang, and I’m a third-year Computer Science Specialist student with a minor in Statistics and a focus in Artificial Intelligence. This year, I had the opportunity to work on my first formal research project involving machine learning in the field of medical imaging. Although the experience was often stressful and full of challenges, it has definitely been one of the most meaningful and transformative learning experiences of my undergraduate academic journey so far.

Before starting this ROP, I had no prior experience in either machine learning or medical imaging. Choosing a research topic initially felt overwhelming. Formulating a good research question required a deep understanding of the current state of the field, so I spent a great deal of time reading papers to grasp major trends such as image generation, multimodal learning, image segmentation, and classification tasks. Eventually, I decided to focus on adnexal mass classification using ultrasound images from the lab.

A major challenge for this project was the small dataset size compared to those typically used in current literature. Recognizing this limitation, I explored approaches specifically designed for small data scenarios. I found that radiomics was particularly promising, especially given that deep learning models typically require large datasets to generalize well. To make my approach more nuanced, I chose not just to use extracted radiomics features in numeric form, but to generate radiomic feature maps. This allowed me to integrate them directly into convolutional neural networks, leveraging CNNs’ strengths in learning from images.
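To make the idea concrete, here is a rough sketch of how voxel-based extraction can produce such feature maps with pyradiomics. The file names and feature choices are placeholders, not the exact configuration used in the project.

    # Rough sketch: voxel-based radiomics produces feature *maps* (images),
    # which can then be stacked as input channels for a CNN. Placeholder
    # names; not the project's exact configuration.
    import SimpleITK as sitk
    from radiomics import featureextractor

    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.disableAllFeatures()
    extractor.enableFeaturesByName(firstorder=["Entropy"], glcm=["Contrast"])

    # voxelBased=True computes each feature in a small kernel around every
    # voxel, returning a SimpleITK image per feature rather than one number.
    results = extractor.execute("ultrasound.nii.gz", "mass_mask.nii.gz", voxelBased=True)

    for name, value in results.items():
        if isinstance(value, sitk.Image):           # skip scalar diagnostics
            sitk.WriteImage(value, name + ".nrrd")  # one feature map per file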

Aside from selecting the research topic and the technical exploration, one of the biggest lessons I learned, though it may appear minor, was the importance of keeping my code, folders, and documentation organized. Without a clear structure from the beginning, it became very easy to get lost, especially when I paused work for a few days. If I could redo the project, I would definitely prioritize setting up a consistent, organized structure early on to save a lot of confusion and debugging time later.

Looking back, I am deeply grateful to Dr. Tyrrell for offering me this invaluable research opportunity. Through weekly meetings, Dr. Tyrrell emphasized that the primary goal of this experience was not simply achieving great results, but learning the full research process, from identifying gaps in knowledge to formulating research questions and hypotheses, designing experiments, and performing rigorous statistical analyses (since this was a statistics department course!). I would also like to sincerely thank Noushin, our postdoc, whose insightful feedback and support helped me greatly in refining my research questions and overcoming challenges during implementation. Finally, I want to thank everyone else in the lab for their encouragement, shared experiences, and thoughtful suggestions during meetings. It was both inspiring and motivating to see everyone’s projects evolve alongside mine.

This ROP journey has definitely been a steep but rewarding learning curve. It has brought me one step closer to becoming an independent researcher, and I look forward to carrying the skills, mindset, and resilience I built this year into my future research and career endeavours.

Xin Lei’s Personal Reflection

Hi! I’m Xin Lei! I was a second-year student, specializing in Computer Science and majoring in Molecular Genetics, when I began my ROP with Professor Tyrrell.

My project focused on developing a framework that uses Latent Diffusion Models (LDMs) to generate high-fidelity gastrointestinal (GI) medical images from segmentation masks. 

I trained a two-stage pipeline: first, a VQ-GAN model that encodes the structure of unlabeled GI images into a latent space, and then a Latent Diffusion Model conditioned on segmentation masks to generate corresponding realistic GI tract images. To enhance anatomical diversity, I also designed a novel mask interpolation pipeline to create intermediate anatomical configurations, encouraging the generation of diverse and realistic segmentation-image pairs. The hard part was synthesizing new, varied, and coherent medical images for segmentation tasks, and pushing beyond the limitations of existing inpainting and stitching-based generation methods.
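To give one concrete picture of what mask interpolation can mean here, a common trick (a sketch of the general technique, not necessarily the exact pipeline I built) is to blend the signed distance transforms of two binary masks and re-threshold, which produces plausible in-between shapes:

    # Sketch: interpolate between two binary masks by blending their signed
    # distance transforms. Illustrates the general idea of intermediate
    # anatomical configurations, not the project's exact pipeline.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance(mask):
        # Negative inside the mask, positive outside.
        inside = distance_transform_edt(mask)
        outside = distance_transform_edt(1 - mask)
        return outside - inside

    def interpolate_masks(mask_a, mask_b, t):
        # t = 0 returns mask_a, t = 1 returns mask_b, 0 < t < 1 gives in-betweens.
        sdf = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
        return (sdf <= 0).astype(np.uint8)

    # Example: a small circle morphing into a larger, shifted one.
    yy, xx = np.mgrid[0:64, 0:64]
    mask_a = ((yy - 20) ** 2 + (xx - 20) ** 2 < 8 ** 2).astype(np.uint8)
    mask_b = ((yy - 40) ** 2 + (xx - 40) ** 2 < 14 ** 2).astype(np.uint8)
    halfway = interpolate_masks(mask_a, mask_b, t=0.5)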

Overall, it was a lot of paper reading, GitHub repositories visited, and overnight coding sessions, all of which would have been impossible without Professor Tyrrell’s continual support and advice! My biggest mistake was not spending enough time reading about the best current methods for solving my problem of interest. Indeed, countless hours would have been saved if I had found the right repositories and research papers earlier, where others had already implemented parts of the ideas I was trying to build!

Reflecting on my ROP journey, the most difficult part was avoiding the endless rabbit holes of technical optimizations. I would often find myself spending days obsessing over marginal model improvements, investigating every possible architectural tweak or hyperparameter adjustment I could think of. While these deep dives were fun and intellectually stimulating, they were dangerous because no project could ever be delivered on time if perfection was the only goal.

I owe a huge thanks to Professor Tyrrell, who repeatedly pulled me back out of these tangents and helped me refocus on moving the project forward. His guidance taught me one of the most valuable lessons of research: perfect is the enemy of good. A deliverable, working project is far more valuable than an imaginary, flawless one stuck in perpetual revision.

In the end, I am proud of what I accomplished, not just technically, but also in learning how to think more strategically about research. This experience has cemented my excitement about applying AI to real-world medical problems, and I am deeply grateful to Professor Tyrrell and the MiDATA lab for giving me this incredible opportunity.

I can’t wait to see where this journey will take me next!

Xin Lei Lin

MiWord of the Day is… Diffusion!

OK, what comes to mind when you hear the word diffusion? Perfume spreading through a room? A drop of ink swirling into a glass of water? When I first heard the term “diffusion model”, I thought of my humidifier, chaotically diffusing water droplets in my room.

But today, diffusion has taken on a very new meaning in the world of medical imaging!

You’ve probably heard a lot recently about GPT, a family of models that can generate almost anything: stories, poems, even computer code. But did you know that alongside GPT for text, there are other types of models that generate images, like beautiful paintings, photorealistic pictures… and yes, even medical images?

This is where the “diffusion” in diffusion models comes in! Just like my humidifier slowly releases tiny water droplets into the air, diffusion models spread random noise across an image and then cleverly gather it back together to form something meaningful! In my case, instead of generating a cat jumping at the sight of a cucumber, I generate gastrointestinal tract images from their segmentation masks! (Yes, I agree with you, I am cooler.)
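For the technically curious, the noise-spreading half is surprisingly short to write down. Below is a toy NumPy sketch of the forward noising process with a standard linear schedule; the clever half, the learned network that runs the film in reverse, is omitted.

    # Toy sketch of the *forward* half of a diffusion model: gradually mixing
    # an image with Gaussian noise. The learned denoising is what the neural
    # network does, and it is not shown here.
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_version(image, t, T=1000):
        # Standard DDPM-style forward step: keep sqrt(alpha_bar) of the signal,
        # fill the rest with Gaussian noise, following a linear beta schedule.
        betas = np.linspace(1e-4, 0.02, T)
        alpha_bar = np.cumprod(1.0 - betas)[t]
        noise = rng.standard_normal(image.shape)
        return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

    image = rng.random((64, 64))                  # stand-in for a GI image
    slightly_noisy = noisy_version(image, t=50)   # still recognizable
    nearly_pure_noise = noisy_version(image, t=950)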

But what are segmentation masks?

Elementary, my dear Watson! Segmentation masks are like a topographical map, showing the exact locations where Sherlock Holmes (in this case, the radiologist) would search for hidden clues, such as tumors, organs, and vessels, to uncover the cancerous Moriarty’s next plan. They are super important when doctors need to know exactly where to operate or how a disease is spreading.

Until recently, generating these masks required lots of manual work from radiologists, or tons of carefully labeled data. But now?

By training diffusion models properly, we can synthesize realistic segmentation masks, even when data is limited. That means more diverse, more accurate, and more creative ways to augment medical datasets for training better AI models.

It’s like equipping our medical research toolbox with a team of colorful GPUs, each one working like a tireless laboratory assistant, swiftly and precisely creating endoscopy images at the click of a button, generating in moments what used to take hours of painstaking effort. This lets you breathe easy, knowing that your next endoscopy won’t need to be fed into an AI model, thus protecting patient privacy and giving medical professionals more time to focus on what truly matters!

Thank you for reading, and I’ll see you in the blogosphere!

Xin Lei Lin

Nathan Liu’s STA299 Journey

Hi everyone! My name is Nathan Liu, and I am currently a second-year student at the University of Toronto, specializing in Statistics. From May to August 2025, I had the privilege of conducting an independent research project under the supervision of Dr. Pascal Tyrrell. I am deeply grateful for his guidance throughout this journey. This was my first time having an independent research experience in data science, and it proved to be both challenging and rewarding. I would love to share some of the lessons I learned during this summer.

At the core of my project, I focused on the problem of automated grading of knee osteoarthritis (KOA) using deep learning. While recent work has shown promising results, the classification of Kellgren–Lawrence grade 2 (KL2) remains particularly unreliable. My study explored how self-supervised learning (SSL), specifically SimCLR embeddings, could be used to relabel ambiguous KL2 cases and improve classification performance. I designed four experimental pipelines: a baseline, a hard relabeling approach, a confidence-based relabeling approach, and a weighted loss strategy. Along the way, I incorporated quantitative evaluations such as bootstrap confidence intervals and McNemar’s test to assess improvements in KL2 reliability.
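For readers who want a concrete picture of one of those evaluations, here is a minimal sketch of a percentile bootstrap confidence interval for accuracy; the labels below are made-up placeholders, not my project’s data or exact code.

    # Minimal sketch of a percentile bootstrap CI for accuracy on one class
    # (e.g., KL2). Illustrative only; the labels below are made up.
    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder labels: 200 cases, a classifier that is right ~80% of the time.
    y_true = rng.integers(0, 2, size=200)
    y_pred = np.where(rng.random(200) < 0.8, y_true, 1 - y_true)

    def bootstrap_accuracy_ci(y_true, y_pred, n_boot=10_000, alpha=0.05):
        # Percentile bootstrap: resample cases with replacement, recompute accuracy.
        n = len(y_true)
        accs = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, size=n)
            accs[i] = np.mean(y_true[idx] == y_pred[idx])
        return np.quantile(accs, [alpha / 2, 1 - alpha / 2])

    low, high = bootstrap_accuracy_ci(y_true, y_pred)
    print(f"95% CI for accuracy: ({low:.2f}, {high:.2f})")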

Before joining this project, I was already interested in the medical applications of machine learning, but I had never worked directly with this kind of research. I still remember my first lab meeting: Dr. Tyrrell introduced a wide range of ongoing projects on different diseases, and I felt both excited and overwhelmed by the amount of new information. He warned us that the beginning would be the most difficult stage, but I underestimated just how challenging it would be. As I started exploring public databases, I quickly realized that many were incomplete, with missing labels and ambiguous annotations. This left me uncertain about how to begin. At this stage, I am thankful for the help I received from Noushin and Dr. Tyrrell, as well as advice from a previous student in the lab. Their input helped me realize that I needed to commit to working with my own chosen dataset and design a study that I could take full ownership of.

During the research process, I encountered multiple challenges. The KL grading system itself is inherently noisy, and KL2 is especially difficult to identify consistently. On top of that, my dataset was imbalanced, which made model training unstable. Technically, training SimCLR models was not straightforward: convergence was slow, embeddings were difficult to interpret, and results were often not what I expected. Under Dr. Tyrrell’s guidance, I learned to compare different baseline models, and switching from ResNet to EfficientNet immediately improved performance. He also encouraged me to experiment with visualization approaches beyond clustering, which eventually led me to explore spatial distance methods for relabeling KL2 cases. Noushin provided very practical advice on tuning SimCLR hyperparameters to maximize feature learning, which was critical to stabilizing my experiments. Throughout this process, I gained a new appreciation for how problem-solving in research often requires a mix of independent exploration, peer support, and careful reading of the literature.

Looking back, I am especially grateful for the structure of weekly lab meetings. They pushed me to stay disciplined, improve my efficiency, and keep refining my research plan. Just as importantly, they gave me the chance to see how other students tackled projects in different medical domains. I was struck by how many of us faced similar problems—unstable models, imperfect data, unexpected results—and it was reassuring to realize I was not alone. Watching others troubleshoot their difficulties often gave me ideas for my own work.

Overall, this project taught me valuable lessons both technically and personally. On the technical side, I became much more comfortable with self-supervised learning, parameter tuning, and methods for quantifying and visualizing results. On the personal side, I developed patience, resilience, and the ability to adapt when experiments did not go as planned. I also improved my academic writing skills and learned how to present my findings in a structured and convincing way. Most importantly, I am thankful to Dr. Tyrrell for his constructive advice whenever I felt uncertain, and to Noushin for patiently answering many of my technical questions, even the simplest ones. I also want to thank my peers and all the lab members for their support, encouragement, and good company. This experience has not only strengthened my skills but has also made me more confident about pursuing research in medical imaging and machine learning in the future.

MiWORD of the Day is… McNemar Test!

Remember that famous Spider-Man meme where two Spider-Men are pointing at each other, yelling “You’re me!”? That’s basically the spirit of the McNemar Test. It’s a statistical tool that checks whether the same group of people changes their answers under two different conditions.

Think of it like this: yesterday everyone swore bubble tea was the best, but today half of them suddenly insist black coffee is the only way to survive finals. The McNemar Test is the referee here—it counts how many people actually flipped sides and asks, “Okay, is this change big enough to matter, or is it just random mood swings?”

The McNemar Test works on paired data. The total numbers don’t matter as much as the people who changed their minds.

People who said “yes” before and still say “yes” after → not interesting.

People who said “no” before and still say “no” after → also not interesting.

The stars of the show? Those who said “yes” before and “no” after, and those who said “no” before and “yes” after. The test compares these two groups. If the difference between them is large, it means the change is real, not just random noise.

In clinical research this is super important. Suppose a study tests whether a new drug actually helps with a disease. A total of 314 patients are observed both before and after treatment. Here’s the data (rows are the status before treatment, columns the status after):

                        Sick after    Healthy after
    Sick before             101             121
    Healthy before           59              33

Here’s what’s going on: 101 stayed sick before and after. 33 stayed healthy before and after. 121 improved (from sick → healthy). 59 worsened (from healthy → sick).

Now, McNemar steps in. Write b for the patients who improved (121) and c for the patients who worsened (59); the test statistic is:

    χ² = (b − c)² / (b + c) = (121 − 59)² / (121 + 59) ≈ 21.35

That is way too extreme to happen by chance (p < 0.001). Translation: the drug worked; the number of patients who got better is significantly higher than the number who got worse.
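If you would rather not push the numbers through the formula by hand, the test ships with the statsmodels Python library. Here is a small sketch reproducing the example above (exact=False selects the chi-square version, and correction=False matches the plain formula; the library applies a continuity correction by default):

    # Re-running the drug example with statsmodels' McNemar implementation.
    from statsmodels.stats.contingency_tables import mcnemar

    # Rows = before treatment (sick, healthy); columns = after (sick, healthy).
    table = [[101, 121],   # sick before:    101 stayed sick, 121 improved
             [ 59,  33]]   # healthy before:  59 worsened,     33 stayed healthy

    result = mcnemar(table, exact=False, correction=False)
    print(result.statistic, result.pvalue)   # ~21.35 and p < 0.001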

In medicine (or in evaluating machine learning models), it’s not enough to just report an overall accuracy. What really matters is whether the changes—improvements or mistakes—are meaningful and consistent. The McNemar Test is a simple way to check if those differences are statistically real.

Now let’s use the McNemar Test in a sentence.

Serious: In a clinical trial, the McNemar Test showed that significantly more patients improved after treatment than worsened, proving the drug’s effectiveness.

Less Serious: Yesterday my friend swore pizza was the best food on earth. Today she switched to sushi. According to McNemar, this isn’t just random—it’s a statistically significant betrayal.

See you in the blogosphere!

Nathan Liu

MiWORD of the Day is… Blur!

Have you ever tried to take a perfect vacation photo in Toronto, only to find your friend’s face is a mysterious smudge and the CN Tower looks like it’s melting? Blur has a way of sneaking into our lives, and it is everywhere. Sometimes it is more fascinating than you might think.

The smudge you see in your photo is blur. Blur has existed since the first camera was invented because film or sensors need time to gather light. If either the subject or the camera moves during this exposure time, the image appears blurred. In our discussion, we will focus on motion blur caused by fast movement, rather than unrelated effects like pixelation or mosaic artifacts. You might have experienced motion blur when taking a shaky phone photo, wearing foggy glasses, or watching a baseball fly past at incredible speed. But blur is not always a flaw.
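For the curious, motion blur is easy to simulate, because mathematically it is just the image averaged over the positions the subject occupied during the exposure. Here is a small sketch that smears a placeholder image with a horizontal line kernel:

    # Sketch: motion blur as an average over positions during the exposure.
    # A horizontal line kernel smears each pixel across its neighbours, the
    # same way a moving subject smears across the sensor.
    import numpy as np
    from scipy.ndimage import convolve

    def horizontal_motion_blur(image, length=15):
        # Equal weight for each moment of the exposure: a 1 x length line kernel.
        kernel = np.full((1, length), 1.0 / length)
        return convolve(image, kernel, mode="nearest")

    sharp = np.random.default_rng(1).random((128, 128))   # stand-in for a photo
    blurred = horizontal_motion_blur(sharp)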

In the world of art, blur has often been a feature rather than a mistake. Think of Claude Monet’s Water Lilies (held by The MET; I highly recommend seeing it in person and viewing it from different distances): soft edges, blended colors, shapes shimmering in the light. Or consider long-exposure photographs of city traffic, where headlights stretch into glowing ribbons. In these cases, blur captures motion, mood, and mystery, transforming the ordinary into something extraordinary. Even in classic cinema, motion blur helps create a sense of speed or dreamlike atmosphere.

In sports, blur can tell an entire story. The fastest recorded baseball pitch reaches 105.8 miles per hour, far too fast for the human eye to follow clearly. To freeze it, cameras must shoot at over 1,000 frames per second. A racecar streaking past the finish line or a sprinter in motion may appear as streaks of color, yet our brains still understand exactly what is happening. Motion blur, in these cases, is not a mistake; it is evidence of speed and energy.

In science, blur can reveal a very different kind of truth. Consider echocardiography, an ultrasound imaging method for the heart. These moving pictures help doctors assess heart function, blood flow, and valve performance. Yet even the tiniest shake of the probe, a restless patient, or the natural motion of the heartbeat can smear crucial details. There is even a trade-off between frame rate and depth of view: a typical knee ultrasound operates at around 20 frames per second, while heart ultrasound often reaches about 50 frames per second. A blurry heart chamber is more than an inconvenience; it can obscure the clues doctors need to make the right decision. Other imaging fields, such as X-ray or MRI, face similar challenges with motion blur. Interestingly, scientists also study the patterns of blur to improve image quality, since sometimes the “smudge” itself contains useful information about movement or structure.

Blur can be playful, expressive, and at times essential. It reminds us that seeing clearly is not always straightforward and that what appears imperfect can still hold meaning. From the sweep of a painter’s brush to the rhythm of a beating heart on a screen, blur reflects a world that is always moving and changing. Sometimes, beauty and truth live within that very imperfection.

Now for the fun part — using blur in a sentence by the end of the day:
Serious: Did you notice the blur in the long-exposure shot of the city at night? The headlights look like flowing rivers of light.
Less serious: While running to catch the bus, I accidentally created a blur of people in my phone photo. What a perfect accidental art piece.

…I’ll see you in the blogosphere.

Qifan Yang

Qifan Yang’s Personal Reflection

My name is Qifan Yang, and I am an incoming third-year student at the University of Toronto, pursuing a Statistics Major and a Specialist in Mathematical Applications in Finance and Economics. This past summer, I had the opportunity to work on an ROP299 research project with Professor Tyrrell, and I would like to share my four-month journey in research, a completely new experience for me.

When I started, I was a complete novice in medical imaging and unfamiliar with the full process of scientific research. Before our first meeting, I felt quite nervous. I still remember Professor Tyrrell, during the interview, warning me about the potential challenges ahead. Coming from a statistics and mathematics background, I initially found both machine learning concepts and medical terminology quite intimidating. Although I had completed a few Kaggle courses, I lacked hands-on experience with building models from raw datasets and running end-to-end training and testing.

My research journey began along two paths: first, learning the fundamentals of machine learning and medical imaging, where review papers became my best starting point, and second, exploring rheumatic heart disease (RHD) and its potential for automated diagnosis using transthoracic echocardiography (TTE). The first obstacle I encountered was the lack of publicly available, large-scale datasets for RHD with detailed labels. This led me to pivot toward studying image quality in TTE, since I found a large echocardiography database with quality labels. However, a second challenge soon emerged: I struggled to identify a research question that was both technically meaningful and scientifically impactful.

This is where Professor Tyrrell’s mentorship made all the difference. In one group meeting, he mentioned severe motion blur he had observed in knee ultrasound images. That sparked the idea for my project: detecting and correcting non-uniform motion blur in echocardiography using deep learning. This was the turning point when the project truly began to take shape.

The real research work involved splitting and labeling datasets, designing a neural network model, training and testing on GPUs, and visualizing and evaluating results. Each of these steps was entirely new to me, requiring both technical learning and persistent problem-solving. I am deeply grateful for the guidance of Professor Tyrrell, as well as the support from Giuseppe, Noushin, and other members of the lab, including previous students whose work provided valuable reference points.

By the end of the summer, I had taken full charge of the project, running it from start to end. This responsibility taught me far more than technical skills. I developed a stronger sense of self-motivation, learned to manage my time effectively, and built the resilience needed to handle research setbacks. I realized that research is not just about repetitive lab work; it is about thinking critically, asking meaningful questions, and telling a compelling story through data and results.

The experience was more than an introduction to the research world; it taught me to think boldly and work carefully. I learned not to let ideas live only in conversation or in my head, but to translate them into small, testable experiments that turn speculation into evidence. Each modest prototype, whether a quick data split, a minimal model, or a rough visualization, sharpened my questions, exposed constraints, and informed the next step. Gradually, those incremental wins compounded into a coherent pipeline and credible results. The discipline I gained is simple but powerful: think wild, start small, measure honestly, and move steadily. This balance of wild curiosity with careful craftsmanship now guides how I approach complex, unfamiliar problems, and it’s the mindset I’ll carry into future research and professional work.

Winnie Ye in STA299

This was my first course related to research, and also my first time working with medical imaging. When I heard that we would be doing independent research, I immediately realized that this course would undoubtedly be a great challenge for me. Independent research meant there was no clear “standard answer”; instead, I had to explore and persist on my own.

At the beginning of my ROP project, I was actually the first student in the class to finalize a research direction. I quickly chose skin tone bias in melanoma detection as my topic and decided to work with the ISIC dataset. At that time, I felt well prepared: even though I noticed that dark-skin samples were rare, I believed the number would be “enough.” I even imagined finishing the project in less than two months.

But soon, reality hit me. Out of more than 30,000 ISIC images, there were almost no dark-skin cases. After that, I kept switching datasets: PAD, Fitzpatrick17k, MSKCC. However, each of them had serious problems: some had almost no melanoma cases, some had almost no dark-skin samples, some images contained a lot of background noise rather than just lesions, and some lacked skin tone labels altogether. Even when I combined them, the total number of dark-skin melanoma images was barely more than one hundred. During that period, I felt like I was constantly “starting over,” and every time I thought I had found a breakthrough, it quickly fell apart.

In this struggle, I tried almost everything I could think of. I trained my own U-Net, experimented with CLIP, SVM, EfficientNet, and ResNet; I tested light-skin-trained models directly on dark-skin data; I even used YOLO to crop lesions in order to reduce background noise. My research focus also shifted again and again: from melanoma, to pigmented lesions, and finally to red scaly diseases; and my tasks shifted from classification to segmentation and back again. Altogether, I must have attempted more than a dozen different approaches, yet none of them produced satisfactory results.

As the deadline drew closer, my anxiety grew stronger. By the last month, despite all the models, tasks, and research objects I had tried, I still had no meaningful results to show. At times I felt completely lost, unsure of what else I could even do. In desperation, I wrote Dr. Tyrrell a very long email, confessing that I might not be able to continue and even considered abandoning the project altogether. I told him that if I could start over, I would never choose to study bias so hastily, but would first spend more time carefully understanding the limitations of the datasets.

That month was probably the hardest part of the entire ROP. I stayed up late almost every day, exhausted and anxious, sometimes even afraid to run my code because I expected yet another failure. Dr. Tyrrell was sometimes worried and even a bit frustrated, which made me feel sad, but I was also deeply grateful that he cared so much. In the final weeks, Giuseppe also began to support me more closely, and I truly appreciated his help. During that time, even the smallest result—no matter how unrepresentative—felt important enough for me to immediately share with Dr. Tyrrell and Giuseppe for feedback.

Finally, near the very end, something changed. About ten days before the deadline, I obtained a result that was still imperfect, but at least demonstrated a sign of bias. It was not a breakthrough, but it was enough to build a conclusion. In the last week, I focused on writing the report, experimenting with bias-mitigation methods, and managed to finish everything just in time.

Looking back on these four months, I went through so many emotions: the early excitement of being “ahead,” the anxiety of being overtaken, the regret and despair of repeated failures, and the relief of a small last-minute success. If you ask me what kept me going, I honestly don’t know: perhaps the support from Dr. Tyrrell and Giuseppe, perhaps the stubborn voice in my head saying “try one more time,” or perhaps just a little bit of luck.

Through this course, I developed a new understanding of medical imaging and machine learning: they are not only technical problems but also involve fairness, data limitations, and persistence throughout the research process. I realized that the true value of research is not in quickly achieving a perfect result, but in continuously experimenting, reflecting, and learning from failures. In the future, I hope to further explore fairness in medical imaging, especially to investigate why my findings differed from previous studies and how I can avoid or better explain such discrepancies. I believe this will not only help me improve my research methods but also allow me to move forward more confidently on my academic path.