MiWord of the Day Is… Roentgen!

Welcome to the first Medical Imaging Word of the Day! Here is how it works:


1- I introduce and discuss a word.
2- You have to use the word in a sentence by the end of the day. No need to use it in the correct context – actually out of context is more fun and elicits a more entertaining response!




OK, here we go. The word of the day is Roentgen – typically pronounced “Rent-gun”.

Wilhelm Roentgen was a physicist from northern Germany who in 1895 was the first to detect the now famous x-rays. Interestingly, he was not the first to produce them. X-rays are the part of the electromagnetic spectrum with shorter wavelengths (0.01 to 10 nm) than visible light (390-700 nm). We will talk about the spectrum in another post, as today is about Roentgen.

 
The interesting discovery was that this was a new kind of light – one that could not be seen but could be detected. Most importantly, it gave physicians the ability to peer inside the body of a patient without having to cut it open – a camera that can see inside the body.
 
An interesting and maybe ironic fact is that Roentgen – the discoverer of a new way to “see” – was blind in one eye (from a childhood illness) and color blind.
 
Here are some other interesting facts:
 
  • Following his discovery, the “Roentgen unit” was described and used to measure x-ray exposure (one R is 2.58 × 10⁻⁴ C/kg). About 500 R over 5 hours is considered a lethal dose for humans (see the quick conversion sketch after this list).
  • Roentgen was the first scientist to receive the Nobel Prize in Physics, in 1901. He refused to patent his discovery and gave the entire prize money to his university. Wow, what a guy!
  • He died of colon cancer in 1923.
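
For the numerically inclined, here is a minimal Python sketch of those numbers (the chest x-ray figure of about 1/20 R is the one quoted in the example sentence further down; everything here is back-of-the-envelope illustration, not dosimetry advice):

```python
# Back-of-the-envelope Roentgen arithmetic (illustrative only)
R_TO_C_PER_KG = 2.58e-4   # one Roentgen unit expressed in coulombs per kilogram

chest_xray_R = 1 / 20     # rough exposure from one chest x-ray, in R
lethal_R = 500            # roughly lethal exposure over 5 hours, in R

print(chest_xray_R * R_TO_C_PER_KG)  # ~1.3e-05 C/kg for a chest x-ray
print(lethal_R / chest_xray_R)       # ~10,000 chest x-rays' worth of exposure
```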
 
So, now we have to use “Roentgen” in a sentence. Here are two examples:
 
Serious: Hey Frank, I see you just came back from having a chest x-ray. Did you know that you just received about 1/20 of a Roentgen? Oh, and I am glad to hear you don’t have pneumonia…
 
Not so serious: Hello, I will be travelling to Europe this summer and will need to exchange some Canadian dollars for Euros. Could you tell me the exchange rate? And while you’re at it, what is today’s rate on the Roentgen? Never heard of that currency? Really? It’s German I think…
 
 
OK, unbelievably I found a music link to Roentgen! Hyde produced an album named “Roentgen” and one of the main tracks is aptly called “Unexpected”. Yup, I’m serious…
 
 
See you in the blogosphere,
 
 
Pascal Tyrrell

You like potato and I like potahto… Let’s Call the Whole Thing Off!

We have been talking about agreement lately (not sure what I am talking about? See the start of the series here) and have covered many terms that seem similar. Help!


Before you call the whole thing off and start dancing on roller skates like Fred Astaire and Ginger Rogers did in Shall We Dance, let’s clarify the difference between agreement and reliability a little.


When assessing agreement in medical research, we are often interested in one of three things:


1- comparing methods – à la Bland and Altman.


2- validating an assay or analytical method.


3- assessing bioequivalence.




Agreement represents the degree of closeness between readings. We get that. Reliability, on the other hand, actually assesses the degree of differentiation between subjects – one’s ability to tell subjects apart within a population. Yes, I realize this is a subtlety, just as Ella Fitzgerald and Louis Armstrong sing about in the original Let’s Call the Whole Thing Off.


Now, when assessing agreement one will often use an unscaled index (i.e., a continuous measure for which you calculate the Mean Squared Deviation, Repeatability Standard Deviation, Reproducibility Standard Deviation, or the Bland and Altman Limits of Agreement), whereas when assessing reliability one often uses a scaled index (i.e., a measure for which you can calculate the Intraclass Correlation Coefficient or Concordance Correlation Coefficient). This is because a scaled index mostly depends on between-subject variability and, therefore, allows for the differentiation of subjects within a population.


OK – clear as mud. Here are some very basic guidelines (sketched in code after the list):


1- Use descriptive stats to start with.


2- Follow it up with an unscaled index measure like the MSD or the Bland and Altman Limits of Agreement, which deal with absolute differences between readings.


3- Finish up with a scaled index measure that will yield a standardized value between -1 and +1 (like the ICC or CCC).
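
Here is what those three steps might look like in practice – a minimal Python sketch with made-up readings from two readers (the numbers, and the choice of the CCC as the scaled index, are mine for illustration):

```python
import numpy as np

# Made-up readings from two readers on the same 10 subjects
x = np.array([10.1, 12.3,  9.8, 11.5, 13.0, 10.7, 12.9,  9.5, 11.1, 12.4])
y = np.array([10.4, 12.1, 10.0, 11.9, 12.8, 11.0, 13.2,  9.9, 11.0, 12.7])

# 1- Descriptive stats
print("means:", x.mean(), y.mean(), "SDs:", x.std(ddof=1), y.std(ddof=1))

# 2- Unscaled indices: Mean Squared Deviation and Bland-Altman Limits of Agreement
d = x - y
msd = np.mean(d ** 2)
loa = (d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1))
print("MSD:", msd, "LoA:", loa)

# 3- Scaled index: Lin's Concordance Correlation Coefficient (one common sample form)
ccc = 2 * np.cov(x, y)[0, 1] / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)
print("CCC:", ccc)
```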


Potato, potahto. Whatever.




Entertain yourself with this humorous clip from the Secret Policeman’s Ball and I’ll…

See you in the blogosphere!




Pascal Tyrrell

2 Legit 2 Quit

MC Hammer. Now those were interesting pants! Heard of the slang expression “Seems legit”? Well, “legit” (short for legitimate) was popularized by MC Hammer’s song 2 Legit 2 Quit. I had blocked the memories of that video for many years. Painful – and no, I never owned a pair of Hammer pants!





Whenever you sarcastically say “seems legit” you are suggesting that you question the validity of the finding. We have been talking about agreement lately and we have covered precision (see Repeat After Me), accuracy (see Men in Tights), and reliability (see Mr Reliable). Today let’s cover validity.




So, we have talked about how reliable a measure is under different circumstances, and this helps us gauge its usefulness. However, do we know if what we are measuring is what we think it is? In other words, is it valid? Now, reliability places an upper limit on validity – the higher the reliability, the higher the maximum possible validity. So random error will affect validity by reducing reliability, whereas systematic error can directly affect validity – if there is a systematic shift of the new measurement from the reference or construct. When assessing validity we are interested in the proportion of the observed variance that reflects variance in the construct that the method was intended to measure.
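
You can see that upper limit with a quick simulation – a minimal Python sketch (the error size and sample size are arbitrary choices of mine): the validity correlation comes out close to the square root of the reliability, never above it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

construct = rng.normal(size=n)             # the "true" quantity we want to measure
measure = construct + rng.normal(size=n)   # add random measurement error (SD = 1)

# Reliability: share of observed variance that is true-score variance (~0.5 here)
reliability = construct.var() / measure.var()

# Validity: correlation of the measure with the construct (~sqrt(0.5) ≈ 0.71)
validity = np.corrcoef(measure, construct)[0, 1]

print(reliability, validity)
```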


***Too much stats alert*** Take a break and listen to Ice, Ice, Baby from the same era as MC Hammer and when you come back we will finish up with validity. Pants seem similar – agree? 🙂




OK, we’re back. The most challenging aspect of assessing validity is the terminology. There are several different types of validity, depending on the type of reference standard you decide to use (details to follow in later posts):


1- Content:  the extent to which the measurement method assesses all the important content.


2- Construct: when measuring a hypothetical construct that may not be readily observed.


3- Convergent: new measurement is correlated with other measurements of the same construct.


4- Discriminant: new measurement is not correlated with unrelated constructs.

So why do we assess validity? Because we want to know the nature of what is being measured and the relationship of that measure to its scientific aim or purpose.




I’ll leave you with another “seems legit” picture that my kids would appreciate…





See you in the blogosphere,




Pascal Tyrrell







Mr Reliable

Kevin Durant is Mr Reliable

Being reliable is an important and sought-after trait in life. Kevin Durant has proven himself to be just that to the NBA. Would you agree (pun intended)? So, we have been talking about agreement lately and we have covered precision (see Repeat After Me) and accuracy (see Men in Tights). Today let’s talk a little about reliability.

 
As I mentioned last time, the concepts of accuracy and precision originated in the physical sciences, where direct measurements are possible. Not to be outdone, the social sciences (and later the medical sciences) decided to define their own terms of agreement – validity and reliability.
 
So the concept of reliability was developed to reflect the amount of error, both random and systematic, in any given measurement. For example, you might want to assess the measurement error in repeated measurements on the same subject under identical conditions, or the consistency of two readings obtained by two different readers on the same subject under identical conditions.
 
The reliability coefficient is simply the ratio of variability between subjects to the total variability (the sum of subject variability and measurement error). A coefficient of 0 indicates no reliability, and 1 indicates perfect reliability with no measurement error.
 
Being Mr Reliable (see the trailer to this cool old movie from the sixties) is always desirable, but when you consider reliability, remember that:
 
1- A true score exists but is not directly measurable (philosophical…)
 
2- A measurement is always the sum of the true score and a random error.
 
3- Any two measurements for the same subject are parallel measurements in that they are assumed to have the same mean and variance.
 
 
With these assumptions in place, reliability can also be expressed as the correlation between any two measurements on the same subject – AKA the intraclass correlation coefficient or ICC (originally defined by Sir Francis Galton and later further developed by Pearson and Fisher). We will talk about the ICC in a later post.
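
In the meantime, here is a minimal Python sketch of those two faces of reliability (the variances and the number of subjects are made up; by construction the true coefficient is 16 / (16 + 4) = 0.8): the variance ratio and the correlation between two parallel measurements land in the same place.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 5_000

true_score = rng.normal(loc=50, scale=4, size=n_subjects)  # between-subject SD = 4
m1 = true_score + rng.normal(scale=2, size=n_subjects)     # two parallel measurements,
m2 = true_score + rng.normal(scale=2, size=n_subjects)     # each with error SD = 2

# Reliability as a variance ratio: between-subject / (between-subject + error)
icc_as_ratio = 4**2 / (4**2 + 2**2)                        # = 0.8 by construction

# Reliability as the correlation between two parallel measurements
icc_as_correlation = np.corrcoef(m1, m2)[0, 1]             # ≈ 0.8

print(icc_as_ratio, icc_as_correlation)
```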
 
Phew! That was a mouthful. All this talk of reliability is exhausting. Maybe Lean on me (or Bill Withers, actually) for a bit and we will talk about validity when we come back…




See you in the blogosphere,






Pascal Tyrrell

Men in Tights?

One of the first movies my parents took me to see was Disney’s Robin Hood in 1973. This was back in the days when movies were viewed in theaters and TV was still black and white for most people. One of Robin’s most redeeming qualities is his prowess as an archer. He simply never misses his target. Well, maybe not so much in Mel Brooks’s rendition, Robin Hood: Men in Tights!


We have been talking about agreement lately, and last time we covered precision (see Repeat After Me). We discussed that precision is most often associated with random error around the expected measure. So, now you are thinking: how about the possibility of systematic error? You are right. Let’s take Robin Hood as an example. If he were to loose 3 arrows at a target and all of them were to land in the bulls-eye, then you would say that he has good precision – all arrows were grouped together – and good accuracy – all arrows landed in the centre ring. Accuracy is a measure of “trueness” – the least amount of bias relative to the true value. Now if all 3 arrows landed in the same ring but in different areas of the target, he would have good accuracy – all 3 arrows receive the same points for being in the same ring – but poor precision, as they are not grouped together.
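
If you like numbers better than arrows, here is a minimal Python sketch of the same idea (the offsets and spreads are made up): a systematic shift of the group’s centre is poor accuracy; scatter within the group is poor precision.

```python
import numpy as np

rng = np.random.default_rng(2)
n_arrows = 1_000

# Arrow landing positions relative to a bulls-eye at (0, 0)
bias = np.array([3.0, 0.0])   # systematic error: aim is consistently off to the right
scatter_sd = 0.5              # random error in each individual shot

shots = bias + rng.normal(scale=scatter_sd, size=(n_arrows, 2))

# Accuracy: how far the centre of the group sits from the bulls-eye (~3.0)
accuracy_error = np.linalg.norm(shots.mean(axis=0))

# Precision: how tightly the shots group around their own centre (~0.5)
precision_sd = shots.std(axis=0, ddof=1).mean()

print(accuracy_error, precision_sd)
```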



As agreement is a measure of “closeness” between readings, it is not surprising that it is a broader term that contains both accuracy and precision. You are interested in how much random error is affecting your ability to measure something AND whether or not there also exists a systematic shift in the values of your measure. The former results in an increased level of background noise (variability), and the latter in a shift of the mean of your measures away from the truth. Both are important when considering overall agreement.


OK, take a break and watch Shrek Robin Hood. The first of a series is always the best…


Now the concepts of accuracy and precision originated in the physical sciences. Not to be outdone, the social sciences decided to define their own terms of agreement – validity and reliability. We will discuss these next time after you listen to Bryan Adams – Everything I Do from the Robin Hood soundtrack. Great tune.






See you in the blogosphere,




Pascal Tyrrell

Michener Institute Series: Princess Margaret Hospital, Toronto, Ontario



As first year Radiation Therapy students here at The Michener Institute, we are currently in our 4th week of clinical placements! As promised, here’s a little update about the experiences Jennifer and Ori are going through at Princess Margaret Hospital.

Jennifer: I’ve been placed in Unit 10, which specializes in treating patients with genitourinary, gynaecological, and lower gastrointestinal cancers.

Ori: I’m on Unit 14, where we treat breast cancer and deliver palliative treatments.

We are proud to say that we are enjoying our experience here. Our duty as students in training is to follow the radiation therapists and learn what they do. The job of a therapist is to treat cancer using a machine called a linear accelerator (Linac) to deliver ionizing radiation. Patients will typically come once a day for a couple of weeks, so we see the same patients every day and really get to know them well. There is a fair amount of patient interaction, which is one of our favorite parts of the job. Along with patient interactions, we also get to use the equipment, which mainly involves operating the Linac (the machine that delivers the radiation) and taking x-rays or CT scans to make sure the patient is in the right position. Every day is a new experience and we are constantly learning new skills. We also get a better insight into the patient’s perspective over the entire span of their radiation treatment. For example, most patients in Unit 10 are required to have a full bladder and an empty rectum. Having to hold their pee can be quite difficult for some patients, especially when there are delays, which makes Unit 10 a very fast-paced environment. Overall, our first 4 weeks of clinical placement have been an exceptionally valuable experience and we’re looking forward to our next 4 weeks!


Until next time!

Jennifer and Ori


Michener Institute Series: Clinical Placement Site – Kingston Ontario

 
 
(Kingston City Hall)

It has been a month since the start of summer clinical placement, and I am currently completing my placement at Kingston General Hospital (KGH) here in Kingston, Ontario. Kingston is a beautiful town located where the St Lawrence River flows out of Lake Ontario; it was the first capital of the Province of Canada, back when Canada was still a British colony.
 
KGH hosts one of the easternmost cancer centres in Ontario, and it has a beautiful view because it is situated on the shore of Lake Ontario, its front entrance opening to the water. It is a perfect place to have lunch and enjoy the sun during the summer.
 



(KGH cancer centre front entrance)

(MacDonald Park by the water, in front of the cancer centre)
 



 
The past month was phenomenal; words cannot fully describe the knowledge and experience we gained from clinical practice. The transition from purely academic work to hands-on practice is eye-opening and a bit hectic, because each patient is unique and no knowledge from books can prepare you for interacting with every patient. It is interesting to learn from the therapists: the way they educate patients on their first day of treatment, and the approach they take with each patient based on the assessment they make during their conversations. It’s amazing how much compassion the therapists have for patients and how much they care for them.
 
 
During the first two weeks in the CT simulation unit, I made my first mask and had my own mask made, as used for treatment to the head and neck regions. The mask comes as a sheet of pliable plastic in a frame and is put into a warm/hot water bath for 2-4 minutes to make it pliable; once it is taken out of the bath there is a 30-60 second window before it hardens. The therapist takes out the mask, towel-dries it as much as possible, and molds it over the patient’s head as fast as possible. The therapists are very efficient at their job, but what is amazing is what the patients go through in the process: imagine a warm, moist piece of plastic covering your face, hardening in an instant and locking your head into position, after which you cannot move for 5-10 minutes for the CT scan. I had never thought about the discomfort until I experienced it myself.
 
 

 

(My 1st mask – you can kinda see my face print)
 
So far the experience here has been amazing, and hopefully the coming month of June will be equally fantastic.
 
Till next time.
 
 

 

Gordon

Repeat After Me…

So, in my last post (Agreement Is Difficult) we started to talk about agreement which measures “closeness” between things.  We saw that agreement is broadly defined by accuracy and precision. Today, I would like to talk a little more about the latter.


The Food and Drug Administration (FDA) defines precision as “the degree of scatter between a series of measurements obtained from multiple sampling of the same homogeneous sample under the prescribed conditions”. This means precision is only comparable under the same conditions, and it generally comes in two flavors:


1- Repeatability, which measures the purest form of random error – not influenced by any other factors. It is the closeness of agreement between measures under the exact same conditions, where “same conditions” means that nothing has changed other than the times of the measurements.


2- Reproducibility, which is similar to repeatability but represents the precision of a given method under all possible conditions on identical subjects over a short period of time. So, the same test items but in different laboratories, with different operators, using different equipment, for example (a small sketch of the difference follows below).
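
To make the distinction concrete, here is a minimal Python sketch (the labs, offsets, and error sizes are invented, and the variance bookkeeping is deliberately simplified): repeatability only sees the within-lab scatter, while reproducibility also absorbs the lab-to-lab differences.

```python
import numpy as np

rng = np.random.default_rng(3)

true_value = 100.0
n_labs, n_repeats = 5, 20

# Each lab carries its own small systematic offset; within a lab only random error remains
lab_offsets = rng.normal(scale=1.5, size=n_labs)
data = (true_value + lab_offsets[:, None]
        + rng.normal(scale=0.5, size=(n_labs, n_repeats)))

# Repeatability SD: pooled within-lab scatter (same lab, same operator, same equipment)
repeatability_sd = np.sqrt(data.var(axis=1, ddof=1).mean())          # ~0.5

# Reproducibility SD (simplified): within-lab scatter plus between-lab variation
reproducibility_sd = np.sqrt(repeatability_sd**2 + data.mean(axis=1).var(ddof=1))

print(repeatability_sd, reproducibility_sd)                          # ~0.5 vs ~1.6
```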




Now, when considering agreement, if one of the readings you are collecting is an accepted reference, then you are most probably interested in validity (we will talk about this in a future post), which concerns the interpretation of your measurement. On the other hand, if all of your readings are drawn from a common population, then you are most likely interested in assessing the precision of the readings – including repeatability and reproducibility.


As we have just seen, not all repeats are the same! Think about what it is that you want to report before you set out to study agreement – or you could be destined to do it over again, as Tom Cruise does in his latest movie Edge of Tomorrow, where he lives, dies, and then repeats until he gets it right…






See you in the blogosphere,




Pascal Tyrrell

Agreement Is Difficult: So What Are We Gonna Do? I Dunno, What You Wanna Do?

It is never easy to come to an agreement – even amongst friends! The Vultures from Disney’s The Jungle Book (oldie but a goodie) certainly know this. In medical research measuring agreement is also a challenge. In this series of posts I am going to talk about agreement and how it is measured.


Agreement measures the “closeness” between things. It is a broad term that contains both “accuracy” and “precision”. So, let’s say you are shopping for screen protectors for your wonderful new phone. You head to the internet and start going through the gazillion links advertising screen protectors of all sizes and styles. As you just spent your savings on the phone, you do not have much money left over for the screen protector. You decide on a generic brand and order a pack of 10. After an unbearable wait of a week to receive them in the mail you open the pack and find that even though you ordered the screen protector to fit your specific phone they are a little small… except for two that fit perfectly! What? That’s annoying.


So, how close are the screen protectors to being “true” to the expected product? This is agreement. Now, most of them are a little small. This represents poor “accuracy”, because there exists a systematic bias: if you took the mean size of these 10 protectors you would find that it deviated from the true expected value – size, in this case. Furthermore, you found that two of the 10 protectors actually fit your phone screen rather well. This is great, but this inconsistency between your protectors represents poor “precision”. This time we are interested in the degree of scatter between your measurements – a measure of within-sample variation due to random error (see my Dickens post for more info).
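
Here is the screen protector story as a minimal Python sketch (the widths and the 70 mm “true” size are numbers I made up to match the story): the mean running small is the bias (poor accuracy), and the spread around that mean is the scatter (poor precision).

```python
# Ten protector widths in mm against a true screen width of 70 mm
# (made-up numbers: most run small, two happen to fit)
widths = [68.1, 68.3, 67.9, 70.0, 68.2, 68.0, 70.1, 68.4, 67.8, 68.2]
true_width = 70.0

n = len(widths)
mean_width = sum(widths) / n

bias = mean_width - true_width        # systematic error -> poor accuracy (~-1.5 mm)
scatter = (sum((w - mean_width) ** 2 for w in widths) / (n - 1)) ** 0.5
                                      # random error -> poor precision (~0.8 mm SD)

print(bias, scatter)
```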


Now, the concepts of accuracy and precision originated in the physical sciences, where direct measurements are possible. Not to be outdone, the social sciences (and, soon after, the medical sciences!) decided to establish similar concepts – validity and reliability. We will discuss these in a later post, but for now simply remember that the main differences are that a reference is required for validity and that both validity and reliability are most often assessed with scaled indices.


Phew! That was a little confusing. Have a listen to We Just Disagree from Dave Mason to relax.


Next post we will look a little more closely at two special kinds of precision – repeatability and reproducibility.




See you in the blogosphere,


Pascal Tyrrell

It’s All Relevant According to Einstein… or Was It Relative?

One of Einstein‘s many theories is that a light beam always appears to have the same speed, no matter how fast you are moving relative to it. This theory is also one of the foundations of Einstein’s special theory of relativity. So why the Mini Einstein Bobble Head? Because of the Night at the Museum 2 movie, of course! Have a peek at the trailer and come back.

OK, the last letter in our F.I.N.E.R. mnemonic – a convenient way to remember what makes a good research question – is R. We covered E for Ethical last time and today we will go over R for Relevant – not relative (pay attention now!).

So, you are now a junior researcher with a newly minted pocket protector and have decided to step back a minute and assess your research question using F.I.N.E.R. Among the 5 characteristics we have discussed, this last one is an important one. Let’s go back to F is for Feasible, where we were thinking of a way to survey your friends about going camping at the end of the semester to celebrate the start of summer. The results of your survey will provide you with important information. Not only will they influence your decision to have the event or not, but they will also allow for promotion of the event as being “really popular” (important to many participants) and for better planning (important to the organizing committee). The results of the survey are “relevant”.

 
Make sure the results of your study will contribute to research knowledge and influence change in your field. Maybe a mentor can help shed some light on it if you are unsure. The broader the relevance of your results the better. This touches on “knowledge translation” and we will chat about that later on in our blog.
 
That’s it for F.I.N.E.R.! Practice it a little this weekend and let me know how it goes.
 
 
See you in the blogosphere,
 
Pascal Tyrrell