SCOTT'S THOUGHTS
Thank you for joining me once more as I continue my blog series on advanced assessment methods for the data collected by your PA program. Today we will begin seeing the practical applications of advanced assessment, starting with the basics: cause and effect.
When your PA program fails to meet a benchmark, the natural response is to question why. Where did we go wrong? What must change? What must improve? But until we find the root cause of the problem – in fact, until we know whether there even is a root cause – devising a solution will be equivalent to throwing darts at a board. Your solution may hit the target, may come close, or may land on the wrong side of the board altogether, and you will have to wait through another cohort, or several more cohorts, to learn whether your solution solved anything.
Instead, I propose using advanced assessment to find relationships among the collected data, to pinpoint where a problem originates, and to define solutions and interventions.
Let us look at some concrete examples.
As we see in this illustration, this program’s PANCE scores in cardiology were below benchmark. The program can ask a natural progression of questions – one leading to the next. Note that at each point we run a statistical analysis, such as a regression, between the data sets to find statistically significant correlations. We will talk about that in upcoming blogs. For now, let us just work through the process of drilling down to root causes.
The PA program asks:
Are we also failing to meet benchmark for cardiology in our EOR or PACKRAT scores?
If no, perhaps the failure to meet PANCE benchmark was an anomaly.
If yes, we look at the next point.
Are our didactic summative module scores in cardiology also low?
If no, perhaps the problem is in how students prepare for the EORs/PACKRATs. Better test preparation and coaching could be an answer.
If yes, we look at the next point.
Are our cardiology course module scores low?
If no, this may once more be a study-skills problem: let us see where our program needs to bolster students’ preparation for didactic summative modules.
If yes, we look at the next point.
Are the evaluations of our clinical medicine course director poor?
If no, once more we return to the previous point. We can focus on metacognitive methodology, for example, to help students understand and apply the material in their cardiology course.
If yes, we look at the next point.
Does NCCPA mapping show that we have insufficient coverage of topics in the curriculum?
If no, then our clinical medicine course director must improve or modify methods of imparting the required information.
If yes, then we modify the curriculum to provide sufficient coverage for the topic.
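The cascade above can be sketched as a sequence of benchmark checks. This is only an illustration: the function name, benchmark values, and metric keys are all hypothetical placeholders, and your program would substitute its own benchmarks and data sources.

```python
# Sketch of the drill-down as sequential checks (hypothetical benchmarks).
def find_root_cause(metrics: dict) -> str:
    """Walk the cascade from the PANCE result back toward the curriculum."""
    if metrics["eor_cardiology"] >= 75:        # EOR/PACKRAT at benchmark?
        return "Possible anomaly in PANCE result"
    if metrics["didactic_summative"] >= 80:    # summative modules at benchmark?
        return "Improve EOR/PACKRAT test preparation and coaching"
    if metrics["course_modules"] >= 80:        # course modules at benchmark?
        return "Bolster preparation for didactic summative modules"
    if metrics["director_eval"] >= 3.5:        # director evaluations acceptable?
        return "Apply metacognitive methods in the cardiology course"
    if not metrics["nccpa_coverage_gap"]:      # NCCPA mapping complete?
        return "Course director should modify teaching methods"
    return "Modify curriculum to cover the missing NCCPA topics"

example = {"eor_cardiology": 72, "didactic_summative": 78,
           "course_modules": 82, "director_eval": 4.0,
           "nccpa_coverage_gap": False}
print(find_root_cause(example))
```

Of course, a real analysis would test the statistical strength of each link rather than rely on raw cutoffs, but the control flow mirrors the reasoning of the drill-down.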
The pieces are not always connected, but this gives you an example of how you can reverse engineer a problem and then trace the downstream effects of each phenomenon. If you find there is a relationship between these aspects, you will note that the arrow goes both ways. Fixing the root cause, wherever it is found, should result in improvement in all subsequent processes.
Here is another example. Let us say you have an 83% first-time taker pass rate on PANCE, which is going to result in an ARC-PA report.
In this scenario, you look at your EOC scores and find that those are below 1450 for the students who failed the PANCE. What does that tell you? The average EOR score is 75% or less for students who failed the PANCE. Then you look at the PACKRATs: scores less than 135 on PACKRAT II; scores less than 110 on PACKRAT I. Finally, you may look at the average score of less than 80 in predictor courses, or the undergraduate GPA of less than 3.25 in four out of six students. You can see the cascade of events goes both ways. It could be that the students are coming in with weaker study skills, which causes a longitudinal effect.
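A simple way to operationalize those thresholds is to flag, for each student, the indicators that fall below cutoff, so intervention can begin before the PANCE. This is a sketch only: the helper name and sample data are hypothetical, and the cutoffs approximate the ones described above.

```python
# Hypothetical threshold table drawn from the scenario above.
THRESHOLDS = {
    "eoc": 1450,         # EOC scaled score
    "eor_avg": 75,       # average EOR percentage
    "packrat2": 135,     # PACKRAT II score
    "packrat1": 110,     # PACKRAT I score
    "predictor_avg": 80, # average score in predictor courses
    "ugpa": 3.25,        # undergraduate GPA
}

def risk_flags(student: dict) -> list:
    """Return the indicators on which a student falls below threshold."""
    return [name for name, cutoff in THRESHOLDS.items()
            if student[name] < cutoff]

# Illustrative student record, not real data.
student = {"eoc": 1420, "eor_avg": 73, "packrat2": 140,
           "packrat1": 105, "predictor_avg": 82, "ugpa": 3.1}
print(risk_flags(student))  # -> ['eoc', 'eor_avg', 'packrat1', 'ugpa']
```

The more indicators a student trips, the earlier and more intensive the support should be; the same table read in reverse shows how an admissions-level weakness can cascade forward.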
Sometimes these analyses are a little like time travel: we investigate the past to divine the reasons for the present. By doing so, we can determine how to help incoming and current students avoid difficulties. Advanced assessment methods let you determine statistically significant correlations so that your remedies are effectively addressing the areas of need.
Be sure to join me for my next blog. I will discuss more ways to examine and determine the correlations of the intersecting elements of data so that your program’s students and initiatives have the best chance for success.
© 2024 Scott Massey Ph.D. LLC