SCOTT'S THOUGHTS
Today we continue our review of how to meet the requirements of Appendix 14F of the ARC-PA 5th Edition Standards, which covers the presentation of PANCE outcomes in your PA program: how admissions data, course grades, test grades, and other data points correlate with those outcomes, and which of them predict success or failure on the exam. In this edition of our blog, we will discuss the purpose of correlating PANCE scores with 1) the number of C grades; 2) the number of student remediations; and 3) preceptor ratings.
When a student receives a near-failing or failing grade in a class, remediation comes swiftly; something is clearly wrong. But C grades, which technically imply "average" performance in a classroom setting, tell their own story. Correlating the number of C grades with PANCE scores should indicate whether the relationship is statistically significant. We have already seen how outstanding performance in certain classes can be a strong indicator of PANCE success. If your correlation is statistically significant, a student who receives a C grade might warrant entry into an academic improvement plan.
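As a minimal sketch of this kind of analysis, the snippet below computes a Pearson correlation coefficient between C-grade counts and PANCE scores using only the Python standard library. The cohort numbers are entirely hypothetical and exist only to illustrate the calculation; a real analysis would pull your program's actual records.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical cohort data: number of C grades per student vs. PANCE score.
c_grades = [0, 0, 1, 1, 2, 3, 4, 5]
pance    = [520, 480, 470, 450, 430, 410, 390, 360]

r = pearson_r(c_grades, pance)
print(round(r, 3))  # a strongly negative r: more C grades, lower scores
```

A strongly negative r in your own data would support tying a C grade to automatic entry into an academic improvement plan; an r near zero would argue against it.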
You can also compare the number of remediations with student performance, to determine whether remediation count is significantly correlated with PANCE scores.
Remediation is meant to improve a student's performance and return them to a level playing field. A struggling student may require more than one remediation, and you may find some who require intervention repeatedly. If you see that an increased number of remediations is associated with lower PANCE scores, it should give you pause: is the remediation process working? If these often-remediated students are earning lower PANCE scores, and their performance lags behind that of their peers, there may be a need to examine the process itself.
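To put the "statistically significant" question in concrete terms, one common approach (an assumption on my part, not the only valid test) is to convert the correlation coefficient into a t statistic and compare it against the critical value for the sample's degrees of freedom. The remediation data below is invented for illustration.

```python
import math
from statistics import mean, stdev

def corr_with_t(xs, ys):
    """Pearson r plus the t statistic used to test it against zero."""
    n = len(xs)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    r = cov / (stdev(xs) * stdev(ys))
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return r, t

# Hypothetical data: remediation count vs. PANCE score for 8 graduates.
remediations = [0, 0, 1, 1, 2, 2, 3, 4]
pance        = [510, 495, 470, 460, 445, 430, 400, 380]

r, t = corr_with_t(remediations, pance)
# Two-tailed critical value for df = n - 2 = 6 at alpha = 0.05 is 2.447.
significant = abs(t) > 2.447
print(round(r, 3), round(t, 2), significant)
```

With small cohorts (a single class year), even a sizable r can fall short of significance, so pooling several cohorts before running the test is worth considering.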
Finally, you can look at preceptor evaluations. That regression might show that preceptor evaluations (which are often inflated) are not predictors of success. I have seen many cases in which a student's overall preceptor rating in the clinical year was impressively high, but the student's cognitive performance was not strong enough to pass.
In the case below, I aggregated all preceptor evaluation scores for students who failed versus students who passed. The difference between the two groups was very small. Breaking the data down by rotation might surface student or preceptor comments that shed some light; in general, however, this measure is not particularly helpful.
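The pass/fail comparison above can be sketched as a simple difference of group means. The evaluation scores and the 0-100 scale below are hypothetical placeholders, chosen only to show why inflated ratings produce such a small gap.

```python
from statistics import mean

# Hypothetical aggregate preceptor evaluation scores (0-100 scale)
# for students who passed vs. failed the PANCE.
passed = [92.1, 90.4, 93.0, 91.5, 89.8, 92.7]
failed = [90.2, 91.0, 89.5, 90.8]

diff = mean(passed) - mean(failed)
pct_diff = diff / mean(passed) * 100
print(round(diff, 2), round(pct_diff, 1))  # a gap of only a point or two
```

When nearly every student is rated in the 90s regardless of outcome, the ceiling effect leaves almost no variance for a regression to work with, which is exactly why these ratings tend not to predict PANCE success.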
We collected the following data from one program's students who failed the PANCE, to shed light on their own perceptions of why they failed and how they might have prepared better. Conducting such a survey or interview can show where your program might provide better preparation and coaching.
Here are some actionable items we discovered, based on the qualitative feedback from each of the students:
Student One did not use long enough practice exams, which led to a loss of focus. Student One also did not take any of the NCCPA practice exams, which could have provided feedback before sitting for the actual exam.
Student Three commented that ROSH questions gave X a false sense of confidence because of the high scores, and that X did not use any of the NCCPA practice exams. Student Three also reported not understanding how to use break time effectively.
Student Four admitted that X had a false sense of confidence due to an EOC score that met the benchmark according to PAEA. This student also described feeling unprepared for the cognitive fatigue.
Student Five felt that ExamMaster questions were more representative of the PANCE than the ROSH review, and described the PANCE review course as mostly buzzwords and not particularly helpful. This student did use the NCCPA practice exams effectively to monitor improvement.
The program made the following modifications in response to student feedback:
Resetting the minimum passing score for the EOC to 1475, based on a comparison of students who passed versus those who failed
Incorporating use of the NCCPA practice exams as a component of the student success process.
Increasing coaching to better prepare students for the rigor and cognitive fatigue of the PANCE exam.
Appendix 14's Sections E and F relate mostly to PANCE scores. Enhancing your curriculum in light of PANCE-related data involves multifaceted elements, as displayed in the following spider diagram. There are many elements to consider, and when you generate data-driven modifications you can tie each one to a specific data point, which then leads to an action plan for an area needing improvement. You can then follow up through your committee structure.
In our next blog we will begin examining Appendix 14G, in which the commission requires us to examine the sufficiency and effectiveness of principal and instructional faculty and staff.
© 2024 Scott Massey Ph.D. LLC