SCOTT'S THOUGHTS
I am glad to have you join me once more as we continue our discussion of meeting the ARC-PA’s requirement of 85% or better first-time PANCE pass rates, and how to complete the necessary report if this benchmark is not met. In our last issue, we looked at the first of the ten ARC-PA points to analyze as it correlates to PANCE scores: admissions criteria. Today we move on to the second and third success predictors as defined by ARC-PA: individual course performance, and course and instructor evaluations.
Stratification analysis of academic performance in certain courses often reveals significant differences between students who failed the PANCE and those who performed at higher levels. This can provide guidance about the performance levels within academic courses that may signal risk and warrant additional academic coaching. PA programs can then incorporate the threshold levels identified in both the descriptive and parametric analyses to flag potentially low PANCE performance. This can become part of the program’s student success process, involving success coaching and study skill development.
Our studies have applied descriptive and parametric analysis between the PANCE and both academic course performance and clinical outcomes. For example, we analyzed didactic course performance for a class of 2021, conducted parametric analysis between the PANCE and academic courses, performed aggregate parametric analysis of the classes of 2019–2021, and predicted scores below 400 using aggregate parametric analysis of the classes of 2019–2021.
In one program, first-summer GPA was a significant predictor of performance. The average first-summer GPA of the 12 students who failed the PANCE was 3.10. In addition, students who failed earned a grade of 3.33 or below in an average of 9.25 courses, significantly more than students who performed at higher levels on this stratification scale. The program will therefore use first-summer GPA as an indicator for inclusion in the academic improvement plan process and student coaching, with the threshold set at a first-summer GPA below 3.10.
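The stratification logic described above can be sketched in a few lines of Python. The data, field names, and the 3.10 cutoff below are illustrative only, not the program's actual records:

```python
from statistics import mean

# Hypothetical student records: (first_summer_gpa, passed_pance)
students = [
    (3.05, False), (3.12, False), (3.45, True),
    (3.70, True), (2.98, False), (3.60, True),
]

GPA_THRESHOLD = 3.10  # program-set cutoff from the stratification analysis

# Mean first-summer GPA of the students who failed the PANCE
failed_gpas = [gpa for gpa, passed in students if not passed]
mean_failed_gpa = mean(failed_gpas)

# Flag students below the threshold for the academic improvement plan
at_risk = [gpa for gpa, passed in students if gpa < GPA_THRESHOLD]
```

In practice the same comparison would be run against the full cohort and paired with the course-count stratification before setting the final threshold.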
Correlating individual course scores to the PANCE makes it possible to identify students who score at or below the thresholds, who can then be flagged as potentially at risk and placed on an academic improvement plan. Annually, programs should identify courses that are valid predictors of PANCE performance. These courses can be incorporated into a risk modeling system alongside the courses identified as predicting PANCE scores below 400. We recommend that programs conduct an aggregate analysis each year, using three years of data to strengthen the validity of the system.
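A minimal sketch of how a course could be screened as a PANCE predictor, assuming pooled multi-cohort data. The grades, scores, and the 0.7 validity cutoff are hypothetical:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pooled data from three cohorts:
# one course's grade points vs. PANCE scaled scores
course_grades = [3.0, 3.3, 3.7, 2.7, 4.0, 3.3, 3.0, 3.7]
pance_scores = [380, 420, 510, 350, 560, 440, 400, 500]

r = pearson(course_grades, pance_scores)
# Retain the course in the risk model only if the correlation is strong
is_valid_predictor = r >= 0.7  # hypothetical validity threshold
```

Repeating this screen annually over a rolling three-year window keeps the set of predictor courses current as the curriculum changes.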
This type of analysis has the following aims:
Providing a longitudinal comparison of negative responses on course evaluations over time, since a rising trend for a specific course could indicate a decrease in instructional quality, thereby impacting PANCE performance
Conducting a parallel analysis of instructor evaluations, since an increase in negative responses tied to a particular instructor could likewise indicate a decrease in instructional quality, thereby impacting PANCE performance
When students are asked to evaluate a class, they may respond to multiple rating questions and be permitted to submit “more information” in the form of free-text answers. Free-text answers are harder to pin down statistically, but it is possible to check for recurring themes, such as “buzz words” or complaints that appear more than once. Such responses can be part of the descriptive analysis.
We conducted one analysis using these sample course evaluation questions:
The syllabus/course outline provided accurate and sufficient information about the design and content of the course.
I learned to apply the material covered in this course to improve my thinking, problem-solving, decision-making or skills.
The textbook(s) and other course materials appropriately supported the course content.
The assignments and course activities helped me achieve the learning outcomes of this course.
There were opportunities for active learning (e.g., group work, discussions, and hands-on experiences).
The learning experience stimulated me to think in new ways about the course content.
The instructor was accessible outside of class, either electronically or in person.
Feedback received on my course work helped me understand how to improve in subsequent coursework.
The classroom environment allowed students to freely ask questions and express their views.
I clearly understood how my work in the course would be assessed.
The instructor was prepared for class throughout the term.
The answers to these questions are rated based on “positive” versus “negative” responses. Course evaluations that show increasing percentages of negative responses over time (or a marked increase in negative responses compared to the previous year or to other classes) can indicate the need for examination of the course content and/or its instructor. Benchmarks may be set up to determine at what point a course must be examined for failing to meet student expectations and requirements.
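The benchmarking described above can be sketched as a small screening routine. The courses, yearly rates, and both cutoffs below are hypothetical placeholders for whatever benchmarks a program adopts:

```python
# Hypothetical yearly negative-response rates (%) per course, oldest first
neg_rates = {
    "Clinical Medicine I": [8, 9, 21],
    "Pharmacology": [12, 11, 10],
}

BENCHMARK = 15  # hypothetical: flag any year above 15% negative responses
JUMP = 5        # hypothetical: flag a year-over-year rise above 5 points

flagged = []
for course, rates in neg_rates.items():
    over_benchmark = any(r > BENCHMARK for r in rates)
    sharp_rise = any(b - a > JUMP for a, b in zip(rates, rates[1:]))
    if over_benchmark or sharp_rise:
        flagged.append(course)
```

A flagged course triggers review of its content and/or instructor, not an automatic change; the benchmark only decides what gets examined.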
Tips for analyzing course evaluations:
Once you have established that specific courses are below benchmark, the next step is to triangulate content within the course to specific performance areas within the PANCE.
Analyze the organ systems covered within the course, and then evaluate objective instruments such as the PACKRAT, end-of-rotation (EOR), and end-of-curriculum (EOC) exams for any concordance.
If there is no relationship, do not use course-improvement measures as the operational modification.
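The triangulation steps above can be sketched as a concordance check. The organ systems, scores, and the 75-point weakness cutoff are hypothetical:

```python
# Hypothetical mean scores by organ system:
# course exams vs. an objective instrument (e.g., PACKRAT categories)
course_scores = {"cardiology": 72, "pulmonology": 85, "endocrine": 70}
packrat_scores = {"cardiology": 58, "pulmonology": 80, "endocrine": 76}

WEAK = 75  # hypothetical "weak performance" cutoff on both instruments

# Concordant weaknesses: low in the course AND low on the objective instrument
concordant = [
    system for system in course_scores
    if course_scores[system] < WEAK and packrat_scores[system] < WEAK
]
# Only concordant systems justify operational changes to the course;
# a weak course area with strong objective scores (or vice versa) does not.
```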
Next time we’ll look at more variables for analysis: program instructional objectives, student summative evaluation results, and remediation practices, all of which can correlate significantly with, and serve as useful predictors of, PANCE performance.
© 2024 Scott Massey Ph.D. LLC