UNCW has adopted an approach to assessing its Learning Goals at the University Studies level that uses assignments that are a regular part of the course content. A strength of this approach is that the student work products are an authentic part of the curriculum, and hence there is a natural alignment often missing in standardized assessments.
The student work products collected are scored independently on a common rubric by faculty scorers. The results of this scoring provide quantitative estimates of students' performance and qualitative descriptions of what each performance level looks like, which together yield valuable information for the improvement process. The usual disadvantage of this approach, relative to standardized tests, is that results cannot be compared across institutions. This disadvantage is mitigated in part by the use of the AAC&U VALUE rubrics for many of the learning goals.
Metarubrics, such as the VALUE rubrics, are constructed so that they can be used to score a variety of student artifacts across disciplines, across universities, and across preparation levels. But their strength is also a weakness: the generality of the rubric makes it more difficult to use than a rubric that is created for one specific assignment. To address this issue, a process must be created that not only introduces the rubric to the scorers, but also makes its use more manageable.
Volunteer scorers attend a workshop on the rubric they will be using. During the workshop, scorers review the rubric in detail and are introduced to the general assumptions adopted for applying the rubrics. After reviewing the rubric and initial assumptions, the volunteers read and score sample student work products. Scoring is followed by a detailed discussion, so that scorers can better see the nuances of the rubric and learn what fellow scorers saw in the work products. From these discussions, scorer norming begins and assumptions are developed for applying the rubric to each specific assignment.
After the norming event, scorers score the student work products independently. Each scorer's set of work products includes common papers that two scorers score individually; these common papers are used to measure interrater reliability.
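The source does not specify which interrater reliability statistic is used, but a common choice for two raters assigning categorical rubric levels to the same papers is Cohen's kappa, which corrects raw agreement for chance. The sketch below is an illustration under that assumption; the rater names and scores are hypothetical.

```python
from collections import Counter

def cohens_kappa(scores_a, scores_b):
    """Cohen's kappa for two raters assigning categorical rubric levels
    to the same set of common papers."""
    assert len(scores_a) == len(scores_b) and scores_a, "need paired scores"
    n = len(scores_a)
    # Observed agreement: fraction of papers where the two raters match.
    observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    # Expected chance agreement, from each rater's marginal distribution.
    counts_a, counts_b = Counter(scores_a), Counter(scores_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two scorers rating ten common papers on a 0-4 rubric.
rater1 = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater2 = [3, 2, 4, 2, 1, 2, 3, 4, 3, 3]
print(round(cohens_kappa(rater1, rater2), 2))  # prints 0.71
```

By convention, kappa values above roughly 0.6 are often read as substantial agreement, though any threshold should be set in advance by the assessment team.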
A general education assessment report is written annually by the General Education Coordinator. The report is first presented to the University Assessment Council, which may make recommendations to the Provost's Office based on the results.
Bodies responsible for implementing program improvements based on general education assessment results include the Provost's Office, the Faculty Senate, the University Studies Advisory Committee, and the faculty in departments that offer University Studies courses.