Assessment at UNCW

General Education Assessment

Methodology

Introduction

Each academic year, the following questions are examined:

  • What are the overall abilities of students taking University Studies courses with regard to the UNCW Learning Goals?
  • What are the relative strengths and weaknesses within the subskills of those goals?
  • Are there any differences in performance based on course delivery method or demographic and preparedness variables, such as gender, race or ethnicity, transfer students vs. freshman admits, honors vs. non-honors students, total hours completed, or entrance test scores?
  • What are the strengths and weaknesses of the assessment process itself?

UNCW has adopted an approach to assessing its Learning Goals that uses assignments drawn from regular course content. One strength of this approach is that the student work products are an authentic part of the curriculum, so there is a natural alignment often missing in standardized assessments. Students are motivated to perform at their best because the assignments are part of the course content and count toward the course grade. The assessment activities require little additional effort from course faculty because the assignments used for the process are part of their regular coursework. An additional strength of this method is the faculty collaboration and full participation in both the selection of the assignments and the scoring of the student work products. The student work products collected for General Education Assessment are scored independently on a common rubric by trained scorers. The results of this scoring provide quantitative estimates of students' performance and qualitative descriptions of what each performance level looks like, which together provide valuable information for the improvement process.

The disadvantage of this approach, compared with standardized tests, is that results cannot be compared directly with those of other institutions. This disadvantage is mitigated in part by the use of the AAC&U VALUE rubrics for many of the Learning Goals. It is also addressed by the regular administration of standardized assessments, in particular the CLA and the ETS Proficiency Profile, which give the university the opportunity to make national comparisons.

Tools

For the UNCW Learning Goals of Critical Thinking, Information Literacy, Inquiry, Thoughtful Expression (Oral), and Thoughtful Expression (Written), the Association of American Colleges and Universities (AAC&U) Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics are used, some with modifications based on past scorer feedback. The VALUE rubrics, part of the AAC&U Liberal Education and America's Promise (LEAP) initiative, were developed by over 100 faculty and other university professionals. Each rubric contains the common dimensions of its goal and the most broadly shared characteristics of quality for each dimension. Locally created rubrics are used for assessing Diversity, Global Citizenship, and Inquiry (Data Analysis).

Rubrics can be viewed here.

Benchmarks

The VALUE rubrics and most locally created rubrics are designed on a 0 to 4 scale. According to AAC&U, "the capstone level reflects the demonstration of achievement for the specific criterion for a student who graduates with a baccalaureate degree. Milestones [2 and 3] suggest key characteristics of progressive learning as students move from early in their college experience to the completion of the baccalaureate degree" (Rhodes, 2010, p. 2). Based on the design of these rubrics, UNCW uses the capstone level 4 as the benchmark for attainment by graduating seniors. For first- and second-year students assessed in lower-level general education courses, milestone level 2 is the benchmark for achievement. The rationale is that performance at milestone level 2 indicates that, given additional opportunities to learn and practice, students are on track to achieve level 4 by the time of graduation. Most locally created rubrics were designed to follow these same levels.

Sample Selection

The sampling method lays the foundation for the generalizability of the results. No single part of the University Studies curriculum, or of the university experience as a whole, is solely responsible for helping students meet the UNCW Learning Goals; these skills are practiced in many courses. Each component of University Studies has its own student learning outcomes, and each of these outcomes is aligned to the Learning Goals. The University Studies Curriculum Map displays this alignment. For General Education Assessment purposes, courses are selected that not only address the learning goals but are also taken by large numbers of students, so that the sample represents as closely as possible the work of "typical" UNCW students.

Within each course, sections are divided by delivery mode (classroom vs. fully online), instructor status (full-time vs. part-time), and honors vs. regular designation. Within each subgroup, sections are selected randomly in quantities that reflect as closely as possible the overall breakdown of sections by these criteria. Within each selected section, all student work products are collected, and a random sample of the work products is drawn (sometimes consisting of all papers).

Prior to the start of the semester, the General Education Assessment staff meets with course instructors to familiarize them with the relevant rubric(s). Instructors are asked to review their course content and assignments and to select one assignment that they feel fits some or all of the dimensions of the rubric(s) being used. Faculty of the selected course sections are instructed to include in the course syllabus the General Education Assessment Statement for Students, which discloses the use of student work for the purpose of General Education Assessment. The General Education Assessment office retrieves a copy of the course roster from Banner in order to compile the student demographic information held in university records for analysis based on demographic and preparedness variables.
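
For illustration only, the sketch below (in Python) shows one way the proportional, stratified selection of sections described above could be carried out in principle. The section records, stratum definitions, function name, and sample size are hypothetical assumptions made for this example; this is not a description of the office's actual selection tooling.

    import random
    from collections import defaultdict

    # Hypothetical section records: (section_id, delivery, instructor_type, honors)
    sections = [
        ("ENG101-001", "classroom", "full-time", False),
        ("ENG101-002", "classroom", "full-time", False),
        ("ENG101-003", "online", "part-time", False),
        ("ENG101-004", "classroom", "part-time", True),
        ("ENG101-005", "online", "full-time", False),
        ("ENG101-006", "classroom", "full-time", False),
    ]

    def stratified_section_sample(sections, total_to_select, seed=0):
        """Pick sections so each stratum (delivery x instructor type x honors)
        is represented roughly in proportion to its share of all sections."""
        rng = random.Random(seed)
        strata = defaultdict(list)
        for sec in sections:
            strata[sec[1:]].append(sec)  # stratum key: (delivery, instructor_type, honors)

        selected = []
        for members in strata.values():
            share = len(members) / len(sections)
            # Proportional allocation; keep at least one section per stratum.
            k = max(1, round(share * total_to_select))
            selected.extend(rng.sample(members, min(k, len(members))))
        return selected

    for sec in stratified_section_sample(sections, total_to_select=3):
        print(sec)

In practice, the same proportional logic is applied again within sections when drawing random samples of student work products.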

Scoring Process

Scorer Recruitment and Selection

Scorers are recruited from UNCW faculty and, in some cases, teaching assistants. A recruitment email is sent to department chairs, sometimes all chairs university-wide and sometimes only chairs in selected departments (based on the Learning Goals and course content being assessed), asking them to forward the email to all full- and part-time faculty in their department. The desire is to include reviewers from a broad spectrum of departments. The intent is to give all faculty an opportunity to participate, to learn about the process and rubrics, and to see the learning students experience as they begin their programs. However, in some cases the scoring is best done by discipline experts. It is also important to try to have at least one faculty member from each of the departments whose student work products are being reviewed.

Scoring Process

Metarubrics, such as the VALUE rubrics, are constructed so that they can be used to score a variety of student artifacts across disciplines, across universities, and across preparation levels. Their strength is also a weakness: the generality of the rubric makes it more difficult to use than a rubric created for one specific assignment. To address this issue, a process must be created that not only introduces the rubric to the scorers but also makes its use more manageable. The following describes the process for written work products; the process is similar for oral presentations, with the major differences being the length of the norming session and the scorers' access to student work (for oral projects, scorers either observe the presentations in real time or view video recordings). Volunteer scorers initially attend a two- to two-and-a-half-hour workshop on one rubric. During the workshop, scorers review the rubric in detail and are introduced to the following assumptions adopted for applying the rubrics to basic studies work products.

Initial assumptions

  1. When scoring, we are comparing each separate work product to the characteristics we want the work of UNCW graduates to demonstrate (considered to be Level 4).
  2. Goals can be scored independently from each other.
  3. Relative strengths and weaknesses within each goal emerge through seeking evidence for each dimension separately.
  4. Common practice and the instructor's directions guide the scorer's interpretation of the rubric dimensions in relation to each assignment.
  5. Additional assumptions will need to be made when each rubric is applied to individual assignments.

After reviewing the rubric and initial assumptions, the volunteers read and score two to four student work products. Scoring is followed by a detailed discussion so that scorers can better see the nuances of the rubric and learn what fellow scorers see in the work products. From these discussions, assumptions can be developed for applying the rubric to each specific assignment.