I’ll be honest: It took me a while to come around to the notion of self-assessment. All I could picture was my sneeringly too-cool high-school self giving my apathetically underachieving high-school self A+ after unearned A+.
Can we really trust students to assess themselves? Is a student’s assessment of her own progress or performance reliable? Is it valid? If reliability and validity aren’t guaranteed, then what’s the point? These are important questions to ask. But as long as we think of assessment not just as a tool for bureaucracy and accountability but as an opportunity to empower our learners, and as long as we keep an eye on its limits and its role in a broader assessment system, self-assessment is most certainly a worthy undertaking.
The benefits of self-assessment are numerous and widely attested. The most compelling of these are affective: greater learner autonomy, a sense of empowerment, boosted confidence, an increased sense of responsibility, and stronger motivation number among them. Many of these affective effects are tied to secondary benefits, such as increased attendance and persistence. Another plus is the transparency that comes with self-assessment (especially rubrics, discussed below); students understand the criteria on which they are being scored. Involving students in assessment is also a language-learning opportunity in itself because it requires language that students might otherwise not have an opportunity to use. This includes developing metalanguage (“I used the wrong article before the noun”) and nuanced evaluative language (“My paragraph was good, but it would have been stronger with more supporting details”).
Direct and Indirect Measures
Before moving on to individual methods of self-assessment, it’s a good idea to review the distinction between direct and indirect assessment, which comes into play when choosing the method best suited to your needs.
Speaking and writing, the productive skills, can be assessed directly: if we want to assess writing, we can simply look at a piece of student writing. Listening and reading comprehension, on the other hand, occur inside the head, so barring electrodes or something similarly invasive, we’re stuck with indirect measures and the validity issues attendant thereto. We might assume, for instance, that underlining the main idea is an indicator of reading comprehension, but many students have simply learned tricks for identifying the main idea without actually comprehending the paragraph.
When we as instructors test the receptive skills, we need to take extra care to ensure that we’re assessing what we think we’re assessing. This is even more important when we turn the task of assessment over to students. Some self-assessment techniques are generally better suited to productive skills, and others work for both productive and receptive skills.
Portfolios
Portfolios are a strong option for ongoing self-assessment. A portfolio is a collection of student work, curated by the student to show their progress over a period of time, often accompanied by a written commentary in which the student reflects on that work and progress. Although writing is the skill that most readily lends itself to this format, we can also incorporate written responses to reading and listening tasks into portfolios.
For a long time, speaking portfolios were a comically impractical undertaking on par with having your entire class make a mixtape à la 1986. But emerging technologies make them an increasingly viable option. I recently ran a small pilot of speaking portfolios using SoundCloud, with some promising results.
Scoring Rubrics
Scoring scales, or rubrics, can be an excellent way to introduce self-assessment while controlling for reliability. Who among us hasn’t, in our own studenthood, composed an academic masterwork and anxiously skimmed the professor’s vaguely positive marginalia, only to be puzzled and frustrated by a lackluster letter grade at the end? Surely we sometimes overestimate our own work, but there are also certainly times when grades are influenced as much by the teacher’s moods or whims or the state of his digestion as by the content of the work itself. Rubrics help both teachers and students by tethering scores to sets of observable characteristics.
Again, rubrics are best suited to the productive skills, but they can also add reliability to tasks meant to measure the receptive skills, such as responses to TED Talks or summaries of news articles.
Keep an eye out for future posts on how to design your own scoring rubrics!
Can-Do Checklists
The can-do checklist is a seriously underutilized assessment tool. It’s exceptionally simple to build and customize to your course content, is useful at all levels, and serves a variety of needs, from placement to summative assessment. I recommend grouping very specific abilities (e.g., “I can use uppercase and lowercase letters correctly”) under broader can-do statements that derive directly from course objectives (e.g., “I can write using proper mechanics”). Used as a pre-/post-assessment, such a checklist makes it easy to quantify student progress. The simplicity of this technique is sure to keep students and teachers happy, and the alignment with course goals and objectives will keep admins and funders out of your hair.
Limitations and Further Considerations
As I’ve said, self-assessment has its limits. I use it in conjunction with more conventional assessment methods and with peer assessment (which has many of the same positive effects as self-assessment). What I like about the three methods discussed above is that they can easily be used by both students and teachers: you and your students can score the same work with the same rubric, and you can encourage them to compare their scores with yours. The same goes for can-do inventories, and it’s easy enough to incorporate a section for instructor reflections into a portfolio.
I encourage you to use the comments section below to share some other self-assessment tools and methods that have worked well for you!