Developing Assessments/Grading for Project-Based Assignments

As I have been exploring the space of Computer Science and Technology instructors in K-12, I came across a wonderful blog by Lauren Blankenship called GeekyMomBlog. She has all sorts of insightful thoughts and discussion there, and her style is easy to read with a geek edge.

Recently, she posed a question about assessing/grading long-term projects and skill mastery. I have some thoughts on the subject that I wanted to elaborate on at greater length than her blog’s comments section allows.

The short answer is to be as transparent as possible about the expectations and parameters of the assignment. I believe that even with small projects or tests, this can be done in a fun but fair and systematic way. I’ll first talk about assignment criteria & weighting, then grade expectations, then the format for assessment.

  1. Assignment Criteria & Weighting. By this I mean the criteria I am going to use to assign points to aspects of the submission, and how I’m going to weight the criteria with respect to one another (if need be). Here are some choices, but the list is certainly not exhaustive (a small scoring sketch follows this list):
    • Conformance to Assignment Theme
    • Originality of Idea
    • Coverage of Tool-Use or Concepts to be Demonstrated
    • Practical/Business Merit: the project doesn’t have to be commercially successful, only that its merits are articulated and valid.
    • Technical Merit: how well was the idea executed?
    • Clarity: How well is the project conveyed or documented?
    • Function Achievement: how well does the computation/programming/prototyping achieve the project goal?
    • Planning and Communication (for semester length projects): How well structured was the progress?
    • Style: how polished and consistent is the craftsmanship of the work?
  2. Grade Expectations (pun intended, since “Great Expectations” is about Pip’s growth and development, gee, aren’t I really clever). When I make an assignment, I decide on the criteria list and give the students a general sense of the weighting scheme. If the project is significant enough and I have the time, I will meet with students or work in class to come up with ranges for the outcome, so the students can understand how to temper their effort to achieve maximum results. It also provides some cover: if the idea is not so great, it tempers everyone’s expectations about getting an “A” grade, yet it gives them motivation to improve their preparation before launching into the project. Again, the weighting factors are key, so a more mundane assignment could depend much more heavily on technical merit and coverage than on the more subjective criteria, such as originality of idea, style, etc.
  3. Format for Assessment. The plain way, of course, is to present the point assignments per category. A fun twist might be to have Olympic-style judging, perhaps even filling out scorecards in different judging styles to let the student see variations in assessment style; one might even have a peer-assessment component (or criterion), if you really are looking for ways to muck things up. Heck, one could even have the class crowdsource their project ideas.
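
To make the weighting idea concrete, here is a minimal sketch in Python of how per-criterion scores, weights, and even an Olympic-style panel might be combined. The criteria names, weights, and example scores are my own illustrative choices, not a fixed rubric.

```python
# A minimal sketch of the weighted-criteria idea above. The criteria names,
# weights, and example scores are illustrative choices, not a fixed rubric.

CRITERIA_WEIGHTS = {
    "conformance_to_theme": 0.10,
    "originality": 0.15,
    "coverage_of_concepts": 0.20,
    "technical_merit": 0.25,
    "clarity": 0.15,
    "style": 0.15,
}  # weights sum to 1.0

def weighted_score(scores, weights):
    """Combine per-criterion scores (0-100) into one weighted grade."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores.get(c, 0.0) for c in weights)

def olympic_score(judge_scores):
    """Olympic-style panel: drop the highest and lowest scores, average the rest."""
    trimmed = sorted(judge_scores)[1:-1]
    return sum(trimmed) / len(trimmed)

if __name__ == "__main__":
    project = {
        "conformance_to_theme": 90,
        "originality": 70,
        "coverage_of_concepts": 85,
        "technical_merit": 80,
        "clarity": 75,
        "style": 88,
    }
    print(f"Weighted grade: {weighted_score(project, CRITERIA_WEIGHTS):.1f}")
    print(f"Olympic-style panel score: {olympic_score([78, 92, 85, 70, 88]):.1f}")
```

The same weights could be published with the assignment, so the "grade expectations" conversation in item 2 is just a matter of plugging in plausible scores.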

I haven’t taught any AP or standardized-test classes, but I think that even for more traditional tests, it can be useful to convey to students that the test-taking process should be viewed analytically (another purpose of the more elaborate approach I describe). It would be wonderful if we could get students to estimate the time various questions will take and, based on their confidence about getting a correct answer (or maximum points), decide which questions are worth pursuing in which order.
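
One small sketch of that triage reasoning, with invented numbers: rank questions by expected points per minute, where the expectation comes from the student’s own confidence of earning full credit.

```python
# Rank test questions by expected points per minute. The labels, point values,
# time estimates, and confidence figures are all invented for illustration.

questions = [
    # (label, max_points, estimated_minutes, confidence_of_full_credit)
    ("Q1", 10, 5, 0.9),
    ("Q2", 25, 20, 0.5),
    ("Q3", 15, 8, 0.7),
    ("Q4", 20, 25, 0.3),
]

def expected_points_per_minute(q):
    _, points, minutes, confidence = q
    return points * confidence / minutes

# Work the questions in order of best expected return on time spent.
for label, points, minutes, confidence in sorted(
        questions, key=expected_points_per_minute, reverse=True):
    rate = points * confidence / minutes
    print(f"{label}: expect {points * confidence:.1f} pts in {minutes} min "
          f"({rate:.2f} pts/min)")
```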

You might think that all this effort means the process of grading takes a lot longer, but I believe that having the parameters and specifications spelled out should make it easier. You might feel that being so transparent could allow the students to game the system, or even create an app that lets students play with values for the criteria to produce an expected result. However, if we are teaching computational thinking and what I call computational fluency, the student’s analytic approach in trying to game the system is valuable in itself. Of course I’m speaking without much practical experience, but we need to have goals to shoot for.

[Update, January 10, 2015: An interesting possible assignment/application would be to create an application that one could use to maximize their expected grade based on the decision factors/criteria and the weighting. I actually implemented such a tool for a different purpose–that of computing purchasing decisions for capital equipment–but the decision process is very much the same, actually simpler, for the grading scheme I’ve presented. If you want to try out the tool, here is the link to it, and here is a link for creating a free user id]
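
For readers who want the flavor of such a tool without following the link, here is a toy sketch of the idea: greedily spend each available work hour on the criterion where an extra hour buys the most weighted points. The weights, starting scores, and points-per-hour figures are invented for illustration and are not taken from the actual tool; a greedy hour-by-hour allocation is enough here because each hour’s weighted gain is assumed independent of the others.

```python
# Toy "maximize expected grade" planner. All numbers below are made up.

WEIGHTS = {"technical_merit": 0.4, "clarity": 0.3, "originality": 0.3}
START_SCORES = {"technical_merit": 60.0, "clarity": 50.0, "originality": 70.0}
POINTS_PER_HOUR = {"technical_merit": 4.0, "clarity": 6.0, "originality": 2.0}
MAX_SCORE = 100.0

def allocate_hours(hours):
    """Greedily assign each hour to the criterion with the largest weighted gain."""
    scores = dict(START_SCORES)
    plan = {c: 0 for c in WEIGHTS}
    for _ in range(hours):
        gains = {
            c: WEIGHTS[c] * min(POINTS_PER_HOUR[c], MAX_SCORE - scores[c])
            for c in WEIGHTS
        }
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # every criterion is already maxed out
        scores[best] = min(MAX_SCORE, scores[best] + POINTS_PER_HOUR[best])
        plan[best] += 1
    grade = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    return plan, grade

if __name__ == "__main__":
    plan, grade = allocate_hours(10)
    print("Hours per criterion:", plan)
    print(f"Expected weighted grade: {grade:.1f}")
```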

[Update, February 6, 2015: Someone sent me this nice article by Grant Wiggins about creating a rubric for assessment–are they trying to tell me something?]

In the end, if my assessment strategy can be gamed, or if my parameters do not adequately cover the skills students should master, is that their fault, or is it my fault for trying to take shortcuts with a poor assessment strategy?

I’d love to hear your comments.
