
Assessment of Learning

Assessment of learning is the snapshot in time that lets the teacher, students, and their parents know how well each student has completed the learning tasks and activities. There are two primary types of assessment: formative and summative. Both provide information about student achievement. These resources will help you build skills to ensure that your assessments provide useful information for reporting and for improving student learning.

Site: myMHU
Course: Center for Engaged Teaching and Learning
Book: Assessment of Learning

1. Formative, Interim, and Summative Assessments Overview

Formative and summative assessments are both valuable tools for determining student learning, but there are significant differences in the information they provide. In this section we examine formative, interim, and summative assessments and the appropriate uses for each.

Formative Assessments such as interactive classroom discussions, self-assessments, warm-up quizzes, mid-semester evaluations, exit quizzes, etc. monitor student learning.

  • These are short term, as they are most applicable when students are in the process of making sense of new content and applying it to what they already know.
  • The most striking feature of these types of assessments is the immediate feedback, which helps students make changes to their understanding of the material and allows the teacher to gauge student understanding and adapt to the needs of the students.
  • These types of assessments often carry no credit toward the student’s grade.

Interim Assessments such as concept tests, quizzes, written essays, etc., may be more formal and can occur throughout the semester.

  • Typically, students are given the opportunity to revisit and perhaps revise these assessments after they have received feedback.
  • This type of assessment can be particularly useful in addressing the knowledge gaps in student understanding and can help formulate better lesson plans during the course.
  • The feedback to students is quick but not necessarily immediate.
  • These types of assessment may count toward a small percentage of the student’s grade.

Summative Assessments such as midterm or final exams evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark.

  • These assessments are formal and have a direct impact on student grades.
  • The feedback to the student may be limited.
  • Generally students do not have the opportunity to re-take the assessment.
  • The results of these assessments can help students understand where they stand in the class by comparing grades and, if applicable, by looking at descriptive statistics such as the average, median, and standard deviation (a short sketch of these calculations follows this list).
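
The descriptive statistics mentioned above are simple to compute. Below is a minimal sketch, not part of this resource, using Python's statistics module; the exam scores are invented for illustration.

```python
# Illustrative only: descriptive statistics an instructor might report
# alongside summative exam results. The scores below are made up.
from statistics import mean, median, stdev

exam_scores = [62, 71, 74, 78, 80, 83, 85, 88, 91, 95]

print(f"average:            {mean(exam_scores):.1f}")
print(f"median:             {median(exam_scores):.1f}")
print(f"standard deviation: {stdev(exam_scores):.1f}")
```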


1.1. Formative Assessment

Article overview of Formative Assessment from KSU CETL

1.2. Student Practice and Feedback

Introduction

The fifth of the seven principles of learning introduced in How Learning Works is: “goal-directed practice coupled with targeted feedback are critical to learning” (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010). This article describes what practice and feedback for learning entails, shows how the components of this principle relate to each other, provides a description of what quality practice and feedback look like, and offers several research-based strategies supporting this principle.

Practice and feedback are central to the learning process. This is evident in practical learning contexts such as the apprenticeship model used through the centuries in guilds and trades, and in the coaching process. Learning theories also incorporate practice and feedback, as in the cognitive apprenticeship model (Collins, Brown, & Newman, 1989) and the learning process methodology (Watts, 2018). In all these contexts, learning works best when approached deliberately (addressing clear goals for the learning and using performance criteria to focus the learner’s effort) and when it involves constructive feedback (reviewing the performance for strengths as well as concrete steps for improvement). This feedback is then integrated into additional practice in an iterative cycle of performance improvement.

How do the components of practice and feedback relate?    

The driving force enabling the cycle of practice and feedback is clear learning goals. These goals direct the practice, they surface or make visible the critical elements of performance to be observed while learning, and they shape the feedback provided to the learner to guide further practice. This is shown in Figure 1.

Figure 1. Cycle of practice and feedback (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010)

Elements of Quality Practice for Learning

As articulated in The Art of Changing the Brain (Zull, 2002) “when we use things in working memory to do some work, to create something new, then that new thing can become part of our long-term memory.” That is what practice in learning is about. We manipulate information to use it in new ways and create new contexts for its use. We then need to relate that information to other information in multiple ways (generally moving from familiar to more complex or distant applications) in order to be able to generalize the knowledge for later use.

What, specifically, does the research tell us about goal-directed practice? As described in How Learning Works (Ambrose et al., 2010), deliberate practice (Ericsson, Krampe, & Tesch-Römer, 1993) focusing on a specific goal or criterion is an excellent predictor of continued learning in a field. It provides an excellent way to monitor one’s progress.

First, instructors can help their students perform deliberate practice (Ericsson, Krampe, & Tesch-Römer, 1993) by writing clear, measurable learning goals or objectives. These should be stated in terms of what students are expected to do, using language that describes actions that can be physically observed in some way. Rubrics that show how well these goals are achieved, at several different levels of performance, are helpful as well. Next, identifying the appropriate level of challenge is important for quality practice. Learning goals should be reasonable yet challenging, so that students do not become so frustrated that they give up, but do not get bored either. The accelerator model for education settings describes this situation well (Morgan & Apple, 2007). In instructional settings, providing appropriate support or scaffolding at various levels of challenge throughout a learning activity allows students at different levels of performance to be appropriately challenged at the same time. This scaffolding may come in the form of support from other students, from the instructor, or from prompts and information in print or electronic media.

The next element of deliberate practice comes through accumulating a sufficient amount of practice, or time on task. In general, it takes much more than one attempt to learn something well enough to develop working expertise in that area. At first, this expertise may develop slowly, but performance then will begin to grow more quickly with additional practice (especially when scaffolded across increasingly different or more complex contexts utilizing the same knowledge). Later, when students are feeling comfortable with their new learning, additional expertise may appear to develop more slowly again. The learner has now refined their knowledge to a level where perhaps they do not have much more to develop in quality of performance but are simply becoming faster or more nuanced in their ability to apply the knowledge.

In general, because practice time is limited in classroom settings, it is especially important that practice be focused on specific learning targets, and that it is directed at achieving reasonable but challenging goals to help students improve their performance in an efficient manner.

Elements of Quality Feedback for Learning

Fink describes a form of feedback called FIDeLity feedback (2013). This type of feedback is frequent, immediate, discriminating, and loving. When done effectively, FIDeLity feedback is critical to the growth of a learner because it utilizes evidence from their current level of performance to drive future performance by focusing on what made the performance as strong as it was (so that it can be ingrained and repeated), and also where and how the greatest opportunities for improvement can be made (so that the learner can grow in areas that will make the most impact). Further, effective feedback is delivered in an empathetic or loving manner. This is achieved in part when feedback focuses on the performance (learning) rather than the performer (learner). In that way, the feedback can be received as constructive rather than as a judgment of one’s worth or of a perceived ability that is fixed and cannot be improved. Feedback that is formatted in terms of strengths, areas for improvement, and insights (Wasserman & Beyerlein, 2007) is one way to help produce discriminating feedback that can be delivered in an empathetic manner.

What, specifically, does the research tell us about what targeted feedback looks like? As described in How Learning Works (Ambrose et al., 2010), the purpose of targeted feedback is to help learners achieve a desired level of performance by providing a timely map for how to direct their practice.

Communicating progress and directing subsequent effort is critical to effective feedback. An instructor can provide lots of valuable feedback about all aspects of a student’s performance, but too much feedback may result in a student simply feeling overwhelmed. Also, effective feedback does much more than simply point out where a student is struggling. It tells the student what they are doing well that has brought them to their current level of understanding or performance, and it provides specific advice for how they can direct their subsequent efforts to improve their understanding or performance to meet expected criteria.

Timing of feedback is also critical to helping students learn and improve their performance. Providing feedback early and often is generally more effective. For example, if feedback is provided at the end of a module, without the opportunity to utilize feedback through additional practice, most of the feedback may simply be ignored because it cannot be put to immediate use. Similarly, if a large amount of substantive feedback is provided, but with only a short time to use that feedback in subsequent practice, students may feel overwhelmed about where to start and not use any of the feedback.

In general, quality feedback should (1) target only the key knowledge and skills you want students to develop, (2) be delivered frequently and in a timely fashion to provide students ample opportunity to use it, and (3) serve the purpose of guiding further practice by the student. While providing this type of feedback can seem daunting, remember that not all feedback needs to be delivered individually to each student, and not all feedback needs to come from the instructor.

Research-based strategies for effective practice and feedback

There are many ways to help students practice in a deliberate manner, and many ways to provide feedback. Here are three research-based approaches that have been shown to help students learn efficiently and can be implemented in practical ways.

  • Guide practice by providing a rubric, and use that rubric to provide feedback at multiple levels (peer, group, and individual). Keep this feedback targeted by prioritizing just a few of the most important dimensions of the performance, or perhaps even a single critical dimension at a time.
  • Provide exemplars illustrating strong student work and also examples illustrating common errors/pitfalls. Couple these examples with sample feedback that balances strengths and areas for improvement.
  • Design scaffolded practice that gradually builds in complexity and/or moves into more and more challenging contexts. Further, combine this with the requirement that students show how they are using the feedback provided to improve their performance as they continue practicing.

Summary

The learning process requires deliberate practice and targeted feedback. For effective learning, both practice and feedback should align with specific learning goals or objectives based on observable actions performed while learning. These goals lead to focused practice that is challenging and is accumulated over time. Further, these goals provide guidance for observers to deliver constructive feedback that is future oriented and timed to help the learner perform better when they next practice. Together, goal-driven practice and constructive feedback help learners to optimize their performance with a reasonable amount of effort.

References

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching (1st ed.). San Francisco, CA: Jossey-Bass.

Collins, A., Brown, J., & Newman, S. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406. https://doi.org/10.1037/0033-295X.100.3.363

Fink, L. D. (2013). Creating significant learning experiences: an integrated approach to designing college courses (Revised and updated edition). San Francisco: Jossey-Bass.

Morgan, J., & Apple, D. K. (2007). Accelerator model. In S. W. Beyerlein, C. Holmes, & D. K. Apple (Eds.), Faculty guidebook: A comprehensive tool for improving faculty performance (4th ed.). Lisle, IL: Pacific Crest.

Wasserman, J., & Beyerlein, S. (2007). SII Method for Assessment Reporting. In Faculty Guidebook: A Comprehensive Tool for Improving Faculty Performance (4th ed., p. 2). Pacific Crest.

Watts, M. (2018). The Learning Process Methodology: A universal model of the learning process and activity design. International Journal of Process Education, 9(1), 37–48.

Zull, J. E. (2002). The art of changing the brain: Enriching teaching by exploring the biology of learning (1st ed.). Sterling, VA: Stylus Publishing.

Source: https://cetl.kennesaw.edu/practice-and-feedback 

1.3. Best Practices for Designing and Grading Exams

The most obvious function of assessment methods (such as exams, quizzes, papers, and presentations) is to enable instructors to make judgments about the quality of student learning (i.e., assign grades). However, the method of assessment also can have a direct impact on the quality of student learning. Students assume that the focus of exams and assignments reflects the educational goals most valued by an instructor, and they direct their learning and studying accordingly (McKeachie & Svinicki, 2006). General grading systems can have an impact as well. For example, a strict bell curve (i.e., norm-referenced grading) has the potential to dampen motivation and cooperation in a classroom, while a system that strictly rewards proficiency (i.e., criterion-referenced grading) could be perceived as contributing to grade inflation. Given the importance of assessment for both faculty and student interactions about learning, how can instructors develop exams that provide useful and relevant data about their students' learning and also direct students to spend their time on the important aspects of a course or course unit? How do grading practices further influence this process?

Guidelines for Designing Valid and Reliable Exams

Ideally, effective exams have four characteristics:

  • Valid (providing useful information about the concepts they were designed to test),
  • Reliable (allowing consistent measurement and discriminating between different levels of performance),
  • Recognizable (instruction has prepared students for the assessment), and
  • Realistic (concerning time and effort required to complete the assignment) (Svinicki, 1999).

Most importantly, exams and assignments should focus on the most important content and behaviors emphasized during the course (or particular section of the course). What are the primary ideas, issues, and skills you hope students learn during a particular course/unit/module? These are the learning outcomes you wish to measure. For example, if your learning outcome involves memorization, then you should assess for memorization or classification; if you hope students will develop problem-solving capacities, your exams should focus on assessing students’ application and analysis skills.  As a general rule, assessments that focus too heavily on details (e.g., isolated facts, figures, etc.) “will probably lead to better student retention of the footnotes at the cost of the main points" (Halpern & Hakel, 2003, p. 40). As noted in Table 1, each type of exam item may be better suited to measuring some learning outcomes than others, and each has its advantages and disadvantages in terms of ease of design, implementation, and scoring.

Table 1: Advantages and Disadvantages of Commonly Used Types of Achievement Test Items

True-False
  • Advantages: Many items can be administered in a relatively short time. Moderately easy to write; easily scored.
  • Disadvantages: Limited primarily to testing knowledge of information. Easy to guess correctly on many items, even if material has not been mastered.

Multiple-Choice
  • Advantages: Can be used to assess a broad range of content in a brief period. Skillfully written items can measure higher order cognitive skills. Can be scored quickly.
  • Disadvantages: Difficult and time consuming to write good items. Possible to assess higher order cognitive skills, but most items assess only knowledge. Some correct answers can be guesses.

Matching
  • Advantages: Items can be written quickly. A broad range of content can be assessed. Scoring can be done efficiently.
  • Disadvantages: Higher order cognitive skills are difficult to assess.

Short Answer or Completion
  • Advantages: Many can be administered in a brief amount of time. Relatively efficient to score. Moderately easy to write.
  • Disadvantages: Difficult to identify defensible criteria for correct answers. Limited to questions that can be answered or completed in very few words.

Essay
  • Advantages: Can be used to measure higher order cognitive skills. Relatively easy to write questions. Difficult for respondent to get correct answer by guessing.
  • Disadvantages: Time consuming to administer and score. Difficult to identify reliable criteria for scoring. Only a limited range of content can be sampled during any one testing period.

Adapted from Table 10.1 of Worthen, et al., 1993, p. 261.

General Guidelines for Developing Multiple-Choice and Essay Questions

The following sections highlight general guidelines for developing multiple-choice and essay questions, which are often used in college-level assessment because they readily lend themselves to measuring higher order thinking skills  (e.g., application, justification, inference, analysis and evaluation).  Yet instructors often struggle to create, implement, and score these types of questions (McMillan, 2001; Worthen, et al., 1993).

Multiple-choice questions have a number of advantages. First, they can measure various kinds of knowledge, including students' understanding of terminology, facts, principles, methods, and procedures, as well as their ability to apply, interpret, and justify. When carefully designed, multiple-choice items also can assess higher-order thinking skills.

Multiple-choice questions are less ambiguous than short-answer items, thereby providing a more focused assessment of student knowledge. Multiple-choice items are superior to true-false items in several ways: on true-false items, students can receive credit for knowing that a statement is incorrect, without knowing what is correct. Multiple-choice items offer greater reliability than true-false items as the opportunity for guessing is reduced with the larger number of options. Finally, an instructor can diagnose misunderstanding by analyzing the incorrect options chosen by students.

A disadvantage of multiple-choice items is that they require developing incorrect, yet plausible, options that can be difficult to create. In addition, multiple-choice questions do not allow instructors to measure students’ ability to organize and present ideas. Finally, because it is much easier to create multiple-choice items that test recall and recognition rather than higher order thinking, multiple-choice exams run the risk of not assessing the deep learning that many instructors consider important (Gronlund & Linn, 1990; McMillan, 2001).

Guidelines for writing multiple-choice items include advice about stems, correct answers, and distractors (McMillan, 2001, p. 150; Piontek, 2008):

  • Stems pose the problem or question.
  • Is the stem stated as clearly, directly, and simply as possible?
  • Is the problem described fully in the stem?
  • Is the stem stated positively, to avoid the possibility that students will overlook terms like “no,” “not,” or “least”?
  • Does the stem provide only information relevant to the problem?

Possible responses include the correct answer and distractors, or the incorrect choices. Multiple-choice questions usually have at least three distractors.

  • Are the distractors plausible to students who do not know the correct answer?
  • Is there only one correct answer?
  • Are all the possible answers parallel with respect to grammatical structure, length, and complexity?
  • Are the options short?
  • Are complex options avoided? Are options placed in logical order?
  • Are correct answers spread equally among all the choices? (For example, is answer “A” correct about the same number of times as “B,” “C,” or “D”?) A short balance-check sketch follows this list.
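
One practical way to run the last check is to tally the answer key before releasing an exam. The following is a hypothetical sketch (the answer key is invented, not from the source) of such a balance check.

```python
# Hypothetical balance check: tally how often each option letter is the
# correct answer across an (invented) answer key.
from collections import Counter

answer_key = ["A", "C", "B", "D", "A", "B", "C", "D", "B", "C", "A", "D"]

counts = Counter(answer_key)
for option in "ABCD":
    share = counts[option] / len(answer_key)
    print(f"option {option}: {counts[option]} items ({share:.0%})")
```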

An example of good multiple-choice questions that assess higher-order thinking skills is the following test question from pharmacy (Park, 2008):

Patient WC was admitted for third-degree burns over 75% of his body. The attending physician asks you to start this patient on antibiotic therapy.  Which one of the following is the best reason why WC would need antibiotic prophylaxis?

a. His burn injuries have broken down the innate immunity that prevents microbial invasion.
b. His injuries have inhibited his cellular immunity.
c. His injuries have impaired antibody production.
d. His injuries have induced the bone marrow, thus activating the immune system.

A second question builds on the first by describing the patient’s labs two days later, asking the students to develop an explanation for the subsequent lab results. (See Piontek, 2008 for the full question.)

Essay questions can tap complex thinking by requiring students to organize and integrate information, interpret information, construct arguments, give explanations, evaluate the merit of ideas, and carry out other types of reasoning  (Cashin, 1987; Gronlund & Linn, 1990; McMillan, 2001; Thorndike, 1997; Worthen, et al., 1993). Restricted response essay questions are good for assessing basic knowledge and understanding and generally require a brief written response (e.g., “State two hypotheses about why birds migrate.  Summarize the evidence supporting each hypothesis” [Worthen, et al., 1993, p. 277].) Extended response essay items allow students to construct a variety of strategies, processes, interpretations and explanations for a question, such as the following:

The framers of the Constitution strove to create an effective national government that balanced the tension between majority rule and the rights of minorities. What aspects of American politics favor majority rule? What aspects protect the rights of those not in the majority? Drawing upon material from your readings and the lectures, did the framers successfully balance this tension? Why or why not? (Shipan, 2008).

In addition to measuring complex thinking and reasoning, advantages of essays include the potential for motivating better study habits and providing the students flexibility in their responses.  Instructors can evaluate how well students are able to communicate their reasoning with essay items, and they are usually less time consuming to construct than multiple-choice items that measure reasoning.

The major disadvantages of essays include the amount of time instructors must devote to reading and scoring student responses, and the importance of developing and using carefully constructed criteria/rubrics to ensure reliability of scoring. Essays can assess only a limited amount of content in one testing period/exam due to the length of time required for students to respond to each essay item. As a result, essays do not provide a good sampling of content knowledge across a curriculum (Gronlund & Linn, 1990; McMillan, 2001).

Guidelines for writing essay questions include the following (Gronlund & Linn, 1990; McMillan, 2001; Worthen, et al., 1993):

  • Restrict the use of essay questions to educational outcomes that are difficult to measure using other formats. For example, to test recall knowledge, true-false, fill-in-the-blank, or multiple-choice questions are better measures.
  • Identify the specific skills and knowledge that will be assessed. Piontek (2008, available online: http://www.crlt.umich.edu/sites/default/files/resource_files/CRLT_no24.pdf) gives several examples of question stems that assess different types of reasoning skills, such as:
    • Generalizations: State a set of principles that can explain the following events.
    • Synthesis: Write a well-organized report that shows…
    • Evaluation: Describe the strengths and weaknesses of…
  • Write the question clearly so that students do not feel that they are guessing at “what the instructor wants me to do.”
  • Indicate the amount of time and effort students should spend on each essay item.
  • Avoid giving students options for which essay questions they should answer. This choice decreases the validity and reliability of the test because each student is essentially taking a different exam.
  • Consider using several narrowly focused questions (rather than one broad question) that elicit different aspects of students’ skills and knowledge.
  • Make sure there is enough time to answer the questions.

Guidelines for scoring essay questions include the following (Gronlund & Linn, 1990; McMillan, 2001; Wiggins, 1998; Worthen, et al., 1993; Writing and grading essay questions, 1990):

  • Outline what constitutes an expected answer.
  • Select an appropriate scoring method based on the criteria. A rubric is a scoring key that indicates the criteria for scoring and the number of points to be assigned for each criterion. A sample rubric for a take-home history exam question might look like the following:

Criteria and levels of performance (0, 1, or 2 points for each criterion):

  • Number of references to class reading sources: 0-2 references (0 points); 3-5 references (1 point); 6+ references (2 points)
  • Historical accuracy: lots of inaccuracies (0 points); few inaccuracies (1 point); no apparent inaccuracies (2 points)
  • Historical argument: no argument made, little evidence for argument (0 points); argument is vague and unevenly supported by evidence (1 point); argument is clear and well-supported by evidence (2 points)
  • Proofreading: many grammar and spelling errors (0 points); few (1-2) grammar or spelling errors (1 point); no grammar or spelling errors (2 points)

Total Points (out of 8 possible):

For other examples of rubrics, see CRLT Occasional Paper #24 (Piontek, 2008).

  • Clarify the role of writing mechanics and other factors independent of the educational outcomes being measured. For example, how does grammar or use of scientific notation figure into your scoring criteria?
  • Create anonymity for students’ responses while scoring, and grade the tests in a random order (e.g., shuffle the pile) to increase the accuracy of the scoring; a minimal sketch of this workflow appears after these guidelines.
  • Use a systematic process for scoring each essay item.  Assessment guidelines suggest scoring all answers for an individual essay question in one continuous process, rather than scoring all answers to all questions for an individual student. This system makes it easier to remember the criteria for scoring each answer.

You can also use these guidelines for scoring essay items to create grading processes and rubrics for students’ papers, oral presentations, course projects, and websites.  For other grading strategies, see Responding to Student Writing – Principles & Practices and Commenting Effectively on Student Writing.
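
To illustrate how the shuffling and rubric-based scoring suggestions above might fit together, here is a minimal sketch. It is not from the cited sources: the criterion names mirror the sample rubric, while the student submissions and the randomly generated scores are invented stand-ins for the instructor's judgment.

```python
# Hypothetical grading workflow: hide student names behind codes, shuffle the
# grading order, then total rubric points (0-2 per criterion, 8 possible).
import random

MAX_PER_CRITERION = 2
CRITERIA = ["references", "historical accuracy", "historical argument", "proofreading"]

# Invented submissions keyed by student name; real essay text would go here.
submissions = {
    "Student A": "essay text ...",
    "Student B": "essay text ...",
    "Student C": "essay text ...",
}

coded = list(enumerate(submissions.items(), start=1))
random.shuffle(coded)  # randomize the order in which essays are graded

results = {}
for code, (name, essay) in coded:
    # The instructor would assign 0-2 points per criterion while reading the
    # essay; random scores stand in for that judgment in this sketch.
    scores = {criterion: random.randint(0, MAX_PER_CRITERION) for criterion in CRITERIA}
    results[name] = sum(scores.values())
    print(f"submission #{code}: {results[name]} / {MAX_PER_CRITERION * len(CRITERIA)} points")

# Reconnect totals to names only after every essay has been scored.
for name, total in results.items():
    print(f"{name}: {total} points")
```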


References

Cashin, W. E. (1987). Improving essay tests. Idea Paper, No. 17. Manhattan, KS: Center for Faculty Evaluation and Development, Kansas State University.

Gronlund, N. E., & Linn, R. L. (1990). Measurement and evaluation in teaching (6th ed.). New York: Macmillan Publishing Company.

Halpern, D. H., & Hakel, M. D. (2003). Applying the science of learning to the university and beyond. Change, 35(4), 37-41.

McKeachie, W. J., & Svinicki, M. D. (2006). Assessing, testing, and evaluating: Grading is not the most important function. In McKeachie's Teaching tips: Strategies, research, and theory for college and university teachers (12th ed., pp. 74-86). Boston: Houghton Mifflin Company.

McMillan, J. H. (2001). Classroom assessment: Principles and practice for effective instruction. Boston: Allyn and Bacon.

Park, J. (2008, February 4). Personal communication. University of Michigan College of Pharmacy.

Piontek, M. (2008). Best practices for designing and grading exams. CRLT Occasional Paper No. 24. Ann Arbor, MI: Center for Research on Learning and Teaching.

Shipan, C. (2008, February 4). Personal communication. University of Michigan Department of Political Science.

Svinicki, M. D. (1999a). Evaluating and grading students. In Teachers and students: A sourcebook for UT-Austin faculty (pp. 1-14). Austin, TX: Center for Teaching Effectiveness, University of Texas at Austin.

Thorndike, R. M. (1997). Measurement and evaluation in psychology and education.  Upper Saddle River, NJ: Prentice-Hall, Inc.

Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass Publishers.

Worthen, B. R., Borg, W. R., & White, K. R. (1993). Measurement and evaluation in the schools. New York: Longman.

Writing and grading essay questions. (1990, September). For Your Consideration, No. 7. Chapel Hill, NC: Center for Teaching and Learning, University of North Carolina at Chapel Hill.

Source: Adapted from CRLT Occasional Paper #24: M. E. Piontek (2008)

1.4. Take Home Exams

Top 5 Tips for Take Home Exams

As you re-think assessments in your course, you may be considering an online take-home exam format for your final exam. We encourage you to consider the following five tips as you prepare your exams:

  1. Consider your course learning outcomes: Have your students already demonstrated that they have met the course learning outcomes? 
    • If your students have already demonstrated that they have met the learning outcomes in the course, can you simplify or reweight the final exam?
    • If you plan a take home exam, highlight how the question or questions align with course outcomes.
    • Create a rubric to help your students see and understand your expectations and to help you with grading.

 

  2. Make your question something that isn’t easily “Googleable”
    • Instead of asking your students to recall or replicate information from the course, try to have your students apply their knowledge. For example, students may be asked to use their own examples to illustrate knowledge of concepts or theories, solve an authentic problem (e.g., a case study), or analyze a process. Compare-and-contrast questions also elicit this type of knowledge.
    • Reflection questions are also effective for take home exams, for they require synthesis of learning, material, concepts, and processes, which prompts your students to think about how this learning has impacted their worldview. Plus, it is much harder to plagiarize this type of response than information drawn straight from the textbook or lecture notes.
    • Word your questions clearly and simply to avoid ambiguity.
    • For a large class, if you had planned a final exam based on content for the entire semester, you could decide to reduce the content by selecting a limited number of topics to cover before the end of classes. You can still have your students complete the final exam at the scheduled time, but deliver it as a timed, open-book exam on CourseLink. Let your students know they are expected to work alone within the given timeframe. In addition, consider developing multiple sets of questions and randomizing them across your students. While this will not guarantee that your students work alone, it will make it more difficult for them to consult other sources for answers (a minimal randomization sketch appears at the end of this section).
    • If applicable, test your question in Google!

 

  3. Make sure your question is time appropriate
    • Depending on how long your students have to respond to the final exam question or questions, make sure they can adequately respond in the time given. It is a delicate balance between giving your students too much time and too little.
    • Set a page or word limit. Think about what suits your context: it might make sense to have a few shorter questions rather than one longer essay-type question.

 

  4. Ask your students to help
    • You could ask your students to submit questions for consideration for the final exam. Keen students like to know that they are a part of the course, and it is also a good way for you to see how they are thinking about topics. In addition, if your students know that their contributions will be considered when creating the take home exam, they may have less anxiety about what will be tested.

 

  5. Set clear expectations
    • Provide clear instructions for how your students should respond to the take home exam (e.g., page/word limit, format, font type, etc.). Specify readings to draw from, if appropriate. Write instructions in a clear format, using bullet points or steps, rather than long paragraphs of instructions. The more information your students have, the more comfortable they will be, the fewer questions they will have, and the better response you get.
    • Reiterate that the exam is time sensitive and they should not spend copious amounts of time reading and researching for the questions. State how long they should actually spend writing.
    • Reiterate that academic conventions of university essay writing apply, if appropriate (e.g., clarify the citation style that your students should use).
Source: https://otl.uoguelph.ca/top-5-tips-take-home-exams
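
As a rough illustration of the randomization suggested in tip 2, the sketch below assigns one of several question sets to each student. It is not from the cited source; the student identifiers and question sets are invented, and a fixed seed is used so the assignment can be reproduced.

```python
# Hypothetical assignment of question sets to students for a take-home exam.
import random

question_sets = {
    "Set 1": ["Apply a course concept to a workplace example of your choosing."],
    "Set 2": ["Compare and contrast two course concepts using your own examples."],
    "Set 3": ["Analyze the case study handout using a framework from the course."],
}

students = ["student01", "student02", "student03", "student04", "student05"]

rng = random.Random(2024)  # fixed seed so the assignment can be reproduced
assignments = {student: rng.choice(list(question_sets)) for student in students}

for student, set_name in assignments.items():
    print(student, "->", set_name)
```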

1.5. Remote Exams and Assessments

Link to Rutgers' Page 

2. Critiquing Students’ Projects

In this section we examine how to critique student projects in a way that is meaningful and useful to the student. The use of rubrics is a major focus of this section.

3. Assessment and Technology

Article:

How to Give Your Students Better Feedback With Technology

4. Grading

In recent years, the idea of traditional grading has come under attack. In this section we examine how to ensure our grading is fair and promotes growth in our students.