statistical strength, lack of context, and efficiency of administration and instead focusing on robust inferences about learning based on learner performance; relationally-focussed, formative practices; and the messiness of transformative approaches to assessment that are deeply tied to pedagogy and the nature of disciplinary knowledge (Moss, 1995).

There are a number of possible explanations for the drive to emulate LSAs in classroom assessment, including:


Thinking about assessment in higher education begins with understanding what assessment is. The National Research Council (2001) defines assessment broadly as a process of reasoning from evidence and, more specifically, as resting on three interdependent components known as the assessment triangle. The three critical components of the assessment triangle are cognition (a model of how learners develop competence in the domain), observation (tasks or situations that elicit evidence of learning), and interpretation (methods for drawing warranted inferences from that evidence).

<aside> 1️⃣ Among the most important roles of instructors in higher education is the task of certifying that each individual learner in their course has achieved a particular standard in relation to the intended outcomes of the course. The importance of this determination reflects how student achievement data are used not only in summative course assessments, but also in predicting future success, awarding scholarships, and determining acceptance into competitive programs (Guskey & Link, 2019). Instructors are often given very wide latitude in how they assign a learner's final grade, usually expressed as a percentage or a letter (A+ to F).

</aside>

Lipnevich et al. (2020) report that 78% of the first-year undergraduate syllabi they examined relied heavily on exams to elicit evidence of learning, suggesting a focus weighted primarily toward the "observation" pillar of the assessment triangle. This imbalance may weaken any conclusions drawn from the evidence. There are, however, faculty who approach assessment differently, eschewing the narrow focus on the "observation" pillar for a more balanced view that begins with the nature of the content or skills to be learned. That understanding is then aligned with robust opportunities for learners to practice and demonstrate their new knowledge, leading to warranted inferences in the form of formative feedback to the learner and instructor or, as appropriate, a summative rating.

The COVID-19 pandemic forced the vast majority of higher education institutions worldwide to pivot to some form of online delivery of course materials and interactions, including summative and formative assessments. However, as those who have worked in distance education know well, it is difficult to replicate in-person classroom conditions, whether for lecturing or for administering exams, in a remote environment. Test security is a significant challenge, leaving traditional selected-response exams vulnerable to examinees using outside help. Through this research, I hope to identify methods of assessment, both formative and summative, that do not rely on high-security selected-response exams, and at the same time to understand the impact and quality of relational and human-centred assessment practices.

Assessment is foundational to teaching and learning in higher education. Instructors' beliefs about the purposes of assessment, and about how to implement an assessment plan, shape the way they lead learners through their courses [@bairdAssessmentLearningFields2017]. At the same time, assessment practices carry significant weight for learners, whose progression through higher education and into society is strongly linked to the ratings they are assigned in the assessment process.