Part III

In the following examples, identify the various modes of assessment used and list five of them.

Example 1
(from Fjørtoft, H. (2020), Multimodal digital classroom assessments. Computers & Education, 152:
https://doi.org/10.1016/j.compedu.2020.103892)

Robert’s students found equations difficult, and there was limited time to differentiate instruction. This problem of practice encouraged the design of an MDCA (multimodal digital classroom assessment) practice combining video recordings with pen-and-paper problem solving. The students created instructional videos in mixed-ability pairs, modeling and explaining their thinking while solving equations. Mid- and high-achieving students were recruited to create videos in a room adjacent to the classroom. Cameras captured the students’ think-aloud processes, written explanations, and hand gestures, enabling students to combine the think-aloud activity with visual representations of handwriting and calculation, and body language (e.g., pointing at numbers on paper or screen; see Figs. 1 and 2). The rest of the class was encouraged to use the videos to gain access to the creator students’ mathematical thinking, as well as to prepare at home for upcoming tests. Viewing the videos would allow students to self-assess their own mathematical reasoning by comparing it with the reasoning processes of mid- and high-achieving students.

One classroom observation focused on two students making a video based on a script Robert had provided. The purpose was to create an explanation of how to solve the equation 2x − 2 = x + 5. While filming, the students combined gestures, writing, and verbal explanations to convey their thinking to others. Although the students did not discuss the prospective audience explicitly, their conversation was replete with deictic markers such as “there” and “like this,” and with sequential markers organizing their talk into segments (e.g., “first” and “then”), while they pointed to the written mathematical symbols, indicating the process of mathematical reasoning required.

Deictic gestures are defined as “pointing gestures which are used to point to places in real or abstract space” (Stam & McCafferty, 2008, p. 9). The gestures in the video served two purposes: 1) to enhance the explanations by relating the mathematical notations in the script to the handwritten notes and to the deictic and sequential markers in the verbal instructions, and 2) to solve practical problems during the creation of the video. During the first attempt at filming the explanation, one student exclaimed, “You have to record my hand!” before realizing that such comments would be a distracting feature in the video. During a later attempt, one of the students used gestures off camera to discreetly guide the other student back on task when her attention wandered.

Example 2
(from Fjørtoft, H. (2020), Multimodal digital classroom assessments. Computers & Education, 152:
https://doi.org/10.1016/j.compedu.2020.103892)

Robert’s instruction of Spanish as a third language (L3) suffered from limited time for oral language activities and limited opportunities to document students speaking and listening. Thus, samples of student writing constituted the primary evidence used for assessment purposes. These written assignments were of little value for monitoring students’ progress in speaking and listening skills, and represented threats to validity in assessment. Moreover, given the incremental process of acquiring vocabulary and communication skills in an L3 class, students found self-assessment challenging.

To confront this problem of practice, Robert designed a longitudinal self-assessment MDCA practice called My Secret Identity. This practice required students to choose a Spanish name and record samples of their oral language development, providing him with documentation of their skill progression over an extended time span. The students also recorded videos in pairs, acting out their identities and documenting their ability to participate in conversations that combined language use with gesturing and body language. The digitally stored recordings of the students’ language use increased individual student activity and allowed easier access for assessment purposes, both for Robert and for the students. The original files, with comments and reflections from the teacher and students, were used as part of the assessment: Robert reviewed the samples of language use across the longitudinal time span, and the students used them to document, review, and reflect on their language development.

Robert evaluated this MDCA practice as having several benefits. First, it allowed multiple voices to be heard in the assessment: original files with comments and reflections from the teacher and students provided insight into student metacognition and acted as a stimulus for dialogue and feedback. Second, data and interpretations could be aggregated and disaggregated when necessary. Third, the students were actively engaged in and responsible for documenting their language development, and reviewing these samples over time supported self-assessment and reflection. Finally, the MDCA was manageable for Robert in that the assessment evidence it yielded was useful and easy to collect.

This MDCA provided an alternative to using pen-and-paper tests as a means of holding students accountable for their learning. Robert wanted to “hold students accountable by publishing,” emphasizing the difference between just learning phrases and moving on through the curriculum, and being able to apply language learning through multimodal means. Students did not appear to see this perspective on accountability as demotivating or stressful. Instead, they seemed to appreciate using the videos as opportunities for recall. Reflecting on the process of using the video as part of language learning, one student wrote: “I think we learn a lot, and if we forget how to say our names, for example, then we can just look at the videos.” This response suggests that this MDCA practice could provide a “soft” accountability mechanism, avoiding construct-irrelevant variance stemming from test-induced stress or low effort.

Example 3
(from Fjørtoft, H. (2020), Multimodal digital classroom assessments. Computers & Education, 152:
https://doi.org/10.1016/j.compedu.2020.103892)

Elle designed an MDCA practice encouraging students to create videos simulating a cultural practice in which journalists interview authors on stage. She created a set of assessment criteria describing her expectations for the final product based on authentic examples from mass media and merged them with disciplinary objectives. These criteria were divided into two main categories: content (e.g., biographical information about the author, appropriate use of concepts from literary history, the author’s language style and preferred themes, and an analysis of a literary text) and presentation (e.g., students’ ability to engage in the role, diction, and creativity in performance). Effectively, these criteria operationalized the curriculum objectives and were intended to help students navigate the expectations, thus ensuring alignment of the task and curriculum goals.

(…)

Elle selected one video as an exemplar of how students mimicked genre traits from TV shows. For example, using body language, gaze direction, and camera angles, students showed mastery of how TV show hosts face their audiences. Furthermore, students used standardized spoken language commonly heard in TV shows but rarely used in school or casual conversation. These traits from multimodal forms of representation were integrated with curriculum elements covering the assessment criteria (e.g., using disciplinary concepts such as “contemporary author” and “literary traits” to demonstrate their familiarity with literary concepts, or mentioning specific works by the author to show in-depth knowledge of her literary production). Elle interpreted this set of evidence holistically as an attempt at meeting the broadly defined content objectives in the curriculum and the assessment criteria for this specific task.