The video below captures a nine-minute problem-solving period in my Honors Pre-Calculus class. The instructional method blends guided and collaborative instruction and includes evidence of my monitoring techniques, pacing, and verbal cues, as well as students' academic ownership and the classroom environment.
The problem students solved was a warm-up embedded within the scope of the wheelchair ramp project described in the "Content Knowledge" section of the dossier. Students used a variety of solving methods to calculate missing dimensions of a wheelchair ramp. They would replicate this process in their projects, in which they designed a 3D model of a ramp for a space on our school's campus. Note that the tablet I am holding wirelessly projects to the screen at the front of the room. As I write on the tablet, students can see what I write on the board, which allows me to teach and model from various places in the room.
Download the plan for reference.
My background in data analytics drives my desire to make research-backed, data-driven decisions in my instruction. The most easily quantifiable practice for collecting this data is the pre/post assessment approach. However, it is worth emphasizing that over-testing students is antithetical to my teaching philosophy. Therefore, testing, in my classroom, is a standardized tool for tracking growth, whereas project-based learning is my primary method for evaluating and promoting student performance.
According to David Dellwo’s 2010 study, a standard pre/post assessment strategy does not effectively capture intermediate knowledge loss or acquisition the way a multi-stage approach can. This theory aligns well with my current routine of offering one to two quizzes or other formative assessments within a unit. The timeline, for example, may look like a pre-assessment at the beginning of the unit, a brief quiz every two weeks, and a post-assessment at the end of the unit that is nearly identical to the pre-assessment.
Dellwo also provides metrics for analyzing the results garnered by this strategy. While I currently feel relatively effective at analyzing the data from each individual assessment, I struggle to compile and synthesize longitudinal data for my students. For that, I have relied heavily on the standardized tests they take for the district, which are imperfect and not directly aligned to the standards in my course. Dellwo suggests measuring the mean percent change in scores at every stage, yielding the normalized gain or loss throughout the unit. One can then gauge the course’s overall effectiveness by the net gain or loss. To take this a step further, one could also disaggregate the data by class, student, or standard using data visualization software to determine whether the aggregate data hides significant outliers that need to be addressed.
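To make this concrete, here is a minimal Python sketch of a multi-stage gain analysis. It is my own illustration of the general idea rather than Dellwo's exact formulas: it computes a Hake-style normalized gain for each student between consecutive assessment stages and averages those gains per stage. All names and the sample scores are hypothetical.

```python
def normalized_gain(pre, post):
    """Hake-style normalized gain: the fraction of possible
    improvement (out of 100) that a student actually achieved."""
    if pre >= 100:
        return 0.0  # no room to improve
    return (post - pre) / (100 - pre)

def stage_changes(stages):
    """Mean normalized gain between each pair of consecutive stages.

    `stages` is a list of score lists (percentages, 0-100), one list
    per assessment stage, with students in the same order each time.
    """
    changes = []
    for before, after in zip(stages, stages[1:]):
        per_student = [normalized_gain(b, a) for b, a in zip(before, after)]
        changes.append(sum(per_student) / len(per_student))
    return changes

# Hypothetical unit: pre-assessment, mid-unit quiz, post-assessment
# for three students.
stages = [
    [40, 55, 30],   # pre-assessment
    [60, 70, 50],   # mid-unit quiz
    [85, 90, 75],   # post-assessment
]
print(stage_changes(stages))
```

From here, the same per-student gains could be grouped by class or standard to surface the outliers hidden in the aggregate numbers.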
I am interested in automating this style of data analysis and allowing it to influence my instructional practices, like reteaching and test preparation. One of my biggest criticisms after two years of teaching has been the misuse of the phrase “data-driven.” I believe that instructors suffer from data overload and spend more time compiling data than investigating the story that underlies the numbers and designing strategies informed by those narratives. This challenge is correlated with a lack of reflection, which is rooted in the reality that educators have more tasks on their plates than there are hours in the day. When something must go, I find that reflection is the easiest option to justify.
Therefore, in my own classroom, I strive to prioritize the commitment of “data-driven instruction.” My first step will be devising systems to collect and process data seamlessly based on Dellwo’s research. Once I believe these systems are refined, my goal is to scale them into team-wide processes to aid other math instructors’ data strategies. At that point, I believe I will be able to bridge the divide between educational data analysis and pedagogical development.
The Summative Assessment linked below was given to Pre-Calculus students after completing Unit 1, which covered the basic principles of trigonometry. I developed the assessment to give students multiple opportunities to demonstrate the depth of their understanding.
The test is 20 questions: five are vocabulary questions with a word bank, ten are multiple choice questions, and five are constructed response.
The vocabulary questions are DOK (Depth of Knowledge) level 1; the multiple choice questions are DOK level 2; and the constructed response questions are DOK level 3.
The point values weight the more challenging questions more heavily than the easier ones. Vocabulary comprises 10% of the assessment points; multiple choice comprises 50%; and constructed response comprises 40%.
A student submission for this exam is also linked below. Note that the version is different, so the numbers within each problem vary slightly from the blank assessment and key. My preferred grading platform is Gradescope. Within Gradescope, I can establish a rubric that is consistent across all students and grade blindly to avoid bias. Additionally, the graded exams with my comments are posted to students' accounts on the site, allowing them to read and respond to my comments instantly. The first two pages of the graded exam show the students' view, which includes the grading rubric and my individualized feedback.
The Formative Assessment linked below was a quiz administered online roughly halfway through Unit 4, which covered trigonometric identities. While quizzes are usually seen as summative assessments, in my class they serve as formative ones. This is because students may retake them until they achieve 100% on the assignment, and the data gathered from the results informs the timeline for the remainder of the unit. For example, more than 80% of students scored 3/4 or higher on this quiz. This told me that students had a strong enough grasp of the topic to take the Unit 4 test; they did not need a thorough reteaching plan before moving on to test review.
As this is a formative assessment, my method of delivering feedback is less meticulous than that of the summative assessment above. As an alternative approach that still ensures students have an opportunity to learn from their mistakes, I developed a system of quiz corrections through the site DeltaMath. When a student sees which questions they've missed and inputs that data into their DeltaMath account, DeltaMath will automatically generate an assignment that is individualized to the skills the student needs to practice most.
Download the reflection for reference.
Expert teachers know how to find the balance between routine and spontaneity. As I work toward achieving this homeostasis, I find myself relying on two main strategies: the gradual release of responsibility model, refined by Webb et al. (2019) and required by my school’s administration, and problem-based learning, as explored by Li et al. (2013). As the majority of my lessons follow the gradual release format, I have collected an assortment of strategies for each of its three components, “I Do, We Do, You Do,” that cater to my teaching strengths and help me achieve my school’s and my personal goals for my classroom.
During the “I Do” portion of the lesson, the primary method I use is the Think Aloud, in which I verbalize my inner thoughts as I work through a problem at the front of the room. This strategy is shown to assist students’ language comprehension even beyond the English Language Arts classroom (Duggirala, 2019). I’ve also modified this strategy by pre-recording myself performing a Think Aloud; while I show the recording in class, I am free to monitor students, pause and ask probing questions, and gather information from student behaviors. Sometimes I am able to find similar videos made by other teachers on YouTube, and good examples give me the opportunity to learn strategies from others that I may not have originally planned to use.
During the “We Do” portion of the lesson, I like to minimize the number of consecutive days in which I incorporate the same strategies. Because of this, I keep many low-tech and high-tech options in my rotation. Among low-tech strategies, I enjoy implementing Think-Pair-Write-Share, an augmented version of Think-Pair-Share that promotes literacy and reflection (Naim, 2020). Two high-tech options are IXL Jam and Desmos Polygraph. An IXL Jam presents all students with the same problem at the same time; the instructor selects the skill and the difficulty and can change those settings as students become more comfortable with the material. Desmos Polygraph is an online activity that pairs students in the class and promotes academic language by prompting students to ask and answer yes-or-no questions about the skill they are learning. Both options track and archive student data, which helps me pace the remaining “You Do” segment of the lesson and informs the lessons that immediately follow.
When we arrive at the “You Do” section, I like to give students the opportunity to self-diagnose their level of understanding with the new skill. Then, I provide a variety of assessment options that correlate with each level. Students who feel they need more assistance may continue working on an IXL Jam in a small group. Students who have a moderate understanding of the concept may complete an IXL assignment on their own or engage in another activity through Desmos or Google Slides. Students who are ready to extend their learning may be provided an activity, like a worksheet, in which they must create their own content on the new material or make decisions using the new mathematical principles.
Though I have a variety of tools with which I am comfortable teaching and assessing, my current goal is to use them to develop strong routines that allow students to reasonably predict the flow of each class yet wonder what each day might include.