Principles to Actions: Ensuring Mathematical Success for All

I am thrilled to announce the publication of NCTM’s Principles to Actions: Ensuring Mathematical Success for All, which defines and describes the principles and actions, including specific teaching practices, that are essential for a high-quality mathematics education for all students.

Principles to Actions outlines the productive practices all teachers should adopt to improve their students’ mathematics learning, and it describes practical steps that math specialists and coaches, administrators, policymakers, and parents can take to support a high-quality mathematics education. It also presents the actions stakeholders need to take to dramatically improve mathematics education.

This landmark publication builds on NCTM’s Principles and Standards for School Mathematics and the Council’s previous standards publications. It also supports implementation of the Common Core State Standards for Mathematics. Principles to Actions spells out Guiding Principles and actions that must be taken in each of the following: Teaching and Learning, Access and Equity, Curriculum, Tools and Technology, Assessment, and Professionalism.

The Council first defined a set of Principles that describe features of high-quality mathematics education in Principles and Standards for School Mathematics in 2000. Now in Principles to Actions it articulates and builds on an updated set of six Guiding Principles that reflect more than a decade of experience and new research evidence about excellent mathematics programs, as well as significant obstacles and unproductive beliefs that continue to compromise progress.

Principles to Actions is now on sale from the NCTM Catalog and through nctm@nctm.org. Members receive a discount on both the printed publication and ebook. For information on quantity discounts contact NCTM customer service at (800) 235-7566 or nctm@nctm.org.

Sincerely,
Linda M. Gojak
President

Qualitative and Quantitative Research Validity

So I spent a day or two really diving into attitudinal survey validity. Most of the research uses Cronbach’s alpha to determine reliability or validity, and many papers call for alphas of .7 or greater for an instrument to be considered reliable or valid. In my qualitative course, however, validity is established somewhat differently. I used saturation and member checking, as proposed by Creswell (2007), as the measures that most closely parallel statistical validity. After forming an analysis, I went back and sampled two individual students who were not included in the original analysis to see if what I had observed and analyzed held true. Questioning during the interview process was very similar; however, I had a better sense of what I was looking for as students described their experiences learning in the observed class. I was astonished at how well my observations and analyses matched during resampling. Though Creswell describes thick, rich description as a way to establish validity, I used it more as a tool to demonstrate the robustness of my analysis methodology. The word validity means something much different in qualitative research than in quantitative research (Creswell, 2007). Perhaps the word understanding would be more appropriate for what most qualitative researchers view as reliability or validity.
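
As a quick aside for anyone unfamiliar with the statistic, here is a minimal sketch of how Cronbach’s alpha is typically computed for a multi-item attitudinal survey. The cronbach_alpha function and the Likert-scale responses below are hypothetical illustrations of mine, not data or code from any of the studies I read:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each survey item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of each respondent's summed score
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students answering a 5-item Likert-scale (1-5) attitude survey.
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 5, 4],
    [3, 4, 3, 3, 3],
])

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # .7 is the commonly cited benchmark
```

Consistency of this kind is only a statement about how tightly the survey items hang together, which is part of why qualitative notions like member checking and saturation feel so different from it.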

Change of Angle… Slightly

After starting my literature review for my dissertation this summer, I’ve come to realize that the learning environment I’m especially interested in is tied directly to one author, with few empirical studies scrutinizing or confirming it. This learning environment, however, has been researched in mathematics education. It is also hard to find in many mathematics classes. I’ve also come to realize that just analyzing curriculum, instruction, and assessment can be hairy. These things can be aligned in any classroom environment and produce specific types of learning. The misalignment is not so much among curriculum, assessment, and instruction as it is in how these things are used to create certain types of learning that conflict with an instructor’s view of what is important mathematically. I also believe that aligning curriculum, assessment, and instruction is very easy to do; however, aligning them as learning tools that promote specific depths of knowledge is more difficult. Assessment is most often used only as an evaluative tool for students, yet it is really the key factor in promoting any particular type of learning or learning environment.

Evolving Methods or Emergent Design

I completed one interview and one observation two weeks ago with the teacher and his class. It was very interesting to watch his class and how he taught after interviewing him. I believe at times he was trying to back up his beliefs, and at other times he was caught doing what he dreaded. It was also interesting to watch the students. I wasn’t sure if some of their actions were responses to the instruction happening at the time or just random behaviors in the moment. Based on the observation I could argue both; however, there were definitely more actions like looking at a cell phone during certain parts of the lesson.

I plan to complete my interview/observation process today. This time, however, I am going to watch the class with a specific lens. I’m going to pay attention to how students behave when the instructor uses specific methods, while monitoring more closely when those behaviors take place. After the class, I plan to conduct a focus group with the class. I will ask questions about how learning takes place for them in the course while trying to draw out information about curriculum, assessment, and instruction.

Thus far, this process has really made me think about my role in a qualitative study. Observing the class and interviewing the teacher and students without being a part of the class is very different from actually being a participant observer. I can see how both approaches have their strengths and weaknesses.

Pilot Test

So I have finally decided to go with a qualitative narrative for my pilot study on how assessment and instruction interact to shape student attitudes. I found a class to sample this summer. I am planning to just watch the class in a few hours to get an idea of how it operates in terms of student and teacher interaction. I will later meet with the teacher to get an idea of how he believes assessment and instruction interact in his class. I will also see how he believes these two things shape student attitudes. At the end of the class, I plan to introduce myself and my research question. I’m hoping for volunteers who will be willing to share with me a roughly 30-minute story about this interaction and their experience in this and other courses. I suppose if I constrain them to this class, I will have a type of case study; however, I hate to keep students from sharing how their attitudes have been shaped. For this reason, I’m not going to restrict students from discussing other classes.

Trying on an Ethnography and Grounded Theory Lens

My focus is continually evolving as I read the statistical literature. At one point I want to dive into student understanding of particular concepts in statistics. At other times I want to look more at the general aspect of developing reasoning and the thought process of being a statistician. When I look through the ethnography lens, I can see how the second focus might fit into an ethnographic study very easily. If I were to attempt to use an ethnography to understand my classroom, there wouldn’t be a clear question. I would hope that by being involved in the class environment, some of the methodologies and interactions would demonstrate this understanding and relationship. Students would perhaps reflect on the opportunities to learn and how these efforts were either successful or not. The grounded theory approach would probably work very well for my first question: how does a student develop or understand a particular concept in statistics? By posing a question to a group of students and analyzing responses, I could understand students’ misconceptions and previous knowledge in order to determine how best to craft more questions on the topic I am addressing. I could resample the same group or a different group of students after reformulating my questions to see if the new questions provide more insight. It seems as if a final product of this work would be an understanding of how a student comes to know a particular concept in statistics, and perhaps even meaningful tasks to use to develop student understanding.

It seems that with each type of qualitative study, the large difference is what is acquired at the end. To determine the type of study to use, researchers should make clear what their intentions are for their research. This relates very much to generalizability in the quantitative literature and transferability in the qualitative: how would you like the information in your research to transfer for use by others?

Formative Assessment

Though evaluation and assessment are often used interchangeably, there is a subtle difference. Evaluations are typically viewed as summative, a final product that demonstrates student conception or understanding of a concept. Assessments can be used as a means to evaluate students, but assessments should do more. Assessments should move students forward in their understanding of a concept. Assessments geared toward increasing understanding are commonly referred to as formative assessments. Petit, Zawojewski, and Lobato (2010) describe formative assessment as “a way of thinking about gathering, interpreting, and taking action on evidence of learning by both the teachers and students as learning occurs” (p. 68).
According to Petit, Zawojewski, and Lobato (2010), formative assessments should improve instruction by clarifying and sharing learning intentions and criteria for success, providing feedback that moves learners forward, and activating students as instructional resources for one another. Making expectations and objectives clear can help students attempt to achieve satisfactory results as well as demonstrate how they can meet those expectations (Petit, Zawojewski, & Lobato, 2010). Teachers who provide feedback on student assessments encourage self-reflection; however, identifying specific actions a student can take to support improvement is an even stronger example of teacher feedback (Petit, Zawojewski, & Lobato, 2010). The mathematical practice standards within the Common Core State Standards for Mathematics (NGA & CCSSO, 2010) and the Alabama College and Career Readiness Standards (ASDE, 2010) require teachers to have students critique the reasoning of others, one of Petit, Zawojewski, and Lobato’s (2010) elements of effective formative assessment. Though these three elements provide an excellent starting point for a teacher to use formative assessments to guide instruction, students can be more active in this process.

Petit, Zawojewski, and Lobato (2010) clarify that formative assessments should activate students as the owners of their learning and engineer effective classroom discussions, questions, and learning tasks that elicit evidence of learning. Activating students as owners of their own learning provides an openness (NCTM, 1995) within assessment that is not offered in all mathematics classrooms. Students who complete self-assessment tasks may be asked to create an individual work plan, create problem-solving activities to explore, evaluate their own performance against their work plan or against jointly created criteria for performance, or identify specific problems they are having (Petit, Zawojewski, & Lobato, 2010). The use of assessment portfolios is a good example of this type of formative assessment (Garfield & Chance, 2000). Analyzing discourse within the classroom can provide strong evidence of what is valued there. Orchestrating productive mathematical discussions is an art that not only provides a means for students to build mathematical power, but also continually assesses student understanding in the present (Smith & Stein, 2011). Through monitoring, teachers can provide sequenced explanations that build on each student’s strengths and help build connections between mathematical concepts (Smith & Stein, 2011).
After incorporating formative assessment in the classroom, it is imperative to know what to look for, using the assessment triangle (Petit, Zawojewski, & Lobato, 2010). The assessment triangle’s vertices consist of observation, interpretation, and cognition (Petit, Zawojewski, & Lobato, 2010). The cognition vertex refers to the ways students understand, misrepresent, or misunderstand a particular concept, or use prior knowledge that influences this understanding (Petit, Zawojewski, & Lobato, 2010). The observation vertex refers to descriptions from research of what produces specific responses (Petit, Zawojewski, & Lobato, 2010). The interpretation vertex refers to the tools and methods used to reason from the evidence (Petit, Zawojewski, & Lobato, 2010). “To develop effective assessments that lead to sound inferences, each corner of the assessment triangle must connect to the other in a significant way regardless of the level of assessment” (Petit, Zawojewski, & Lobato, 2010, pp. 70–71).
Formative assessment is one of the most robust tools teachers can use to improve student learning within the classroom (Petit, Zawojewski, & Lobato, 2010). Though it may seem like more work for a teacher who has not incorporated these strategies in the past, students engaging in self-reflection and critique actually shift some of the burden away from teacher grading and toward teacher facilitation. Good instruction uses formative assessment in ways that leave no surprises when summative assessment takes place. It is imperative that teachers use these tools appropriately to improve student understanding within the classroom.


Alabama State Department of Education. (2010). Alabama course of study mathematics: Building mathematical foundations of college and career readiness. Montgomery, AL: Author.
Garfield, J., & Chance, B. (2000). Assessment in statistics education: Issues and challenges. Mathematical Thinking and Learning, 2, 99–125.
National Council of Teachers of Mathematics. (1995). Assessment standards for school mathematics. Reston, VA: Author.
National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards for Mathematics. Washington, DC: Authors.
Petit, M. M., Zawojewski, J. S., & Lobato, J. (2010). Formative assessment in secondary school mathematics classrooms. In J. Lobato (Ed.), Teaching and learning mathematics: Translating research for secondary school teachers (pp. 67–75). Reston, VA: NCTM.
Smith, M. S., & Stein, M. K. (2011). 5 practices for orchestrating productive mathematics discussions. Reston, VA: NCTM.