Carnegie Commons

A place to come together to exchange ideas about teaching and learning

Thursday, 21 August 2014

Revisiting the Purposes of Practical Measurement for Improvement: Learning from the BTEN Measurement System

Posted in What We Are Learning

“The failure of educational systems to integrate research evidence productively into practice impedes progress towards making schools and colleges more effective ….” So state the authors of Practical Measurement, a Carnegie Foundation paper describing tenets of measurement and research that support the improvement of practice. For the past three years, the participants of Building a Teaching Effectiveness Network (BTEN) have sought to build the type of integrated system of measurement described in Practical Measurement that is so often lacking in our educational systems—one that hews closely to the processes occurring at the ground level of schools, and that supports the day-to-day efforts of practitioners to improve their work.

The BTEN measurement system, designed to support the improvement of the processes by which new teachers receive feedback on their instructional practice from school leaders, consists of measures at varying grain sizes to support and spur improvement efforts. The measurement system spans the conceptual space from the micro-level process being improved, to the outcomes of that process, to the ultimate aim of the effort: teacher development and retention.

As a result of the experiences gained through using measures in BTEN, we can take next steps with the ideas originally discussed in Practical Measurement. One of these ideas concerns the purposes of measurement in the improvement context, which we have revisited, augmented, and reorganized, and now propose anew.

Measures in the improvement context are used for:

  • Learning About Your System
  • Priority Setting
  • Testing the Practical Theory of Improvement
  • Tailoring Interventions to Individual Participants’ Needs
  • Developing Social and Psychological Stances Necessary for Improvement

Learning About Your System

Enacting widespread and sustained improvements in a system requires knowledge of that system. Often, practitioners are steeped in their context – their department, their classroom, the role that they play – but they do not see the larger system within which their work is embedded. Knowledge of a system precedes improving that system, and practical measurement can support this type of learning. Furthermore, improvement teams can gain important knowledge about enhancing performance from successful members.

BTEN Example
BTEN improvement teams collected information about the frequency with which new teachers receive feedback. In one of the partnering districts, improvement teams discovered that 42 percent of new teachers responding to the survey indicated that they had not received any feedback from their principal, assistant principal, or mentor from September to October. This baseline knowledge provided a reference point from which the improvement team could gauge their success in increasing the frequency and regularity of feedback on instruction.
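For readers who want the mechanics, here is a minimal sketch of how a team might compute such a baseline from survey responses. The data and column names are hypothetical illustrations, not BTEN's actual instrument.

```python
# Hypothetical tally of a baseline measure: the share of new teachers
# reporting zero feedback episodes from any provider, September-October.
import pandas as pd

# Each row is one survey respondent; feedback_events is the number of
# feedback episodes that teacher reported (illustrative values only).
responses = pd.DataFrame({
    "teacher_id": [101, 102, 103, 104, 105, 106, 107],
    "feedback_events": [0, 2, 0, 1, 0, 3, 1],
})

no_feedback_rate = (responses["feedback_events"] == 0).mean()
print(f"Baseline: {no_feedback_rate:.0%} of respondents reported no feedback")
```

Re-running the same tally after each survey administration gives the team the reference point described above.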

Priority Setting

Tackling any complex problem requires improvement teams to narrow their focus, to have a place to commit their improvement efforts and resources. Measures in the improvement context can enable practitioners to choose a priority area. These may be the areas of greatest weakness, the "low-hanging fruit" for which improvements are likely to be quick and easy, or the areas of greatest collective interest among the improvement team members.

BTEN Example
When the improvement team at one BTEN school surveyed their new teachers about their feedback experience, they discovered that some teachers in the school felt that the feedback they received from different feedback providers was not consistent. The team decided to focus their improvement efforts on "coordination meetings," where feedback providers met to discuss the new teachers and align their support efforts. In the next survey administration, they saw an improvement in teachers' responses about the degree of feedback consistency.

Testing the Practical Theory of Improvement 

A measurement system supporting improvement work provides data against which improvement teams can test their theory of practice improvement. As practitioners make changes in their work processes to spur improved outcomes, they can use data from the measurement system to see if the changes are actually happening and if they are resulting in the outcomes they hoped to see.

BTEN Example
The BTEN improvement teams improved their feedback processes by making feedback more frequent, giving feedback providers a conversation protocol to guide the feedback conversation with the new teacher, and establishing coordination meetings among the feedback providers. They tracked these changes with tools that allowed them to see whether the changes were actually occurring, and they considered this information alongside teachers' survey responses to see whether the changes led to improved teacher perceptions of their feedback experiences.
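As a rough illustration of reading a process measure alongside an outcome measure across successive changes, consider the following sketch; all labels and numbers are invented for illustration, not BTEN data.

```python
# Pairing a process measure (feedback conversations per teacher per week)
# with an outcome measure (mean survey rating of the feedback experience,
# on a 1-5 scale) across successive changes. Values are illustrative.
cycles = [
    ("baseline",                        0.4, 2.1),
    ("change 1: weekly feedback",       0.9, 2.8),
    ("change 2: conversation protocol", 1.0, 3.4),
    ("change 3: coordination meetings", 1.1, 3.9),
]

for label, process, outcome in cycles:
    # If the process measure moves but the outcome does not, the theory
    # of improvement itself, not just its execution, needs revisiting.
    print(f"{label:31s} process={process:.1f}/wk  outcome={outcome:.1f}/5")
```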

Tailoring Interventions to Individual Participants’ Needs

As improvement teams work on improving a process, they are likely to find that the intended beneficiaries of the improvement differ in their starting points, their needs, and how they respond to the changes being tested. Certain individuals or subgroups may be at higher risk of failure than others, or may otherwise need a specific intervention to achieve the outcome sought. Improvement teams can use measures to identify these individuals or subgroups, ascertain their areas of struggle, and craft targeted interventions.

BTEN Example
At one of the BTEN districts, data from a survey of new teachers provided evidence that teachers entering the system through alternative certification routes were more likely to show signs of disengagement and burnout than teachers who entered through traditional routes. This prompted school leaders to consider how they might partner with the alternative certification providers to better support these teachers.

Developing Social and Psychological Stances Necessary for Improvement

When improvement teams collectively engage with data relevant to their improvement work, they can develop a psychological approach and a social dynamic that supports their efforts. These cultural and mindset shifts include the development of a shared language and understanding about the problem being addressed and the theory of improvement; increased will and engagement with the improvement work; and a sense of internal accountability, both personally and within a team, to enact changes.

BTEN Example
On this last point about a culture of internal accountability, one BTEN principal explained:
“[The BTEN data] gave us a reason to not make excuses about why we weren’t doing it or how other things got in the way. I guess it sort of prioritized the work for us so that we made a commitment to a certain timeline, we made a commitment to each other that we would stay on that timeline …. I think in that way it kept us accountable.”

As Carnegie moves forward in developing a robust methodology of measurement for improvement, we will no doubt be revisiting and revising this list.

Wednesday, 6 August 2014

Testing Stress Impact on Students

Posted in What We Are Learning

Editor’s Note: Jeremy Jamieson is an assistant professor of psychology at the University of Rochester where his primary research interests focus on emotion regulation and how stress impacts decisions, emotions, and performance. Aaron Altose is an assistant professor in mathematics at Cuyahoga Community College in Cleveland, OH. He is also a Quantway instructor within the Carnegie Foundation’s Community College Pathways. They are working as a team as part of one of Carnegie’s Alpha Labs.


“The greatest weapon against stress is our ability to choose one thought over another.”
— William James, philosopher and psychologist


Every math teacher in a developmental-level class knows about the challenges of test anxiety. It is understood that in an environment where success is primarily measured by achievement on a proctored assessment, promoting student success means devoting some effort to test preparation and other activities that reduce test anxiety. Carnegie's Quantway and Statway Pathways are designed and implemented in a way that focuses more on the students' learning than on the instructors' teaching, and they are increasingly based on what research has demonstrated to be effective learning structures and processes. So, naturally, embracing the challenge of test anxiety follows a similar approach: focus on what the students understand about their own anxiety as we build on the insights of research.

In an effort to promote research-based practice, Carnegie has developed Alpha Labs. Within each Alpha Lab, researchers partner with a community college faculty member to test interventions addressing the specific instructional routines, skill-development approaches, or mindsets necessary to promote success in the classroom. A growing body of evidence indicates that teaching people to reinterpret stress arousal as a potentially useful coping resource can help improve outcomes during stressful evaluative situations, such as interviewing for a job, speaking in public, or performing in an academic setting (Crum, Salovey, & Achor, 2013; Jamieson et al., 2010; 2013; Jamieson, Nock, & Mendes, 2012; 2013; Woods, 2014). Given this, one Carnegie Alpha Lab has tested arousal reappraisal routines to develop students' ability to cope with the stress and anxiety of testing situations. Arousal reappraisal instructs individuals that the physiological arousal experienced during stress is not harmful, but rather can be conceived of as a coping resource that aids performance. This perspective builds directly on reappraisal research from the emotion regulation literature (Gross, 1998; 2002).

Aaron Altose explains: "In my Quantway class, I've tried many approaches over the years to help students confront their negative feelings around taking exams. We talk about the proper amount of time to spend studying, finding the right environment, eating and sleeping well, and not cramming before a test. I've tried guiding students through progressive muscle relaxation before tests. I had a constantly evolving PowerPoint presentation made up of information and articles I found online about understanding the 'fight or flight' response and about high-pressure performers like Air Force pilots and emergency room doctors. But students sitting down to take a math test never seemed to take this information to heart. Negative interpretations of bodily signs of stress persisted, even though the material clearly demonstrated to students that they would have to overcome their performance anxiety to succeed."

There are methods specifically designed to help people improve their performance in stressful situations by re-interpreting the meaning of bodily signs of stress arousal, such as sweaty palms or racing hearts. Rather than seeking to reduce or eliminate stress or emotional intensity to improve outcomes, this new arousal reappraisal approach seeks to change "bad stress" into "good stress." The interventions the Alpha Lab developed and tested were designed for just that purpose.

When students participate in an Alpha Lab activity to test the efficacy of methods to reverse the negative effects of stress, they complete very specific inventories of their feelings and emotions immediately before they take an exam. The arousal reappraisal intervention involves having students read summaries of scientific articles highlighting the adaptive benefits of stress. After reading the materials, students respond to two multiple-choice questions that ensure they read the materials and encourage them to accept the information provided.

While the purpose of the experiment is not shared with students, students have said that the activity helped them better understand what they were feeling. “I know before a test, I just feel bad, but maybe what I really feel is determined,” one student said, after participating in the inventories as part of the experiment. Seeing students take a more empowered approach to understanding their stress, just as they take more control of their mathematics learning, has been powerful for faculty as well.

Although data collection is still ongoing, preliminary results are promising. Upon completion of the current project, Pathways faculty and researchers will further develop the reappraisal intervention for more widespread dissemination and testing.

By testing interventions like these, Alpha Lab faculty and researchers working together are able to help students overcome key hurdles such as stress and anxiety and maximize their success in the classroom.

To continue this discussion, Carnegie will be hosting a webinar with Jeremy Jamieson this fall. Join our mailing list to be the first to hear when registration opens. 

Tuesday, 29 July 2014

Designing a Collective Learning System

Posted in What We Are Learning

Editor’s Note: This blog is based on the author’s doctoral capstone done while she was in residence at the Carnegie Foundation. The full capstone is titled, “Walking the Talk, Teaching the Talk: Building a Collective Learning System at the Carnegie Foundation for the Advancement of Teaching” and was submitted to the Harvard Graduate School of Education.

Carnegie's work rests on the assumption that we need to increase the rate of learning in order to reach the high aspirations we have for the education system: to provide quality education to the millions of children across the country. In our Networked Improvement Communities (NICs), we aim to "learn our way into better systems." A key component of that vision is being able to build on the learning of others. We are all familiar with the lament of "not wanting to reinvent the wheel," yet if you have been around education long enough, you have probably seen efforts that seemingly "reinvent the flat tire." Despite living in a time of connectivity and open access to information, we still struggle with getting the right information to the right people at the right time. We are "information rich, but knowledge poor." At Carnegie we've been studying knowledge management in an effort to address this problem, both within our organization and in the NICs we support.

Knowledge management has a natural fit within NICs because they aspire to produce knowledge that can improve practice and expedite the spread and use of that knowledge. In his original conception of NICs, Doug Engelbart argued that the collective IQ, or "how well people can work together to solve important problems," depends on their ability to engage in "concurrent development, integration, and application of knowledge." His description prefigures how the American Productivity and Quality Center, a leading voice in the field, defines knowledge management: "a systematic effort to enable information and knowledge to grow, flow, and create value. The discipline is about creating and managing the process to get the right knowledge to the right people at the right time and help people share and act on information in order to improve organizational performance" (O'Dell & Hubbert, 2011, p. 2). Both of these definitions sounded promising, and over the course of the last year we began to explore the knowledge management needs for Carnegie and for our NICs. We also studied what has been learned from previous efforts in other organizations.

The first lesson that came through loud and clear was that many organizations have made the mistake of thinking of knowledge as a set of assets to be managed. This view leads to an over-reliance on tools like repositories and lessons-learned databases, on the theory that if we can get everything codified and ordered, the knowledge will travel. This has proven insufficient for many reasons. One of the most significant is that such an approach does little to account for how people learn and use knowledge. Here is where our own expertise as educators offered a particularly useful lens. We knew we needed to consider how adults learn, and in particular how they learn in the workplace, if we were to be successful in our efforts. We also know quite a bit about the human processes that structure the use of new knowledge. All these considerations needed to be accommodated in a truly effective knowledge management system.

Design Principles for a Collective Learning System

Rather than design a knowledge management system for Carnegie, I proposed that we need what I called a Collective Learning System (CLS). I define it as a set of interrelated social practices (routines, norms, roles, and processes) supported by technology tools that facilitate the learning necessary to achieve a group's mission. It provides a means to increase capacity in an organization or in a multi-organizational structure such as a NIC. The name and the definition show a disposition towards social practices and learning, which is more in line with the highly collaborative work of NICs. My current thinking about what Carnegie's Collective Learning System should include is outlined in the seven principles that follow:

1. Attend to practical knowledge and appropriate processes for learning it.
Different types of knowledge call for different types of learning experiences. Given its work, Carnegie’s knowledge needs tend more towards practical knowledge, or the know-how needed to enact improvement. Efforts to teach and learn this practical knowledge need to follow what is known about adult learning.

2. Provide guidance on the social structures needed.
It is not enough to say that people should share knowledge; a CLS should articulate the social structures needed to support learning. Social structures include, but are not limited to, protocols, norms, physical setting, facilitation, and documentation.

3. Attend to psychological safety and lower barriers to entry.
Efforts should be made to create a space wherein people feel safe to learn and fail. A CLS will include conversations with people across different levels of the organization and of unequal power. Social practices should allow the person with the relevant expertise or insight to express it regardless of their status. Designs should allow for people to enter the learning process in various ways since different people learn and express themselves in different ways.

4. Embed the learning processes in the work.
Whenever possible, learning should be embedded in the actual work of practitioners. This will retain the context that generated the knowledge and where it will need to be applied, maximizing the depth of understanding and utility of the learning.

5. Take a system perspective.
Processes should aim to focus attention on all relevant components of the system. And, in crafting social processes, input should be sought from all relevant parties in the system.

6. Integrate learning from across the organization.
Knowledge is being created across the boundaries of units inside the Foundation and outside it. A CLS needs to serve as an integrator across these boundaries. While the work will continue to occur across boundaries, and it would be ineffective to have everyone involved in all the work, collective learning requires integration in order to maximize learning.

7. Take advantage of technology to the extent appropriate.
Technology, especially that which facilitates real-time collaboration, should be utilized when appropriate. However, technology should not aim to eliminate the human learning experience.

Although this is an ambitious list, in the spirit of improvement science, we've started small by iterating on the design of specific processes like after-action reviews and failure analyses. As with all work at Carnegie, we will continue to learn our way into this problem.

Thursday, 24 July 2014

Creating a Classroom Culture for Student Success

Posted in What We Are Learning

When students walk into developmental math classes they are most likely carrying something weightier than their backpacks, something much more insidious. They bring with them negative mindsets that they can't do math or that they aren't "math people," reinforced by past math classes where they experienced failure. And many bring with them the threat of stereotypes, some math-based and others defined by gender or ethnicity.

In designing two alternative mathematics pathways for students who place into college developmental math classes, Carnegie has acknowledged this student baggage as one of the key drivers that must be addressed in order to fully support student success. It has embedded interventions into the instructional design of the two Pathways—Statway in statistics and Quantway in quantitative reasoning—to address these drivers.

At the annual Community College Pathways National Forum, Claude Steele, the leading expert on stereotype threat, and David Yeager, whose work on transforming student mindsets has been incorporated into the Pathways instructional system since the initial design, suggested approaches that in other settings had been shown to reverse the roles these threats play in negatively affecting the motivation and engagement of students, and thereby their educational outcomes and performance. As Steele explained, these influences are “powerful but not determinative.”

Steele provided an example of stereotype threat especially relevant to Carnegie's work. Female and male students who excelled at math at the University of Michigan were administered a half-hour section of the graduate math exam. The premise was that the gender stereotype that females weren't as good at math as males would suppress the performance of the female students, not allowing them to do the necessary cognitive work on the exam that they were clearly capable of doing. In this case, it held true. The female students significantly underperformed compared to the male students on the same test. Although both men and women were stressed merely by having to take a test, women experienced the additional pressure of the stereotype.

To mitigate the stereotype—"the preoccupying presence," as Steele puts it—students just as gifted as the first tested cohort were told beforehand that this particular test was one on which women always did well. Under these conditions, the female students' performance increased to match that of the men. Similar impacts have been observed for various racial/ethnic groups as well.

Steele suggests remedies for stereotype threat. These include changing the cues that educators send to students. Changing the language in a classroom can create relationships between students, advisors, and teachers that tell a student that there is no presumption that he/she lacks something needed to succeed. Instead, there is a presumption that the student can succeed.

And the idea that ability is malleable is a tremendous relief to kids, and a liberating one, Steele said. "It significantly reduces stereotype judgment."

Carnegie is introducing the idea of malleability in the Pathways. One exercise in the Pathways Starting Strong package has students read an article explaining that neuroscience shows the brain is like a muscle and that with enough effort they can grow their brain.

Through the Starting Strong package, students who come to the Pathways thinking that they aren't "math people," or that they don't belong because they aren't smart enough to succeed, are supported in developing a "growth mindset." In addition to learning from the article that intelligence is not fixed, students are given strategies to support persistence through the course, along with encouragement from the start and throughout the course that gives them the courage to use those strategies to succeed.

Yeager said that in a randomized controlled trial the introduction of this one article on the concept of brain growth has been shown to have a significant effect on student persistence and success. There is evidence from selected Pathways classrooms indicating that the effect has been replicated through the Starting Strong activities as well.

The Pathways—which combine rigorous materials, new and more engaging pedagogies, productive persistence interventions like this exercise, and the behavior and speech that support them—have produced amazing results. Students have tripled their success rates in half the time, and Carnegie has been able to maintain this level of student accomplishment even as the initiative has grown to include new colleges, new faculty, and many more students over the past three years.

Yeager offered some specific recommendations to those attending this year’s Forum and just beginning to teach the Pathways. He said to create a class culture that supports success, not one that implies expectations of failure. He said to provide praise after accomplishment (not disassociated from effort and accomplishment), encouragement often, and continuous feedback—in class, during office hours, through emails.

He said to continue to remind students that the brain is analogous to a muscle, that "the more you use it, the better it works." Or, "the more you practice, the smarter you become." When students seem to get discouraged, give them a boost—indeed, there are "booster" activities included in the pedagogy that Pathways faculty use. Use phrases like: "other students say that when you come to the difficult part where you have to struggle, it is a particularly helpful and productive part of the learning process" or "when you struggle, then you're growing." Yeager said that the really wonderful thing about what he discovered in his research is that the lowest achievers change the most and become some of our highest achievers.

Yeager concluded with a challenge: "Students have theories about their success," he said. "It is up to us to shift those theories" to more positive and productive ones.

Thursday, 17 July 2014

How to Spur Improvement Activity in Networks

Posted in What We Are Learning

Over the past five years, the Carnegie Foundation for the Advancement of Teaching has launched a set of three Networked Improvement Communities (NICs). We have played roles in the launch and support of two NICs in particular, the Building a Teaching Effectiveness Network and the Student Agency Improvement Community.

NICs are scientific learning communities distinguished by four essential characteristics. They are: (1) focused on a well-specified common aim; (2) guided by a deep understanding of the problem and the system that produces it, and a shared theory of how to improve it; (3) disciplined by the rigor of improvement research; and (4) coordinated to accelerate the development, testing, and refinement of interventions and their rapid diffusion out into the field, as well as their effective integration into varied educational contexts. These characteristics create conditions under which organizations can learn from their own practices and benefit from innovations from both within and outside of their organization to systematically address high-leverage problems.

Through the initiation and development of several NICs, Carnegie has gained some insight into what it takes to spur improvement activity in networks:

Start with a high-leverage problem of practice
A high-leverage problem of practice is an issue that, if addressed, can disrupt status quo practices in an organization and render improvements throughout the system. This is a compelling problem area that, if solved, will propel the organization toward achieving its core mission. Long-term leadership and stakeholder commitment to solving this problem is critical to the success of the NIC, and the determination to work on a recognized high-leverage problem (perhaps one the organization has struggled with for a while) can do much to evoke that commitment and will. Moreover, the process for selecting and messaging the NIC's high-leverage problem needs to be transparent and evidence-based.

Build on work already being done
With more rigorous standards, evolving policy demands, and tight budgets, school personnel are striving to realize increasingly ambitious objectives with limited resources. The desire to apply improvement science in networks often outstrips school personnel’s capacity to conduct such work. To introduce improvement work into a district, we would recommend beginning the improvement activity in an existing team or learning community. Infusing improvement methods into existing collaborative structures (e.g., partnerships, meeting structures, conceptual framings) adds capacity to work already being done as opposed to adding “one more thing” to the work of educators who are already operating at full capacity.

Assemble a diverse team
We have found it efficient to start improvement work in existing collaborative structures, but it is also true that solving high-leverage problems in schools will require a range of perspectives and levels of expertise. Given the interdependencies of processes in complex systems, the process of improvement will often uncover previously overlooked drivers of the problem. It is not uncommon for new team members to be added as the improvement work evolves. Organizations can be hierarchical, but in NICs, each member brings an essential perspective to solving the problem at hand. In fact, in improvement work, it is often the case that those workers closest to the "front line" are those with the best ideas about how to solve the problem. NICs can foster a spirit of co-development by demonstrating openness to feedback and rapidly integrating and testing member ideas in the network.

Provide access to improvement guidance
Improvement science offers a set of new frameworks and methods for approaching work. As with most newly acquired skills, users will struggle to integrate this approach into real-life contexts. It is imperative to ensure that just-in-time feedback and support are provided reliably in order to scaffold learning and help members see the value in the improvement work. Early wins will also help to build will for the work in the organization.

Balance in-person and virtual communication
Launching a network is best done in an in-person convening of network members and stakeholders. Convenings can galvanize enthusiasm around solving the high-leverage problem and build momentum for the work. However, that enthusiasm often wanes, and momentum flags, when members return to face the challenges of their daily routines. A collaborative online platform can foster continued communication to build upon the sense of community garnered at the first convening. It can also provide for the sharing and spread of what is learned through ongoing improvement efforts.

Certain conditions are obvious prerequisites for seeing improvement work gain traction in practice: building the improvement capabilities of professionals through training and ongoing coaching, for example, or creating infrastructural capacity by establishing a supporting Hub that provides support for improvement science, collaborative work, knowledge management, and so on. The items introduced here have emerged in our work as more particular issues that require attention if improvement work is to be pursued in a manner that is deep, widespread, and enduring.

Wednesday, 28 May 2014

Iowa Mentoring Program Targets Needs of Beginning Teachers

Posted in What We Are Learning

As Carnegie Senior Associate Susan Headden writes in her recent report “Beginners in the Classroom,” public education loses a lot of new teachers to attrition, upwards of 750,000 a year, and pays a heavy price in talent and treasure. They leave for many reasons, Headden reports, but at the top of the list are concerns related to lack of support, such as limited professional development, little helpful feedback on performance that supports improvement, and feeling isolated from colleagues.[1]

Mentoring programs for new teachers may help address these issues. Effective mentoring programs, research suggests, promote new teachers’ sense of professionalism and hence their satisfaction and retention. Such programs can improve teachers’ instructional abilities and thereby increase their students’ achievement.[2]

Of course, not all mentoring programs are created equal. Among the success stories is Iowa's Grant Wood Area Education Agency (AEA). In 2000, Iowa passed a law requiring that every new teacher have a mentor. Today's iteration of the program benefits from knowledge gleaned from early mistakes. In the beginning, mentor teachers were given a stipend, but no training and no release time from their own classroom duties to meet with mentees. The program had no oversight, the mentors were accountable for no outcomes, and no data were collected on implementation or results. Turnover among new teachers remained high.

Currently, however, mentors are released from their classrooms for three years; they are full-time mentors who stay with the same group of mentees for two years. Mentor selection is rigorous. Each applicant is interviewed multiple times, demonstrates the ability to create model lessons, provides assessments of student work, and writes essays to show evidence of his or her capacity for reflection, a necessary skill for mentor success. Perhaps the most impressive component of the program is the training provided to Iowa mentors through the New Teacher Center, a non-profit organization that helps train new teachers. Sessions are differentiated for both new and expert mentors, and there are options for administrators as well. During training, mentors complete assignments in conjunction with their beginning teachers as well as reflect on their own assignments.

Collectively, the Grant Wood AEA’s model includes what beginning teachers need in order to feel supported: instructional guidance, frequent and actionable feedback, and meaningful relationships within the school. Careful selection of mentors, as well as the ongoing training they receive, sets mentors up to succeed, giving them the instructional and reflective tools they need to meet their mentees wherever they are in their practice. Mentors must periodically submit evidence of their meetings with mentees, ensuring that new teachers are in fact getting individualized and ongoing feedback from their mentors to help improve their practice.[3] Providing full release time to mentors means that they can spend a significant amount of time with each mentee, forming meaningful relationships based on trust and support. This last piece is perhaps the most important, since these relationships help to tie new teachers to their schools, sustain them in the difficult work of beginning teaching, and keep high-performers in the profession.

Though it is too early to determine the long-term effects of the program, feedback from teachers, mentors, and principals has been overwhelmingly positive. Officials are collecting data on the implementation and impact of the program, including information on which skills mentors are helping their mentees develop. They have discovered that, early in the school year, new teachers' primary concerns are classroom management and instructional planning—valuable insight that can help schools target future professional development efforts. Data collected thus far show that all mentors are spending 60-90 minutes per week with each mentee. And, critically, beginning teacher attrition is low; of the 33 new teachers with whom mentors worked in the 2012-2013 school year, only two have left.[4] Time will tell if Grant Wood AEA's mentoring program has a lasting effect on teacher quality or teacher turnover, but based on initial results and feedback, it seems that it is providing beginning teachers with the support and sense of belonging they need in order to improve and to stay in the profession.


[1] Beginners in the Classroom, pg. 5.

[2] Richard Ingersoll and Michael Strong, "The Impact of Induction and Mentoring Programs for Beginning Teachers: A Critical Review of the Research," Review of Educational Research 81, no. 2 (2011): 201-233. Retrieved from: http://repository.upenn.edu/gse_pubs/127

[3] Grant Wood AEA, “Mentoring and Induction Program.” Accessed April 28, 2014. http://www.aea10.k12.ia.us/leadership/mentoracademy/

[4] Beginners in the Classroom, pg. 22.

Friday, 2 May 2014

Is a Networked Improvement Community Design-Based Implementation Research?

Posted in What We Are Learning

A new NSSE yearbook chapter, co-authored by Jon Dolle, Louis Gomez, Jenn Russell, and Tony Bryk, sheds light on Carnegie's approach to building professional communities as Networked Improvement Communities and its relationship to design-based implementation research (DBIR). This post summarizes and builds upon that chapter.

For the last five years, Networked Improvement Communities (or NICs) have been at the center of Carnegie’s work. Many observers have been uncertain how to categorize NICs within the field of education research. Design-based implementation research (DBIR), in particular, bears a family resemblance to a portion of the work done by NICs. But NICs are not a research approach, and their raison d’être is not theory building. Here is a brief exploration of similarities, differences, and the productive relationship that can exist between the two.

Similarities
The umbrella of DBIR covers research that generally adheres to four principles:

(1) "A focus on persistent problems of practice from multiple stakeholders' perspectives";

(2) "A commitment to iterative, collaborative design";

(3) "A concern with developing theory and knowledge related to both classroom learning and implementation through systematic inquiry";

(4) "A concern with developing capacity for sustaining change in systems."

When broadly interpreted, these principles characterize many of the activities in which NICs engage. Carnegie’s Community College Pathways NIC, for example, is organized around the instructional challenges of diverse community college faculty (Principle 1). Its improvement work is conducted through rapid Plan, Do, Study, Act (PDSA) cycles and supported by a variety of analytic approaches (Principles 2 and 3). As Pathways members and leadership test new change ideas and learn more effective implementation strategies, this knowledge gets represented in many different forms, including revised driver diagrams (our theory of change), updated change packages (a mechanism for sharing changes), as well as published reports and white papers (Principle 3). And all of the Pathways work is focused on capacity building with the goal of systems change (Principle 4).

Given these similarities, the temptation to classify NICs as a form of DBIR is understandable.

Differences
Carnegie's resistance to categorizing NICs as a research approach can be stated succinctly: a NIC is a professional community structured around the accomplishment of a shared improvement aim. It is not an approach to research, though NICs use research as an essential aspect of their work and, on occasion, engage in research themselves. Just as it would be odd to categorize a network of hospitals as an approach to clinical research, the DBIR label fits some NIC activities but it is not their reason for being. In both cases, networks use and sometimes engage in research, but they are not research networks. On its own, producing new and better knowledge is rarely sufficient to effect system-level improvement. NICs are a mechanism for making new knowledge a live resource within a system.

Beyond this fundamental difference in purpose, there is another reason to distinguish NICs from DBIR. As improvement-oriented social organizations, NICs prioritize practical “know how” over theoretical “knowledge that” something might improve a system. The only way to bridge the evidentiary gap between “knowledge that” and “know how” is to learn through the process of actively changing a system. NICs learn about practice by actively trying to improve it. All the elements of a NIC (its membership, its aim, its theory of action, its core capacities, etc.) are organized around enabling the kind of system learning necessary for effective and reliable improvement at scale.

We posit that there are at least four network capacities that can enable distributed improvement work:

  • A rapid analytics infrastructure is a core capacity of the hub that helps collect, manage, analyze, and share data across the network.
  • Common tools and routines that enable disciplined inquiry are critical to coordinating member activities across a dispersed professional network. They facilitate network learning and engagement that is essential to scaling improvement within an education system.
  • Innovation conduits are the way promising ideas inside or outside of the network are identified, tested, refined, and scaled.
  • A culture that embraces a collaborative science of improvement supports the development of professionals committed to collaborative inquiry around a shared problem.

A Productive Relationship
Because DBIR is a research approach, its primary knowledge products are familiar: new, empirically grounded theories and explanations of social phenomena. DBIR recommends developing these theories and explanations in close partnership with practitioners, as well as "developing the capacity of the entire system to implement, scale, and sustain innovations" (Fishman et al., p. 145). However, research typically doesn't develop capacity on its own. (If it did, academic journal subscriptions would likely exceed those of major newspapers and pop culture magazines!) Consequently, DBIR needs a coordinating entity with the capacity for intelligent integration of the knowledge that it produces into a system. NICs are one such coordinating mechanism.

The confusion over the relationship between NICs and DBIR arises because NICs do, in part, engage in inquiry that can fit under the umbrella of DBIR, and also because the different aims of this inquiry are easily confused. The knowledge that results from academic theory building may or may not develop capacity within a system. Academic theory often plays an important role in improvement efforts, especially as a resource for testing and innovation: it can help improvers understand problems of practice, guide the development of practical theories, and generate change ideas for testing. Research is conducted as a means of making progress towards an improvement aim, but the end of a NIC—what the community agrees to hold itself collectively accountable for—is the improvement of practice at scale. Theory building is a priority only to the extent that it advances this aim or our collective capacity to pursue such aims.

As the body of knowledge produced by DBIR grows, NICs are a natural mechanism for making these theories a vital resource for improvement within and across educational systems. NICs will also likely contribute to this body of knowledge, but only insofar as doing so advances shared improvement aims or enhances the collective capacity to improve.

###

To learn more about DBIR, check out the other chapters in the NSSE volume, as well as two excellent articles by Bill Penuel, Barry Fishman, and colleagues.

Monday, 14 April 2014

Building a High-Quality Feedback System that Supports Beginning Teachers

Posted in What We Are Learning

In 2011-2012, nearly a quarter of the teachers in the U.S. had five or fewer years of experience and nearly 7 percent were brand new to the profession.[1]  While novice teachers bring new skills, perspectives, and energy to their schools, they also tend to leave the profession at high rates, with nearly half leaving the classroom in their first five years.[2],[3]

At a recent event hosted by Carnegie’s Washington, D.C., office, this statistic was brought to life as three beginning teachers reflected on their futures in the classroom: one was committed to staying, one was committed to leaving, and the third was looking beyond the classroom to an administrative role where he hoped he might “have an even bigger impact” on students’ lives.

Invited to respond to “Beginners in the Classroom,” a report on the condition of beginning teachers by Carnegie Senior Associate Susan Headden, these teachers spoke candidly about their experiences as novice teachers. And though the contexts in which they entered the teaching profession differed—Lauren Phillips cut her teeth as a New York City Teaching Fellow, Rene Rodriguez as a Capital Teaching Resident in Washington, D.C., and Diana Chao as a university-trained teacher in Montgomery County, Md.—all three agreed that their first year might have been improved by more frequent and more actionable feedback from the instructional leaders at their schools.

Research suggests feedback might do more than simply improve teachers’ early experiences and performance in the classroom; it might help convince them to stay, too. In a 2012 study by TNTP, top-performing teachers who experienced supportive, critical feedback and recognition from school leadership stayed in their schools for up to six years longer than top-performers who did not receive such attention.[4]

Likewise, in a survey of 580 teachers in the Baltimore City Public School system, researchers from Carnegie’s Building a Teaching Effectiveness Network (BTEN) found that teachers who felt engaged in their schools and were made to feel confident about their classroom contributions were significantly more likely to stay at their schools. Among the 25 percent who felt least confident and least engaged in their school communities, fewer than half were likely to stay the following year.
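A minimal sketch of this kind of subgroup analysis appears below; the column names and values are hypothetical stand-ins, not the BTEN survey itself.

```python
# Comparing stated retention intentions for the least-engaged quartile
# of respondents against all respondents. Data are hypothetical.
import pandas as pd

survey = pd.DataFrame({
    "engagement": [4.2, 2.1, 3.8, 1.9, 3.0, 2.4, 4.5, 1.7],  # composite score
    "plans_to_stay": [1, 0, 1, 0, 1, 1, 1, 0],               # 1 = likely to stay
})

cutoff = survey["engagement"].quantile(0.25)           # bottom-quartile boundary
least_engaged = survey[survey["engagement"] <= cutoff]

print(f"Likely to stay, least-engaged quartile: {least_engaged['plans_to_stay'].mean():.0%}")
print(f"Likely to stay, all respondents:        {survey['plans_to_stay'].mean():.0%}")
```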

Despite growing evidence that high-quality feedback—feedback that builds trust and leads to improvements in teaching and learning—may be a crucial lever for increasing teaching quality and retention rates, providing such feedback has proven a significant challenge in America’s school systems. Even in districts that have made instructional improvement a priority, the feedback teachers receive is often infrequent, inactionable, and incoherent.


Components of a Prototypic Feedback Process

Carnegie addresses this challenge in its latest publication, Developing an Effective Feedback System, which aims to help districts rethink feedback not simply as a series of isolated conversations between principals and teachers, but rather as a complex system of many interconnected factors at the district, school, and classroom level—all of which shape the nature of feedback teachers receive.

Drawing on scholarly research and in-depth interviews with expert practitioners, the report provides a framework of key drivers—processes, norms, and structures—that should be in place at each level for a district to maintain a coherent, high-quality feedback system that can drive improvement in teaching quality and contribute to the retention of teachers who are successful.  A clear instructional framework, training and support for feedback providers, coherent and coordinated feedback, and a trusting culture committed to continuous learning are among the key drivers explored in greater depth.

To provide even greater direction for school-based educators working with new teachers, the paper also outlines components of a model feedback process, including concrete steps principals and coaches can take to coordinate and improve the interactions they have before, during, and after feedback conversations with novice teachers. These are conversations that, according to the panelists at Carnegie's recent event, tend to lack substance, if they occur at all. And they are conversations that, if done well, have the potential to improve new teachers' practice and, hopefully, keep them in the classroom for the long haul.


[1] NCES Schools and Staffing Survey, 2011-12.

[2] Matthew Ronfeldt, Susanna Loeb, and James Wyckoff, “How Teacher Turnover Harms Student Achievement,” American Educational Research Journal 50, no. 1 (2013): 4–36. Retrieved from: http://aer.sagepub.com/content/50/1/4.

[3] Richard Ingersoll and Lisa Merrill. Seven Trends: The Transformation of the Teaching Force. Consortium for Education Policy Research. (2013) Retrieved from: http://www.cpre.org/sites/default/files/workingpapers/1506_7trendsapril2014.pdf.

[4] TNTP. “The Irreplaceables” (2012).

Tuesday, 25 March 2014

How to Change Things When Change is Hard

Posted in What We Are Learning

Dan Heath, author of Switch: How to Change Things When Change is Hard, speaking at Carnegie's Summit on Improvement in Education, acknowledged to those working toward positive change in education that a new approach might be in order. He said that change is not about interventions, as we usually approach it now, but about the "long game" of changing direction through motivation.

Heath framed his talk around the compelling elephant-rider analogy—an explanation of the two (often at odds) sides of human nature—borrowed from University of Virginia psychologist Jonathan Haidt. The analogy suggests that everyone has two sides—a rider and an elephant. The rider represents the rational thinker, the analytical planner, the evidence-based decision-maker. The elephant, on the other hand, is an emotional player, full of energy, sympathy, and loyalty, who stays put, backs away, or rears up based on feelings and instincts. The elephant is often on automatic pilot. It is the part of the brain that tells us to go ahead and eat the ice cream after the rider has decided to put us on a diet. Although the rider holds the reins and appears to lead the elephant, the six-ton elephant can, at any time, overpower the rider, and the rider, though he may not know it, can't force the elephant to go anywhere unless he appeals to the elephant and motivates him in some sustainable way. "In order to change the elephant, we have to appeal to a felt need," Heath said. "Sparks come from emotion, not information."

Nowhere is the elephant-rider dilemma clearer than in education reform. Policymakers are classic riders, pointing straight ahead and asserting all the while that “this is the right way, the clearest best path … I’ve got this beast under control.” Researchers, too, often act as riders, captivated by their carefully collected data and certain that their objective findings will prove compelling. Meanwhile, educational systems—slow, strong, passionate elephants that they are—plod along, sometimes responding to the switch of the rider and other times arching their backs in resistance. These elephants may try out a few different trails, lumber up a few small hills, but can buck that rider off at any point. After all, they’ve been living on this land for years. Riders come and go.

Don’t ignore the elephant, Heath urges us. The rider can’t just rely on a carefully charted, smartest, best path. He also must appeal to the elephant’s motivations. Good teachers understand this—they don’t get their students to read Shakespeare by telling them that it’s part of a canon that they need to know to be an educated person and be “college- and career-ready.” They show them Shakespeare is about love and hate and all of the raw emotional experiences they’re having in their own lives. And, he said, Carnegie understands this, citing our Productive Persistence interventions that reinforce a student’s sense of belonging as necessary to move the needle on student success in developmental mathematics.

The elephant also needs a well-directed rider, one who can see clearly beyond the trees and steer through the fog to what Heath calls "bright spots." These spots are not the schools with 100 percent high achievers, but the places where small changes are making big differences. A good rider can lead an elephant to the bright spots since, as Heath explains, what looks like resistance is usually just a lack of clarity.

In the end, Heath leads us to a simple but important lesson for educational change: We need to get our riders and elephants in sync. That means finding smart, evidence-based paths to improve education, and finding those paths of least resistance. He again cited Carnegie's efforts in developmental mathematics, noting that instead of merely changing the course materials, Carnegie developed pathways that get students through a college-credit math course in one year, instead of the current multi-year path where students often drop out between quarters or semesters.

“For change to succeed,” Heath concluded, “there are three ingredients. We need paths shaped for clear and easy passage. We need riders who know where to go and can see the bright spots. And, perhaps above all, we need enthusiastic elephants.”

Monday, 17 March 2014

Performance Assessment for Teachers and of Teachers: Combining the Development of Teaching with Teacher Evaluation

Posted in What We Are Learning

Editor's Note: In a previous post, Lee Nordstrom warns that we should not conflate evaluation and improvement processes, but he also points out that systems of evaluation and improvement are not mutually exclusive. This post explores the conditions under which these two systems might be coherently integrated.

The national push to revamp systems of teacher evaluation has spurred a growing call to attend to teacher development, not just evaluation. But can school leaders effectively evaluate teachers and simultaneously support their growth? Are these goals too contradictory to combine, or can a single system support both efforts?

In a recent 90-Day Cycle[i] conducted at the Carnegie Foundation, we explored the question of whether teacher evaluation and teacher development efforts can and should be combined as aspects of a single system. From 20 expert scholars and practitioners in education and several key pieces of literature, we heard an emphatic "yes." These experts argued not only that it is possible to address both goals in a blended way, but that it is preferable for the two efforts to be woven together. One important caveat was that the larger school cultural context within which these processes occur matters greatly. If the school culture is focused on professional growth, the potential success of combining evaluation with improvement is much greater than if such a development-oriented culture does not exist.

Combining Formative and Summative: Reconciling Characteristics of Assessment Across Purposes

The scholarly conversation about whether and how formative and summative assessments can be combined has been ongoing for decades in the field of education. There is a widely shared understanding among experts that the differing aims of formative and summative assessment lend themselves to different characteristics of an assessment system. Some of the more prominently discussed characteristics include:

  • the grain-size of the data—formative: specific and actionable; summative: broad and global;
  • the frequency of assessments and feedback—formative: frequent; summative: infrequent;
  • the importance of reliability of data—formative: less critical because context-specificity is valuable; summative: important because valid global and uniform conclusions depend on high reliability;
  • the criteria by which to make judgments about students’ learning—formative: dependent on context and the individual’s own past performance; summative: criterion- or norm-referenced to enable uniform judgments across learners.

While these characteristics may appear contrary across the two purposes of assessment, some scholars assert that these differences are not irreconcilable. In "Systems of Coherence and Resonance: Assessment for Education and Assessment of Education," authors Paul LeMahieu and Elizabeth Reilly point out that some characteristics are necessary for a particular purpose, while others are common but not required. They give the example that frequent feedback is necessary for formative assessment, but summative assessments do not require infrequency—they can also be frequent. In the same vein, in "Assessment and Learning: Differences and Relationships between Formative and Summative Assessment," Wynne Harlen and Mary James assert that detailed, context-specific data are requirements for formative assessment, but that these data can be aggregated over time to produce a holistic perspective and more reliable data for summative purposes. Summative assessments do not require strictly general and non-specific evidence—even if they are often informed by such data.

What we must differentiate when formative and summative assessments are combined is the lens through which judgments are made. Formative assessments should depend on learners’ own past performance and the particular context of assessment, while summative assessments should be judged against external standards or norm-referenced criteria, so that uniform judgments are made across all learners. The critical point for this discussion is that with thoughtful operationalization, the evidence and the mode of data collection that serves formative purposes can also function for summative purposes with the aggregation of fine-grained and frequent data.  The differentiation comes when making inferences and determining next steps, which require different lenses, but do not necessitate entirely separate systems.
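To make the aggregation idea concrete, here is a minimal sketch under stated assumptions: the rubric dimensions, scores, and the summative criterion are hypothetical, not drawn from any of the cited studies.

```python
# Frequent, fine-grained observation scores serve the formative purpose;
# aggregating them over time yields a more reliable summative score.
from statistics import mean

observations = [  # five brief observations over a semester, 1-4 scale
    {"questioning": 2, "checks_for_understanding": 3},
    {"questioning": 3, "checks_for_understanding": 2},
    {"questioning": 3, "checks_for_understanding": 3},
    {"questioning": 2, "checks_for_understanding": 3},
    {"questioning": 3, "checks_for_understanding": 4},
]

# Formative lens: each observation stands alone, read against the
# teacher's own prior performance. Summative lens: aggregate across
# observations, then judge against an external criterion.
summative = mean(mean(obs.values()) for obs in observations)
print(f"Aggregate: {summative:.2f}; meets criterion (>= 2.5): {summative >= 2.5}")
```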

Practitioners Call for Combining Improvement and Evaluation Efforts

In addition to technical characteristics of assessment systems, there is a set of issues articulated by the individuals who experience and utilize processes of assessment and feedback. In "Seeking Balance Between Assessment and Support," a study of 83 teachers in six high-poverty urban schools, Stefanie Reinhorn found that most teachers said they want to be evaluated and that the evaluation should be connected to support in the same process. These teachers explained that the combination of evaluation and support led to a professionalization of their work, holding all teachers to clear and high standards. Other experts have also found that teachers prefer to be evaluated by someone who knows their practice well and who has seen their growth over time, rather than by an evaluator who visits their classroom infrequently.

On the other side of the feedback relationship, feedback providers also described a preference for combining support and evaluation. These experts explained that teachers are more likely to take feedback seriously and to make changes in their practice when the feedback is connected to evaluation. This is especially the case when the feedback includes critiques of the teacher’s current practice. Brian Yusko and Sharon Feiman-Nemser make this point in their study of two induction programs, “Embracing Contraries,” describing how the feedback from Consulting Teachers (CTs) in Cincinnati had “teeth,” since there were consequences if teachers did not act on the CTs’ feedback.

Experts also discussed some unintended negative consequences of a system where evaluation and development are separated. In such a system, teachers are left to their own devices to “connect the dots” between the multiple sources of feedback. Especially for early career teachers, this may prove to be challenging, leaving teachers feeling overwhelmed or confused.  When coaches and evaluators are not able to align their feedback for teachers, they are also prevented from combining and coordinating their strengths. In a system with a firewall, feedback providers with specific expertise cannot easily enhance the work of their colleagues who lack this expertise through a team-based approach to providing feedback.

Building Trust in the Presence of Evaluation

Trust between teachers and feedback providers is essential for transparency of practice, communication, and the uptake of recommendations that can lead to the improvement of teaching. A reason often given for separating development and evaluation efforts is that teachers will feel more comfortable sharing their practice with someone who is not also responsible for evaluating them. The experts we consulted agreed with the importance of trust to promote transparency and growth, but they argued that whether teachers trust their feedback providers does not depend on whether the provider also evaluates them. Instead, they explained that trust depends on whether teachers see the feedback providers as effective aides to their professional growth who are genuinely committed to supporting them. Yusko and Feiman-Nemser found this to be the case for CTs in Cincinnati, who both evaluate and support teachers' development. CTs reported that their relationships with early career teachers usually developed trust over time, even though they evaluate the teachers.

Next Steps

The experts whom we consulted laid out a strong set of arguments that it is possible, and even preferable, for efforts of teaching development to be combined with teacher evaluation. There is research that supports these assertions. But this says little of how school leaders should combine these efforts in their day-to-day practice. CTs in Peer Assistance and Review (PAR) programs offer one powerful example, and we should leverage what we can learn from their work. However, this is one model, and there is also a need to document and explore other examples in other contexts that can serve as practical guidance for school leaders. Collecting and condensing the wisdom from the field about how, concretely, to combine efforts of evaluation with efforts of teaching improvement should be a next step in this line of inquiry. Then, taking an improvement science approach, school leaders interested in moving towards an effective model of combining evaluation with development can test these practices in their contexts to ultimately serve the goal of improved teaching and learning in their schools.


[i] 90-Day Cycles are a disciplined and structured form of inquiry adapted from the work of the Institute for Healthcare Improvement (IHI).  90-Day Cycles aim to:

  • prototype an innovation, broadly defined to include knowledge frameworks, tools, processes, etc.;
  • leverage and integrate knowledge from scholars and practitioners;
  • leverage knowledge of those within and outside of the field associated with the topic; and
  • include initial “testing” of a prototype.