Friday, July 24, 2015

Chapter 6- Coach’s Guide to Facilitation Protocols and Activities

As we near the end of the book (and, unfortunately, summer), we come to an often overlooked, or at least underestimated, part of the PLC process: training and supporting the leader of the team.  Most PLCs are led by a grade-level chairperson or a content department chair.  Generally these people have earned such a title because they are great teachers and have probably also shown the ability to lead around the campus.  However, leading a PLC can be a challenging process for anyone, and some of the protocols and activities outlined in Chapter 6 can go a long way in helping a PLC leader prepare.

While I will not go through the specific activities outlined in this chapter (you can read them yourself), I can’t emphasize enough how important it is for the leaders of the building to set the tone for professional learning on the campus.  The question Venables poses on pg. 111 is highly valuable and should be asked over and over again:  What does this activity have to do with being part of an authentic PLC? While this question is used as part of the Traffic Jam protocol, it should be used as the basis for any agenda for a PLC.

Another important component of successful PLCs is setting up the expectations.  In Chapter 2 we discussed norms and the need to take the time to discuss and create them as a team. In our own curriculum with students, it has become commonplace to take time within the “First 30 Days” to set up rituals and routines in the classroom with our students.  We emphasize building relationships between students and teachers, and we should do the same with our PLCs.  The first few weeks can and should be used to understand the purpose of the meetings, agree upon agendas and expectations, and even practice some of the proposed protocols or activities.  While we may think we don’t have time to “set up” our PLCs, I would argue that we don’t have time not to.

Although the entire PLC is responsible for student learning and the overall culture of the PLC, everyone will look to the leader to set the tone. When PLC leaders ask difficult questions that challenge the status quo, they are pushing the entire team to get better.  Using protocols can help everyone involved, especially the leader, become more comfortable challenging each other and improving over time.  Richard Elmore, co-author of Instructional Rounds, shared that “schools that show the greatest improvement generally do so under their own devices.”  This takes leadership from teachers, innovation, and a willingness to challenge each other.  When your PLC has acquired these characteristics on a consistent basis, you will have arrived as an “Authentic PLC”.

Reflective Questions:
·       What has our campus done for PLC leaders to prepare them to move their team’s work forward?
·       What protocol in the chapter, or elsewhere, are you looking forward to using in your PLC this year?



Thursday, July 16, 2015

Chapter 5: Reviewing and Responding to Data


 “In God We Trust, All Others Bring Data”

Throughout the summer, as we have read Venables’s book, The Practice of Authentic PLCs, a constant has been the reminder of the three purposes of a Professional Learning Community. Very simply, PLCs should do the following:  look at student and teacher work; design quality common formative assessments; and review and respond to data.  This week’s chapter discusses the last of the three, and perhaps the one that is the most misused.

There are so many versions, variables, and ways of looking at data that one of the biggest challenges individual educators, much less PLCs, have is determining what data to use and how to use it.  On page 92, Venables uses a quote from James Popham’s book, Transformative Assessment, to sum up the use of data about as well as it can be:  “Formative assessment is a planned process in which teachers or students use assessment-based evidence to adjust what they’re currently doing.”  Looking at data in a PLC must be a planned process (intentional).  What common assignment are we going to look at? Why did we pick this particular assignment (high leverage? readiness standard?)? And what learning criteria/standards do we expect of the students?  If the PLC answers these types of questions (see Chapter 4), the data they collect will be much more valuable and can therefore be used to guide learning in the immediate future.  If, on the other hand, they bring in random pieces of student work that are not designed around the most important learning standards, is the data (or the time used gathering it) worth using to assess how a group of students is progressing?   Part of this process may be the need within our PLC to develop common “Assessment Literacy”.  Does everyone in the group know the purpose of the assessment, what they are looking for, and how it is aligned to larger learning standards? If they do not, it can cause misalignment, inaccurate data, or both.

Besides the planning process of looking at data, the value in collecting data comes only from using it correctly. In NISD, our grading policy categorizes assignments/assessments into two categories:  formative and summative.  The concept seems sound: formative grades are used to help monitor and prepare students for the larger summative assessments that culminate a unit of study.  However, the reality is that the differentiation between formative and summative has more to do with how the student and teacher use the information than with what goes in the gradebook.

When your PLC looks at data, they should be very intentional about what data they want to look at to assess their own progress as well as that of individual students.  You should also confront the brutal facts about which pieces of data truly affect your teaching and learning.  The chart below has been modified from the one on pg. 95 to fit the NISD terms for assessments.  It is an excellent visual to show the types of assessments that can, and should be, the most impactful on instruction.


It is my hope that our PLCs during this coming school year can be curious learners when it comes to looking at student data. Whether the initial results are good or bad, if we take an “inquiry” approach to looking at data and take the personal side out of it, we are better able to find the trends and answers we are seeking.  And, if we can connect all three “purposes” of a PLC together, we can do the following, outlined on page 103:

Connecting Learning Gaps to Instructional Gaps
“All too often, teachers use data to discover where their students are weak or to identify skills and concepts their students have not mastered, and then they stop there. In these instances, teachers are seeing only half of the issue. Unless and until teachers link these student weaknesses to teacher practice, that is, to instructional weaknesses, they cannot move forward in fixing the problem.”

This short video from the Data Wise project at Harvard University does a great job of outlining the connection of data to instruction and the effect of intentionally collaborating on data in a PLC.

Harvard University: ACE Habits of Mind (Action, Collaboration, Evidence)

Reflective Questions:
  • What evidence do you collect (more data) that shows a response to data improves student achievement? How do you know?
  • What is the hardest part of looking at data with a PLC?


Friday, July 10, 2015

Chapter 4- Designing Quality Common Formative Assessments

"Great team members hold each other accountable to the high standards and excellence their culture expects and demands." 
-Jon Gordon
Of all the things that make teaching both challenging and worthwhile, assessing students’ learning may be the most difficult.  The responsibility of monitoring student progress is far from an exact science, but it can be made easier through collaboration and calibration with other educators.  One of the main areas of focus for our PLCs should be designing quality common formative assessments.  It sounds simple enough, but as Venables points out, creating quality formative assessments is not easy.  This is why it is so important that teachers work together to determine what they want students to know and how they will assess whether they do.
First, we must be clear about the difference between formative and summative assessment. In NISD, we have a collection of Curriculum Based Assessments (CBAs) that are summative in nature. They are given after a unit of learning and, generally speaking, should give us information about which students did or did not learn the key points within the curriculum.  STAAR tests, EOCs, and AP exams should all be considered forms of summative assessment as well.  They come near the end of the year and are “supposed” to inform us about what students have learned, but they do very little to guide instruction or help students improve.  To truly know how students are progressing, teachers must use various forms of formative assessment (daily, weekly, etc.) to frequently monitor how and what students are learning so they can tailor their instruction to meet both class and individual needs.  Great teachers do this through experience or as second nature, but the power of sharing connected knowledge lies in developing these types of assessments through collaboration in PLCs.

In Chapter 4, Venables spends a lot of time addressing the sometimes negative perception of “teaching to the test”.  As he mentions, and I agree, if it is a well-constructed assessment, there should be no problem with teachers instructing to the level of the test.  However, if the assessment or task a student is given is at a low level, it is likely the instruction will match that as well.  This is why it is essential that teachers work together to create the best formative assessments possible, which also allows for consistency and calibration within the school.  The word “common” should not be underestimated. Several campuses use protocols or systems such as "State of the Class" each week that are based on a common assignment. Activities such as this allow for identification of needs, both for the whole class and for individuals. If teachers work together to determine the learning target for the lesson and unpack the standard to decide the level of rigor at which that standard will be assessed, then the instruction is much more likely to be on point.  It also gives valuable information when using the protocols to assess student work (Chapter 3) and student/class data (Chapter 5).  Pages 64-66 in the book do a great job of explaining some ways in which PLCs can begin to identify standards to assess.  Another excellent resource for this work is Learning Targets, because it forces teachers to determine what and how a student will meet the learning criteria.  There are numerous protocols and ways to break down standards; the key is to use the PLC to work together so that all members have the same understanding before beginning instruction.
If you are looking for some great PLC resources, I suggest these books.  All have protocols and ideas to help jumpstart even the most experienced of PLC groups!


Have your students ever done really well on all the formatives leading up to a summative and then “bombed” the test?  The most common reason for that is misalignment between the two.  In an era in which we are trying desperately to have kids “think” and to make assessments more “authentic”, it is imperative that what they experience daily is rigorous and standards-based.  Pages 67-71 do an excellent job of explaining the rationale for standards-based formative assessment, including a chart on page 71 highlighting some of the differences.  However, to produce the most authentic forms of assessment (and also the most time-consuming), Venables offers a section on “Alternative Forms of Common Assessment” starting on page 72 that should prove both challenging and essential in an NISD classroom.  PLCs should be discussing ways students show their learning beyond multiple-choice tests, but in doing so, they must keep several key components in mind:
·       Alternative forms of assessment should be rigorous and content-rich
·       Alternative forms of assessment should align to the ELO’s (not merely assess other related skills and concepts).
·       Alternative forms of assessments should be evaluated with a standards-based rubric. (pg. 72)
For the purposes of this blog and space, there are too many implications in the statements above to address here, but I hope that all readers will take time to reflect on the types of projects and rubrics in use by their PLCs or in their own classrooms.  The author points out several common misconceptions in their design that often lead to misalignment or improper assessment of learning (read: the project becomes a waste of time).    The essential question to keep in mind:  Where in what the student did is there evidence of learning?
Finally, there are two other areas addressed by Venables in Chapter 4:  grading and intervention.  While grading is a necessary component of our daily jobs, the key takeaway for me is the need for calibration among the PLC members.  We must design quality formatives, and then we must grade them consistently in order to make them valid and informative.  The intervention component of common formative assessments is the lifeblood of why we would do formatives at all. If we do not do anything to inform our instruction or assist students in need, the purpose of the formative is lost.  We will explore this subject in greater detail next week as we look at how data can be used in a PLC.

Reflective questions:
- How does your PLC respond to "teaching to the test" comments?
- How often do my students experience a common formative assessment created by my PLC? And what do we do to ensure it is a high-quality assessment?
- What are the pros and cons of using alternative formative assessments on a regular basis?




Thursday, July 2, 2015

Chapter 3- Looking at Student and Teacher Work


If you want to assess how effective your PLCs have become, one of the most valuable places to start is the level of student work and how it has improved over time.  This is one of the most used activities in a PLC, but it is often used inefficiently in terms of formatively assessing practice and making improvements.  The reasons it can be ineffective are simple, and two major factors contribute greatly to it: the first is competence and the second is trust.  Our author, Daniel Venables, discusses building trust through norms and protocols in Chapter 2, but extends the discussion through several examples of protocols that could be used to look at student/teacher work during a PLC.  On page 46, the author describes feedback as “the lifeblood of nearly every aspect of PLC work, most notably, the lifeblood of looking at student and teacher work.”  However, the word of caution remains that the quality of the feedback depends on the willingness of the group to give and receive honest feedback regarding the quality of the work.  At the very bottom of pg. 46, Venables offers reasoning for why teachers do not give critical feedback, and quite frankly, it stings a little bit.  It is because they are often not thinking deeply enough about the work; they are going through a series of compliance steps to satisfy administrative requirements, and they are not truly invested in the process.  If that is the case, we have not done a sufficient job of establishing a learning culture in the PLC.   The protocols will help, but teachers must have the trust and the willingness to make them thrive.  The second reason, competence, is not specifically addressed by the author in Chapter 3, but it is one that must be addressed to reach the end goal of increased teacher and student performance.
If the participants in the PLC, especially the facilitator, are not competent in their knowledge of both curriculum and instruction, the protocols and the activities become well-intended exercises without the desired results.  While it is true that the PLC in itself is embedded professional learning, when examining both teacher and student work there must be a standard of excellence toward which the group is striving.  If that is not the case, it makes it difficult for critical feedback to occur. One of the first things a facilitator of a PLC must assess is whether the individuals in the group are “willing and able”, “willing but unable”, or, in the worst-case scenario, “able but unwilling”.
Besides the protocols for looking at student work, Chapter 3 also offers several examples of looking at “teacher work”.  Figure 3.10 on pg. 58 is a great list of the various forms of teacher work that can be completed during a PLC.  While it is hoped that many of the activities listed would be done not in isolation but rather as a group, the protocols listed in the book do call for some individual presentations and accountability.  Too often we sit and plan lessons, assessments, rubrics, etc. together but never take the time to put them through any sort of “quality control” check to make sure they are meeting the objectives we want them to.  In our District, we have used protocols such as “pre-lesson shares” or the “Targeted Planning Process”; however, if those are “events” rather than embedded as part of the culture, or if they don’t have the trust and feedback noted above, they will not push the limits of new learning and progress.
The final piece on teacher learning that Venables discusses in Chapter 3 is peer observation.  Perhaps nothing has the potential to be more beneficial to teachers than respected peers observing them teach and offering honest feedback on the implementation of their planning.  There are so many models of this, including the use of video, that are popular right now that it is hard to imagine a teacher in NISD who hasn’t participated in peer observation in some form or fashion.  Let’s take two of the more established protocols: Instructional Rounds and Focused Walkthroughs.  Both of these provide time for structured observation; however, they come with very different purposes.  Instructional Rounds are less about individual teachers and more about determining trends of instruction around the campus.  Our focused walks have several specific protocols that determine the type of feedback the individual teacher is to be given.  Both of these protocols can be very helpful, IF they are used on a regular basis and not every once in a while, and IF all the participants understand the purpose and are willing to give critical feedback.  Again, the challenge lies in the instructional culture of the building.  Other effective examples of peer observation are when the receiving teacher identifies and asks for observers to look for specific areas in which they hope to improve or focus.  This allows the teacher to “own” the learning and can be more beneficial to individual growth.  The worst thing we can do is have teachers “go watch” a great teacher and expect results!  They must observe with a purpose.
Chapter 3 is full of ideas (some old and some new) for ways to improve our PLC culture and the craft of improvement.  However, never are the skill of the facilitator and the trust of the PLC put more to the challenge than when looking at student and teacher work.  Remember the lessons from the first two chapters…it won’t happen overnight, and it won’t happen without intentional planning.
Reflective Questions:
1.     Picture the PLC in which you spend the most time: are the participants both “willing and able” to do the work necessary?  If the answer is no to either “willing” or “able”, what steps will be necessary to build capacity for the team?
2.     When you “plan” as a team in your PLC, could you or would you use a protocol that allows for some sort of “quality control” before the lesson is ever presented to students?

3.     When participating in “peer observation” what protocols do you feel are most beneficial and why?