I have long worked in the field of e-portfolios. I recently found a CD-ROM with the first portfolio assessment system I created for a district. It used HyperStudio, and we were building an assessment system for Vo-Ag. I then went on to work with the Connecticut Technical High Schools and tried to use Mahara to develop their portfolio system.

My thinking has shifted. I believe in open education, and if we want to assess student abilities for future learning, we need to look beyond portfolios. Now we need to think about presence assessment systems rather than portfolio assessment systems, and we need to start teaching students to brand their online identities and measure the signals they share.

I have been thinking about exactly how we could encourage presence in open education. It is not about one spot or one digital hub. Open learning crosses many networks. You might share a great update on Facebook, write a post on your blog, have an amazing Twitter exchange, or share wonderful photos on Google+ and Flickr.

How could a student, in a class that values open ed, curate this content? Enter Storify.


We have been using Storify to curate the #walkmyworld project, and I realized we can use the tool to let students develop presence assessment systems. Imagine having different time points where students have to tell their story. Imagine having them add artifacts and reflect on the significance those artifacts played in their own learning.

Then students could share and connect to each other regardless of their preferred network for open learning.

Just an idea but one I want to let percolate.



This past week I joined all of you in celebrating Open Education Week. I joined Teachers Talking Teachers, participated in the Mozilla Webmaker challenges, joined the Teacher Badges Alliance, and engaged with many of you on social media.

I am left with one overarching theme. You cannot have Open Ed without community. You cannot have open web standards without community. You cannot have Open Educational Resources without community.

In many ways you cannot have education without community.

Community of Writers

CC 3.0 UX Designs as Communities of Practice. murdocke23. flickr. https://www.flickr.com/photos/murdocke/7356625068

I first celebrated Open Ed by lurking, possibly even trolling, during the ConnectedLearning.tv event on #DS106. For those who do not know, #DS106 is an open-format class on digital storytelling. It exists mainly because a community emerged around a digital hub and then spread like weeds through different social networks.

I stress the role of community to my teacher candidates when we discuss the teaching of writing. I explain that the best writing spaces I have seen have a shared vision, experts and novices, and recognized practices that support developing writers.

The same holds true for open learning and Open Educational Resources. One of the greatest writing projects I contributed to this year (and only on the periphery) was the development of Mozilla's Web Literacy standards. The initiative, led by Doug Belshaw, not only epitomizes how open ed works but also produced some of the best thinking on preparing students with online research and media skills.

Community Necessary for Assessment

CC 2.0 Badges and Assessment. DML Competition. Flickr. https://www.flickr.com/photos/dmlcomp/4980762084/sizes/m/

I realized, amid all the talk of assessment and badging during the Teachers Badges Network Hangout on Air, that community needs to come before competencies. Christina Cantrill, of the National Writing Project, stressed this point over and over again.

Badging will never be about rigor. Badging is about relevance. There has to be a community around a specific credentialing system in order for badging to succeed. This community would then help to establish skill trees. They might decide what constitutes the criteria and evidence for a badge.

Plus, maybe most importantly, a community would recognize the value of a badge once it has been awarded.

Community Necessary in the Workforce


Those who understand the role of community will have marketable skills. Gina Trapani, an open source advocate and co-founder of ThinkUp, recently commented on the importance of community on the latest episode of TWiG. She stated that, in looking for potential employees and mentees, being a developer goes beyond coding.

Gina noted that, most importantly, ThinkUp looks for those who understand how different communities work. Only then can they provide insights to users. She also noted that developers need to break from the mindset that you need to know code. Instead, the act of developing takes quality writers, designers, and thinkers. As someone who quit coding in 1989, when I was in 6th grade, this idea resonated with me. We need emergent leaders and thinkers who work collaboratively if we want students to be college and career ready.

So What is Open Ed?

I wonder if I am any closer to answering this question than I was a week ago. I have learned, while playing in so many new spaces this week, that Open Ed, if not a set of principles, is a shared mindset that cuts across many different communities.

I want my students to learn in the open. I want to model what it means to think, fail, and reflect in the open. I want to try and use OER in my teaching. Maybe, just maybe, I am contributing a little back to the Open Ed community.



Many districts here in Connecticut have taken on the task of realigning district-wide writing assessments to both the Common Core State Standards and the rubrics published by the Smarter Balanced Assessment Consortium.

(Please note earlier versions of the post did not correctly refer to SBAC. Images still list it as Smarter Balance and not the Smarter Balanced Assessment Consortium).

This got me thinking, as a teacher of writing teachers: how would I encourage the use of assessment not only to align with political tailwinds but to help ensure students can write at a college level when graduating high school?

I see a few options for schools:

  1. A school-leader-developed rubric
  2. A district-developed rubric

Option One: School Leader Developed Rubric

  • A school leader with an SLO focused on argumentative writing would be volunteered to develop a rubric
  • Examine the Smarter Balanced rubric and the CCSS writing appendix (descriptions of how pieces were scored)
  • Choose criteria
  • Develop rubrics
  • Test, revise, etc.
Option Two: Developing and Testing Rubrics

  • Have schools that have already developed rubrics test theirs. Offer that same version to other buildings to try.
  • Develop and share a rubric based on the CCSS and the Smarter Balanced rubric
  • Administer a pilot assessment
  • Score and develop anchor packets that can be used to calibrate raters.
Either option involves a ton of work. What needs the greatest focus, though, is how the rubric translates into improving learning in the classroom. My basic tenets when assessing writing:
  • Evidence for scores cannot be inferred.
  • Teachers need to know that they do not need to focus on every criterion at once.
  • Teachers (or the district) should be developing a library of mini-lessons.
  • The teaching of argumentative writing is closely linked to text-based analysis of mentor texts.
    • Texts should be annotated using codes aligned to the criteria in the rubrics.
    • Text annotation needs to be taught and modeled.

So I decided to share my attempt at creating an argumentative writing rubric that could be used at the high school level:

Click Here to Open Rubric


How does it work? Well, I attempted to align the rubric to both the Smarter Balanced argumentative writing rubric and the Common Core anchor standards:

Argumentative Writing Rubric


At the top of each domain you will find a CCSS anchor standard. Each criterion below it is a grade-level expectation. The scale for each criterion is taken word for word from the Smarter Balanced rubric.

How would it work?

Improving Writing Instruction

The entire rubric could be used as a summative assessment to give teachers classroom level or building level snapshots. I would NEVER use such an extensive rubric for formative assessment.

There are 13 criteria and four scale levels across five domains. That would be 52 individual boxes for a student to consider. In no way will that help them become better writers.

Instead, teachers could take a piece and, with the student, focus on a limited number of criteria. Possibly they would choose a specific domain. After reading the student's work, the teacher and student might choose one to three criteria as targeted areas of growth.

A Holistic Score not a Mathematical Equation

The teacher and the young writer are the ultimate arbiters of quality. Therefore I do not assign different point values to each scale and criterion. No complicated mathematical equations exist. Instead, the rubric relies on teacher expertise and evidence from the writing to assign an overall holistic score for each domain.

Assessment Needs to Drive Instruction

The domains and criteria in the rubric should be used to read mentor texts with purpose. Teachers should develop an annotation system that has students identify the qualities of strong writing.

Each student may have a different focus for improving their writing. Do not be afraid to have students work on only a small piece of the rubric at a time. In fact, I believe students will find this practice more rewarding.

Use school-wide or classroom-wide data from the entire rubric to identify gaps in knowledge growth. Take this information and tailor your mini-lessons to fit the need. Record mini-lessons using screencasts. Over time you will have a library of better writing practices.

Next Steps

Feel free to open the Google spreadsheet and use as much or little of the rubric as you desire. You can also contact me and we can develop ideas together to connect writing instruction and assessment.


You cannot support deep learning without evidence of knowledge growth. I am glad the Deep Learning MOOC uses this as a cornerstone belief. I am a writing teacher, but I am also a teacher of writing teachers. Therefore, modeling how to examine student work is a critical component of my class.

I not only want to improve writers, I want to create a community of writers and teachers ready to create their own communities of writers.

To accomplish this I use blogs and an RSS feed. Specifically, I use, and happily endorse, Feedly. It allows me to create a roll of my student blogs, displays new posts in a beautiful UI, and helps me track the feedback I give students.


By using an RSS reader I can look across student work and identify patterns. I can also give students feedback on how to improve their writing by visiting their websites and leaving comments.
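What Feedly automates can be pictured as a simple feed-aggregation loop: pull each student's blog feed into one stream so the whole class's writing is visible in one place. The sketch below is only illustrative; the feed content is a made-up inline example rather than a real fetched URL, and it uses only the standard library.

```python
# A minimal sketch of what an RSS reader automates: collecting each
# student's blog posts into one class-wide stream.
import xml.etree.ElementTree as ET

def post_titles(rss_xml):
    """Return the titles of every <item> in an RSS 2.0 feed."""
    channel = ET.fromstring(rss_xml).find("channel")
    return [item.findtext("title") for item in channel.findall("item")]

# One hypothetical student feed, inlined instead of fetched over HTTP.
sample_feed = """<rss version="2.0"><channel>
  <title>Student Blog</title>
  <item><title>Post 1: My Teaching Philosophy</title></item>
  <item><title>Post 2: Reflections on Mentor Texts</title></item>
</channel></rss>"""

# The class stream maps each student to their posts; scanning it is how
# patterns across student work become visible.
class_stream = {"student_blog": post_titles(sample_feed)}
print(class_stream["student_blog"])
```

In practice a reader like Feedly does this fetching, parsing, and de-duplicating for dozens of feeds at once, which is exactly why it replaces the milk crate.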


I try to couch my feedback within the social practices of effective blogging. Like all writing, there are no steadfast rules, but there are some general patterns to the text structure. For example, I noticed (these were our first two posts) that many bloggers just wrote one long paragraph.


I also use the blogs and the feedback to set individual writing goals for my bloggers. I am not a fan of writing rubrics, at least not for formative assessment. Giving students a rubric with seven criteria and four scales of quality does not do much to improve writing. Students cannot make sense of 28 squares of feedback. Teachers cannot use the data. This results in good writers scoring well while developing writers turn into struggling, frustrated writers.

Instead I would rather look across a student's feed, using Feedly, to identify the specific criteria in a summative rubric that will help that student as a writer.

One area I need to improve upon with my bloggers is peer feedback. I do not require (yet, and I am not sure if I should) students to comment, let alone offer feedback, on each other's posts.

Overall I find classroom blogs a great way to examine student work. Cafeteria staff can rest easy: Feedly and blogs have ended my era of milk crate grading. No longer do I need to lug home crates full of journals.

I also find Feedly to be a great tool. Their pro features improve every day, and new features will roll out soon that educators will love. I had the pleasure of chatting with Arthur Bodolec (@abrodo), Feedly's co-founder, about what is coming. I am not sure if I can share details, so I will just say the enhanced features will be a great addition for educators.

I know I am looking forward to using Feedly to help me and my writers examine student work.

I have gotten more than a few offers from districts to develop Close Reading rubrics. I refuse.

Close reading is a strategy that allows us to interpret a text based on a specific purpose. It is a method and not an outcome. Therefore I believe (my opinion alone) those trying to sell close reading rubrics might as well be selling snake oil.

You do not measure close reading. That would be like measuring a specific tweak to a golf swing. In the end you do not care about the frequency and fidelity of the method. You want more yardage. That is your evidence that the intervention worked.


The easiest way to look for evidence of close reading is to model and teach students to annotate text for different purposes. Teachers can easily quantify and measure the frequencies and types of annotation, and they can have students return to the same text and annotate it for a different purpose.
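Quantifying annotation frequency and type can be as simple as tallying the codes students attach to a text. The sketch below is a toy example; the codes ("CL" for claim, "EV" for evidence, "VO" for vocabulary) are hypothetical, not from any published annotation system.

```python
# A toy tally of one student's annotation codes on a single text,
# the kind of frequency-and-type count a teacher could track over time.
from collections import Counter

# Hypothetical codes: CL = claim, EV = evidence, VO = vocabulary.
annotations = ["CL", "EV", "EV", "VO", "CL", "EV"]

tally = Counter(annotations)
print(tally.most_common())  # which moves the reader made, and how often
```

Comparing these tallies across purposes, or across readings of the same text, is one concrete way to see whether a student's close reading is changing.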

Text Discussion

Another way to check for understanding through close reading is text-based talk. This is harder to assess than text annotation because multiple conversations occur at once in the classroom. As a teacher you are looking for evidence of text-based inferences. This is easier to assess in online forums, where rubrics could be developed. Basically, teachers need to look for evidence that students are returning to the text, using complex vocabulary, and responding to prompts about author's craft.

If you want to assess close reading beyond annotation you must have students create a product with the information they read.

Short Answer Responses

Short answer responses that focus on looking for evidence of the CCSS will work. Students who are better trained in close reading (text annotation on their part) and text-dependent questioning (by the teacher or small-group forums) should perform better. For this I would just use sample rubrics from the Smarter Balanced pilot items.

Debates, Argumentation, Informational Writing

Another way to measure whether students have integrated close reading into their reading is to examine their writing products. This is where the content portion of your rubric is critical to your success in argumentative writing. You want teachers to develop criteria so that students who are able to focus on key vocabulary, claims and evidence, and author's craft outperform students who do not.


These are just my thoughts. Close reading, like much of the CCSS, cannot be taught in isolation. It is not a product of learning; it is a process of reading that college and career ready students use. There are some close reading rubrics floating around the web. I wouldn't trust most of them. In fact, many of these rubrics just have students evaluating key ideas, author's craft, etc. within a writing assignment.

I think you are better served by:

  • Teaching teachers to model and assess text annotation.
  • Teaching teachers to model and offer guided practice in text-dependent questioning techniques.
  • Using online forums based on text-dependent prompts (rubrics could be developed for these).
  • Using building-wide writing rubrics so students who engage in close reading with sources outperform students who do not.
  • Encouraging teachers to develop performance assessments that will demonstrate evidence of close reading.

I know this may not be the answer you were looking for. It would have been easy to post links to some of the bad rubrics. Yet I believe rubrics are for measuring products, and close reading is a process that leads students to develop better products.


Like most lessons in life, the best ones come through serendipitous learning. I came to my use of recorded feedback, both audio and video, through a session I stumbled into at the Literacy Research Association. I had no plan to attend the session. I just had nowhere else to go, so I opened a door and sat down.

There, a colleague whose name I never got changed the way I fundamentally teach writing. The speaker sent students audio recordings using digital tapes. I immediately began to use screencasting software for Word documents. If I was working in Google Docs, I used a Chrome extension called VoiceComments.

It was "good change." My time assessing writing went down. Students commented that the recordings helped them become better writers. In essence, I enhanced my pedagogical tools through technology by making assessment more effective and efficient.

Kaizen. Japanese for "good change." It is also the philosophy behind 121Writing's rebranding, redesign, and relaunch of VoiceComments as Kaizena.

I was lucky enough to join a simultaneous #PATUE and TeacherCast.tv event when Kaizena launched. The Kaizena team immediately blew me away. Nothing but "good change."


Great Update

The update is full of good change:

  • A dashboard to track feedback.
  • Ability to notify different collaborators.
  • Ability to highlight text, record, tag, or leave a comment.


The Kaizena Dashboard

Pedagogical Uses

Kaizena's place in a GAFE classroom, or any classroom, should be front and center. I see immediate uses for Kaizena.com: teacher feedback, peer conferencing, and retrospective write-alouds.

Teacher Feedback

I am a big proponent of limiting the number of goals a writer works on at one time. Through conferencing the teacher and student should develop targeted areas of growth. Kaizena.com fits this method perfectly. As a teacher I can go through and discuss the piece with the student. I can evaluate how well they met their goals, discuss revisions, and plan for future drafts.

If I were doing a collaborative writing assignment, which every good writing instructor knows they should do often, I could now give feedback to individual authors and to the team as a whole. Good change.

Peer Conferencing

Kaizena will also be one of the most powerful tools for peer conferencing. Students could be given a rubric or be made aware of the author's targeted areas of growth. They then use Kaizena to provide feedback. As a writing teacher, this will help end the empty feedback loop common during peer conferencing. Now, since students have to highlight segments to record, their feedback must be directly connected to evidence. This is a skill my students often struggle with.

I will also have a digital archive of the feedback writers give each other. Modeling and teaching peer conferencing is critical and almost impossible in a room full of students. With all the work archived, I can go back and assess not just the product of writing but the process. Good change.

Retrospective Write-Alouds

I stress to my preservice teachers all the time that they need to make their thinking as writers evident to students. I also teach my preservice teachers that their students should conduct write-alouds as well.

Write-alouds do not have to be completed as we draft pieces. In fact, that can often take us away from the deep thinking required to stitch ideas together into coherent clauses. Instead, I often encourage students to do retrospective write-alouds once they have completed a piece.

Students are given a goal or choose one element of writing they want to highlight. They then use their piece to explain their design choices and their thinking as writers. Now, with Kaizena, I can have students record their thinking. Once again, their comments will have to be directly connected to evidence from their writing because of the highlighting tool. Good change.

I will be piloting these three methods with students in my writing intensive class this semester. I hope students will volunteer to share their work and I will share our progress here.

Once again a huge shout out to the Kaizena team.




A firestorm broke out when a study released by the Hewlett Foundation suggested that automated scoring systems can produce scores similar to (highly correlated with) those given by us human folk.

Based on the reactions posted in the #ncte and #engchat feeds, you would have thought Armageddon was upon us and Pearson had merged with the Cyberdyne Systems Corporation.

It is affirmative: we do not need to fear the robots. In fact, they can be our friends. (I will not go into the methodology and limitations of automated scoring systems, mainly because I cannot do a better job than Justin Reich did in his three-part treatise.)

Basically, the cries over the rise of the robots were misguided. They seemed to fall into two strands. The first was that the robots cannot recognize good literature. No one is asking them to do this. Basically, they are being asked to identify textual elements and patterns that replicate what their human trainers would do.

The second big fear was that the scoring systems could be gamed. Students could use long sentences and big words but write gibberish. This does not concern me in the least. If you show me a student who is creative enough, and has the ability to say nothing while stringing together a massive vocabulary and complex sentences, well, you are showing me a very talented writer.

Overall, automated systems will improve high-stakes testing (HST) because they can include the assessment of more complex and open-ended questions. However you feel about HST, moving away from bubbles has to be a good thing. Right?

High-stakes tests and accountability do not get at the practices used by good writers, nor do they engage students in connected learning. I think the robots, however, can also help on this front.

Assessing the Stream

I recently had the pleasure of sitting in on a #ConnectedLearning Google+ hangout panel with Paul Oh, of the National Writing Project; Ellen Middaugh, Associate Director of the Civic Engagement Research Group at Mills College; and Howard Rheingold. We were commenting on the wonderful work Antero Garcia is doing around agency and active involvement.

A question from the audience asked how we bring principles of connected and participatory learning into classrooms so focused on student achievement. While these two outcomes do seem dichotomously opposed, they do not have to be. And the robots can help.

The digitization of literacy creates a lot of data. Achievement folks love data. They salivate for it. Teachers can use this as a hook to demonstrate that participatory learning can lead to gains when you assess what Dan Hickey calls the residue of learning.

Basically, as Justin Reich pointed out in his third post, automated scoring systems can provide wonderful formative assessment data. This involves assessing growth over time and looking for gains in the process and social practices of writing rather than in a final product.

Imagine if an automated scoring system could look at drafts of an essay and analyze the amount of sourced material (already possible). You could take this further: what if blogs could be analyzed for a clear main idea, media, and supporting evidence? Analyzing the stream would also allow you to look at discourse patterns in online discussion.
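One crude version of the sourced-material check is easy to sketch: measure what share of a draft's words sit inside quotation marks. Real automated scorers are far more sophisticated (they detect paraphrase, attribution, and citation patterns); the function below is only a toy to show the kind of signal a stream of drafts could yield over time.

```python
# An illustrative "robot" check: estimate how much of a draft is
# directly quoted source material.
import re

def sourced_ratio(draft):
    """Fraction of the draft's words that sit inside double quotes."""
    quoted = re.findall(r'"([^"]*)"', draft)
    quoted_words = sum(len(q.split()) for q in quoted)
    total_words = len(draft.replace('"', ' ').split())
    return quoted_words / total_words if total_words else 0.0

draft = 'Smith argues that "community precedes competency" in badging systems.'
print(round(sourced_ratio(draft), 2))  # prints 0.33
```

Run across successive drafts, even a crude ratio like this shows whether a writer is integrating more evidence over time, which is the growth-over-time signal formative assessment needs.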

All of this can be used to inform your practice, the essence of formative assessment. The robots just make it quicker, which addresses the biggest challenge of most formative assessment.

Replacing the Teacher

Does this mean the teacher isn't necessary? Of course not. No one said that. The humans are not all gone. You will still conference with writers and set individual goals. That is the heart of what it means to work with young writers. No robot, not even a T-800, could possibly complete such a feat.

The robots, when trained, can just find elements in a text that we want students to use. I do not think this is a bad thing.