Friday, July 6, 2012

What the Numbers are Telling Us

We’ve been rolling things out in a big way and in my opinion, it would be a huge fail if we didn’t use this momentum, attention and summer code partyin’ to learn something about the stuff we’ve built - specifically, for me and my team, about the learning projects. Enter The Survey. We have a survey that is available upon completion of a Thimble project* that asks some pretty basic questions about level of previous experience, fun, learning and if/where people got stuck. It’s not meant to be the most robust thing ever, but instead to do some temperature gauging during this first wave of users/learners.

*Note: we have another version of the survey (a duplicate, except for the word “Thimble”) for the DIY projects, which are the projects that point people out to different sites or tools. But so far, there are very few responses there (~8), so this analysis is focused on the Thimble projects.

We have almost 150 responses so far, which is way lower than the estimates of folks using Thimble so far (more in the thousands), but not a bad response rate given that the survey link is a little awkwardly presented - right under the copy-your-link-to-your-finished-project field in the ‘Publish’ flow. 

Here are some highlights on what the numbers are telling us so far…


1) We’ve gone global

People are doing projects all over the globe. From the event registrations, we knew that we had events in 67 countries across the summer, but it’s pretty cool to see this much activity in just a couple of weeks.

2) Lots of existing experience.

We have a pretty even distribution of webmaking experience so far, with a slight advantage to those with more experience. This is a bit surprising since the Thimble projects are targeted more at the entry to intermediate levels, but it’s likely that some of this is due to the fact that we also just launched Thimble and have a bunch of people exploring it just to check it out. I’d also love to believe that there are mentor/facilitator/instructor types of people checking it out for the purposes of using it to teach other people these skills, but that question didn’t make this round of the survey (hindsight!!), so we’ll get those numbers in version 2.0. See below for a cross tab analysis of how this factor influenced other ratings.

3) Fun!

A whopping 74.8% thought that the project was fun, and of those, 39.4% said it was super fun. We are aiming for personally engaging, interest-based experiences, so the sense of fun is an important piece.

4) There was perceived learning.

63% reported some learning, with 25% reporting that they learned a lot. This is, of course, self-reported learning, not hard-core assessed learning* - but at this stage, again, to gauge the temperature of people’s experiences, I would say this is a pretty solid number, especially given the fact that over 50% came into the project with some or a lot of webmaking experience.

*Note: On the hard-core assessed learning: 1) we are building in more assessments that will use the work as evidence to validate that certain skills are demonstrated; but 2) all that said, again, ultimately we are after interest-based webmaking with some learning that happens in the process, so if people are engaged and able to make the things that they want to make, then I would call that a success without all the pre-post data hubbub…but we’ll do some of the latter as well next year.

5) People reported getting stuck.

One of our core design principles is to design for graceful failing - or, said a more direct way, don’t let people fail. By the numbers, it would appear we aren’t there yet, since 50% of people reported getting stuck. However, looking through the explanations, while there were a few who were overwhelmed by the code, it seems like most people were able to work through their stuck point:

"The </body> and </html> codes were missing and the webpage was showing errors and did not want to complete the project successfully. I figured it out at the end."

"I didn’t put an end bracket in the right place for the hyperlink.  Which is good, because it meant that I had to go back and figure out why it wasn’t working."

"In the beginning.  I had to read more carefully than just skim read."

That’s actually pretty promising then, because the projects were challenging but the learner had enough to solve the problem. It shows trial and error and tinkering, which are also really important aspects of our learning philosophy.

There was a trend of ‘stuck’ responses about the publish feature, which we need to investigate some more. Right now, when you click “Publish”, you get a URL which you can copy and share through Twitter, Fb, email, whatever, but that doesn’t seem to be resonating with everyone. Responses included “I don’t think the publish worked right for me” and “Copy and paste WHERE??”. Apparently some people don’t understand that flow, so we’ll look into it more.

6) Cross Tab == Cool

Of the people with no webmaking experience:

  • 70.9% reported having at least some fun, with 41.9% reporting a lot of fun.
  • 66.6% reported at least some learning, with 43.3% reporting learning a lot.
  • Only 10% (3) reported not learning anything at all.
  • 58.1% reported getting stuck.

Of the people with some webmaking experience:

  • 86% reported having at least some fun, with 43% reporting a lot of fun.
  • 68% reported at least some learning, with 15% reporting that they learned a lot. 
  • Notably, 0% reported not learning anything at all.
  • 36% reported getting stuck.

Of the people who came in with a lot of webmaking experience:

  • 34% thought it was super fun, and 65.8% reported that it was at least kind of fun.
  • 48.7% reported some learning with 23.1% reporting that they learned a lot.
  • 35% reported that they didn’t learn anything at all.
  • 56% reported getting stuck.
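The cross-tab idea above is simple enough to sketch in a few lines of plain Python: bucket the responses by prior experience, then compute the share of each answer within each bucket. The field names and response values here are hypothetical - the real survey data isn’t shown in this post.

```python
# Toy cross-tab: percentage of each answer within each experience bucket.
# The rows below are made-up stand-ins for the real survey responses.
from collections import Counter, defaultdict

responses = [
    {"experience": "none",  "fun": "super"},
    {"experience": "none",  "fun": "kind of"},
    {"experience": "some",  "fun": "super"},
    {"experience": "some",  "fun": "super"},
    {"experience": "a lot", "fun": "kind of"},
]

def cross_tab(rows, row_key, col_key):
    """Percentage of each col_key answer within each row_key bucket."""
    counts = defaultdict(Counter)
    for row in rows:
        counts[row[row_key]][row[col_key]] += 1
    return {
        bucket: {answer: 100.0 * n / sum(c.values()) for answer, n in c.items()}
        for bucket, c in counts.items()
    }

table = cross_tab(responses, "experience", "fun")
# e.g. table["none"]["super"] is 50.0 and table["some"]["super"] is 100.0
```

With real data you’d likely reach for a spreadsheet or pandas’ `crosstab`, but the percentages-within-bucket logic is the same.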

7) Some great suggestions

How to make it more fun:

"More animal parts."

This was the number one request. The Zoo was definitely one of the most popular projects and people wanted more options to create more animals. I’ll take that as a notch on the side of ‘this was fun and engaging’. It also points to needing more projects with rich content and topics like this one.

"When a project asks you to replace one image with another, it would be helpful if it supplied some links to alternate images."

This is an interesting suggestion. On the one hand, it’s a great idea and something that’s relatively easy to do, but on the other hand, again, part of the learning experience, and of web literacy as a whole, is being able to use the Web to find things that are interesting to you. I think what we definitely can do is provide better instructions on places to find images, how to properly attribute them, etc.

"More cat integration."

Well, duh.

How to foster more learning:

"Maybe a toolset of html elements and drag and drop as well for different html elements."

There were a handful of comments around this same topic of making it more WYSIWYG-like or drag-and-drop. We intentionally built it this way, so that it exposes the real code and learners/makers have to get their hands dirty in the code to make something. I think the potential for learning relevant webmaking skills is greater working with the real code…that said, we have to balance that with ensuring that the barrier is not too high. We are planning on doing some experiments with different approaches and abstractions with our DIY (read: non-Thimble) projects later this summer.

"Maybe give the user an end result to aim for.  Can you make your document look like this [screen grab]  or create a page with a story or something and they have to create the next page."  

This is a good suggestion and we’ll consider it moving forward.

"The explanations when you click "read more" are WAY above a beginning coder’s head."

This is great insight - we’re pulling the hints from MDN, which is awesome, but typically targeted at web developers. So we’ve started a “simple MDN” to start writing more basic descriptions. This is a community project - jump in and help out!


Big takeaways:

  • We’re on the right path - the projects (+ Thimble), and thus this approach to learning, are providing engaging and fulfilling experiences for a majority of people.
  • Even though some of the feedback pointed to Thimble and the projects being too advanced for beginners, the numbers show that they reported the same amount of learning as those with some experience. The beginners reported it being a little less fun and got stuck a little more, but 43% reported learning a lot. So even though the projects might feel overwhelming because of being dropped into the code, it seems like that was actually beneficial for people in terms of learning. We’ve already started working on making Thimble even more accessible to entry-level folks by integrating the X-Ray Goggles functionality (you can now click on the right-side preview pane and see things highlighted in the code on the left) and creating some more starter projects that focus on one or two elements at a time. It’s also worth noting, though, that a lot of people with some and a lot of webmaking experience also reported learning and fun. I would guess that number would go up even more if (when) we have templates that are more targeted to various audiences and skill sets. So we’re on to something here.
  • We need more projects that provide more interest-based access points for people - things like the Zoo project with rich content and compelling topics. We really want the community’s help with this so if you have an idea for a Thimble project, tell us about it.
  • We need more help and hints baked into the projects so that the barrier to entry is lower and fewer people get stuck.
  • We need to revisit the Publish flow to make that work for more people and (bonus!) use it as a teaching moment for those not used to sharing things on the Web.
  • We need more cats.


Monday, July 25, 2011

Badge Working Group

Last week, I trekked up to NYC for a two day meeting with the badge working group. What is the BWG, you might be asking - it’s a pretty frickin’ cool group of folks exploring badges and innovative assessments for learning.

This was actually the second meeting of the BWG, which is funded by a MacArthur working grant. We first assembled the group from people we had met through various conferences, festivals, meetings and conversations we’d been part of for the previous year and met in Brooklyn, NY at the end of January 2011. These meetings are run “Gunner-style” (a la Allen Gunn of Aspiration, the best meeting facilitator you will ever encounter), meaning that they are unconference-y, participatory and interactive, and the agendas are driven mostly by the interests and issues of the participants themselves. We do a fun ‘post-it note party’, as I like to call it, where all the participants write down topics, questions or issues on post-it notes and we combine them all and somehow always manage to let order emerge from the chaos. The post-its are arranged into common themes and those themes then become topics of breakout sessions moving forward.

During the January meeting, these topics ranged from abstract to foundational. Questions like “what do we mean by badges”, “what might a badge system look like for my program”, “how will people distinguish badges”, etc. It was a great meeting for getting people on the same page, airing concerns and planting the seeds for potential projects.

This time around, 6 months later, we were hoping to be able to get more concrete. Our expectations were exceeded by far. We had allotted 1 hour to do short presentations about existing badge projects and ended up spending about 6 hours on them. There were not only more projects than we were aware of, but those projects were far enough along to warrant fairly detailed presentations. On top of that, the participants were so engaged in the presentations that we often had to cut off discussion to make time for others. It was incredible how far people have come in such a short time. We have gone from conceptual conjecturing to solution developing in just a few months. There are some amazing, game-changing things in the works and I am so honored to be working with these people on these important problems. 

If you are interested in joining the BWG moving forward, let me know.


Oh yeah, here is the short report back I drafted for the powers that be. 

Meeting Details:

  • July 18th and 19th at the Social Science Research Council in Brooklyn, NY (super kudos to NYCLN for organizing everything for us and being awesome hosts)
  • Planning etherpad:
  • This is the 2nd meeting of the BWG, the first was in late January 2011 (initial planning etherpad:
  • These meetings were funded by a MacArthur working group grant


For this second meeting, our goals were to dive into much more concrete discussions and call-to-actions, including:

  • Catch up on progress people have made since last meeting.
    • Report back from pilots that have run - SoW, Quinnipiac
    • Update on current projects in the works/currently running - Q2L, MOUSE, Global Kids, etc.
  • Discussion of key research questions and plans
  • Facilitate partnerships, feedback or assistance on badge pilots or ideas that are in the works or planning phases
  • Update on the open badge infrastructure (OBI)
    • OBI Requirements gathering sprint


Participants:

  • Participants of the BWG have been identified across the various conversations, meetings and conferences we have had with people, starting with a meeting on Open Assessment in September 2010 in Palo Alto, through the Drumbeat Festival and other meetings/conference calls that we have had in the first half of 2011.
  • There were 16 participants in NYC, which worked out to about half returnees from the previous meeting and half new faces. There were also more folks who could not make it to NYC but are participating through the etherpads and mailing list. In total, the working group now consists of approx. 30 people.
  • Participants included game designers, educators, academics, researchers, open ed folks, youth developers and programmers/web developers and spanned formal-informal, product/implementation-research, K12-adult, even open-closed, etc. This added a depth of perspectives and insights that is typically difficult to achieve and really added value to the discussions and breakout sessions.


Format:

  • These meetings were held in the “Gunner-style”, meaning that most of the agenda was determined by the attendees, based on key questions or topics that were important or relevant to them. We did a post-it note exercise early on the first day to identify these topics and plan out breakout sessions.
  • We also planned for time for people to present their projects, but actually needed much more time for this than anticipated (both b/c there were more projects than we knew about and also because people were eager to discuss each project at length).
  • The agenda can be found here:


Key Outcomes:

  • The goal of having this meeting be more concrete and focused on actual implementation specifics was met and exceeded by far. It was incredible to see the progress people have made, both in their own thinking and understanding of the badges work, and in their own implementations and planning. As previously mentioned, we initially slotted an hour for mini-presentations of projects, but ended up spending about 5-6 hours total on this. This was because more people had things to present than anticipated, but also because the group was very interested in discussing each at length, which was also exciting. There were many points of collaboration that came out of these presentations as well.
  • We had fewer breakout sessions than originally anticipated, mostly because of the interest in exploring the specific examples and collaborations. Those breakout sessions we did have were incredibly robust - extending well past the allotted hour with deep dive, energized discussions from all participants. Participants are still adding notes, but many can be found here:
  • Many of the topics and questions came back to the specifics of the open badge infrastructure since that is the core underlying technology to support everyone’s efforts. We were able to get some very helpful requirements gathering done, which also included defining what the infrastructure should not do (i.e. push the innovation out to the edges to the issuers/displayers), which was also very important to work through.

Next Steps:

  • There was a great deal of interest in continuing this working group in any way we can. Our grant is up at the end of August, so we are exploring ways to leverage social media and other channels to continue discussions, share resources, etc. If we do find additional funding, we would hold another meeting in early 2012 to review all the badge systems that will most likely have existed (and been plugged into the OBI) for a few months by that point, and to build research agendas for moving forward.

Tuesday, June 7, 2011

New Badges and Assessments (help needed)

We are currently in the midst of planning for the second phase of the School of Webcraft assessment and badge pilot and one of the key elements of this phase is the addition of more skill badges (and associated assessments). 

As I have previously detailed, last February we launched a pilot with 14 assessments and badges including skill badges (JavaScript Basic/Expert, PHP Basic/Expert, Open Source Contributor), value badges (Accessibility Foundations/Expert), peer/community badges (Team Player, Peer Mentor, Good Communicator, Community Builder, etc.) and some P2PU-specific badges (P2PU Veteran, Course Organizer).

In this round, we are building out the skill badges significantly, adding at least 12 new skill badges to the mix:

  1. HTML Basic
  2. HTML Expert
  3. CSS Basic
  4. CSS Expert
  5. Python Basic
  6. Python Expert
  7. jQuery Basic
  8. jQuery Expert
  9. HTML5
  10. CSS3
  11. Popcorn.js Demo
  12. Popcorn.js Plug-In

The important part is to make sure that the assessments behind the badges are appropriate and effective at demonstrating the right skill. We don’t want something too easy or too hard, but that’s tough to tell since we aren’t experts in most (or all) of this. So we need help.

Here is an outline of the current thinking on the assessments. Please give us your feedback. 

3 key areas to focus your attention and feedback:

  1. Filling in the blanks
  2. Reviewing existing assessments
  3. Reviewing rubrics 


You will see that there are two that are still pretty blank:

  • Python Basic 
  • jQuery Basic

What are some challenges, exercises or projects that demonstrate basic understanding of Python or jQuery? All ideas and resources are welcome and appreciated.


For the other badges, do these challenges/exercises make sense? Are they the right level? Do they sufficiently demonstrate the skill?


For the rubrics, are these the right things to be looking for? What else should be in there to ensure quality work and skill? I am SURE that we have missed things since, again, we are not experts in these technologies/approaches.

Please have a look and give us your feedback either as a comment here or within the etherpad. As with everything we do, we are moving at warp speed but we want to make sure that we get this right so thanks in advance for all of your help! We are happy to give you acknowledgement for your contributions on the assessments and of course, there will be a badge for those who suggest ideas/feedback that gets incorporated. :)


Monday, May 23, 2011

Badge Pilot - Phase 1 - Evaluation

I have several posts that I have been meaning to do over the last few weeks but there has been so much going on that I have been remiss. So expect a flurry of posts (or a few at least) from me in the next few days.

But to kick things off - we have completed the first phase of the P2PU and Mozilla School of Webcraft Assessment and Badge Pilot. It’s a mouthful and rightly so, since it was full of a lot of very cool stuff. These previous posts here and here give some background on the pilot but to quickly summarize, the pilot consisted of new assessments and badges for skills, values, community interaction and participation in the School of Webcraft. These badges are meant to be an alternative pathway to accreditation and credentialing that SoW community members can earn to demonstrate skills and then share with stakeholders like peers, formal institutions or potential employers to network, progress careers and/or find jobs. 

This initial phase of the pilot included 14 pilot badges (ones designed by us and aligned with specific skills, values and community behaviors relevant to web development) and a bunch of participation badges that came with the core system we were using for the dedicated badge environment, OSQA. The latter were meant to encourage and guide participation in the site as a question and answer forum. Since we were not using it as a true Q&A system, but instead simply leveraging the functionality to support the assessments and badge issuing, many of the OSQA badges were not relevant or achievable by users but some were, such as First Responder, Popular Answer, Editor, etc.

The full evaluation report is available here, but for those that don’t want to read a (titillating) 17 page report, here are some highlights below:

Goals of the Pilot:

  • Build proof of concept for a badge system for web development training 
  • Create and roll out initial taxonomy for types of badges 
  • Develop and roll out assessments that fit the peer and interest-driven learning environment
  • Get initial feedback and reactions from the community
  • Learn as much as possible that can be applied to later versions of the pilot or integrated solution
  • Prototype and pilot the open badge infrastructure

Key Findings


  • Overall: Participation was lower than expected, with only 52 registered users (in the dedicated badge environment) and, of those, 21 active users (earned a badge, assessed work, etc.). We feel there are a couple of reasons for this low participation: 1) communication and 2) lack of integration.
    • Communication: From a communication perspective, this pilot was intentionally tightly controlled, mostly because we wanted to make sure that course organizers were prepared and that we had assessments closely aligned with relevant courses to encourage more active participation and assessing. But this meant we only touched a small portion of the wider Webcraft audience and did so through course organizers who rightly passed the message along (if at all) on their own schedule, so traffic and attention was intermittent at best. We intend to communicate to participants more directly moving forward so that we can ensure that they are fully aware and have all of the information (including why these badges are worth their time). 
    • Integration: On the integration side, as mentioned before, we used an OSQA system that is separate from the P2PU platform and thus required learners to log into a separate site (we built it so that they could use their P2PU account to reduce this issue but it was still a separate action they had to actively take). We plan to integrate the assessments more directly into the learning environment and experience moving forward to make it more seamless.


  • Overall: Feedback on the assessments was very positive and it seems like we are on the right track with authentic, relevant challenge-based assessments.
  • Types: Of the different types of assessments, we really only saw examples of peer assessment. These were encouraging, with examples of constructive feedback and reworking of submitted work, as well as learners discussing how much they learned from the process of assessing peer work - but there was some struggle with ensuring that there were peers available to assess submitted work. That incentive structure is still a gray area for us: we need to figure out how to attract quality people with the right skills to assess submitted work across the system. We will be exploring this more moving forward. We did not have any submissions for expert-level badges (see below), so we did not see any guru assessment, but hope to in subsequent rounds. There was some stealth assessment in the OSQA participation badges, but none of these were directly tied into the learning.


  • Overall: The main feedback was that people wanted more badges to cover more skills which we totally expected and plan to build out further as we move along. 
  • Types: There was a good overall response to the types of badges we had and people felt it was important to have a mix of hard skills and soft skill badges, which we also know are important to badge consumers like potential employers, so we will continue down this path. 
  • Levels: There were no submissions for the expert badges which makes some sense given that all of the courses were entry level with some pushing into intermediate for some skills. We do feel the expert level badges are important to have as a goal or benchmark for people to work towards, but we will need some more advanced courses and active advanced community members before we will get more traction on the expert badges. 


  • Prototype: We were planning to run the first phase of the pilot with a prototype of the open badge infrastructure (OBI) that would allow us to port the badges from OSQA into the infrastructure, and then display them on other sites including the P2PU profile. But due to development cycles on both the OBI and P2PU platform, we decided to push this to the end of the second cycle, which will be in late June. 


Overall, the initial phase of the badge pilot was a positive step in the right direction for our assessment and badge work. We had initially planned on starting with 2 badges and ended up with 14 badges which allowed us to explore more types of assessments and badges in this phase. While participation was low, we learned a lot that we will apply to the next rounds in terms of communication and outreach, and have identified areas that need dedicated focus like driving more peer assessors to be actively involved.

Revisiting our goals, we met most of them by building and launching a quality proof of concept badge system, which included a basic taxonomy for badge types and various assessments approaches built around peer learning. We got some great feedback and interest from the community, as well as other stakeholders, and have some solid direction around future versions of our efforts. The only goal that we were not able to meet was the prototype of the badge infrastructure, which again, was pushed because of delayed contingencies on the development sides, but is targeted to roll into the second phase of the pilot. This will allow us to port the badges out of the OSQA environment and into the P2PU profile to give learners more control over sharing and using the badges in other contexts.

Overall, we feel that we produced a good proof-of-concept to build off of moving forward, and initial responses and observations indicate that it is important and valuable to continue to move in this direction.

Phase 2

We are rolling all of the stuff that we learned from this pilot into the second phase of the pilot, which will launch in early to mid-June and run through July 2011. Look for another blog post shortly detailing the plans for that phase of the pilot.

Over and out,

Wednesday, December 29, 2010
“btw, every study of peer review among students shows that students perform at a higher level, and with more care, when they know they are being evaluated by their peers than when they know only the teacher and the TA will be grading” - Cathy Davidson

Assessment Revisited (#2)

Building off the last post, badges are nothing more than .png files unless they are backed by some assessment and value.  I have been working on defining what assessment looks like in these peer learning, open education environments and it has really been a mind-blowing journey so far. When I first started trying to grasp the task at hand, I realized very quickly that ‘assessment’ means a lot of different things - it can be the thing that you do to prove that you have learned something (like taking the exam), the design of that thing (question type/writing), the delivery of that thing (paper or online, ‘assessment engines’), the act of comparing the work/answers to some rubric (grading the exam), or the end product itself (the grade).  So needless to say, there are a lot of moving parts to think about when approaching the concept of assessment in general.  But then when thinking about it for these participatory, peer learning environments, there is much further to go.

These environments are intentionally atypical, and with that comes benefits and limitations (in general, but that’s another post, for this one +/- for assessment):

They are open and accessible to anyone with network access.  

What this means for assessment: There will be more people across many different levels and proficiencies that view and/or participate in these courses. The assessments should provide options for these levels and help learners build on their existing skills and develop new ones.  Further, because these courses are open, there is the likelihood that people will float in and out, and assessments should allow them to do so and ‘check’ their knowledge without forcing them to complete the course (if the topic or skills are redundant with their existing capacities). At the same time, assessments should provide milestones to motivate learners to stay engaged in the course as well.

They are decentralized, meaning that there are not “core” courses or particular paths/sets of courses that people are forced to take.

What this means for assessment: The concept of prescribed degrees does not work here because learners will have unique learning paths across various courses, and even various websites or platforms. Further, the set of courses is not predefined and there may be overlaps, meaning different learners may learn the same skill in different places in different ways.  So the assessments need to be granular enough to capture the learning wherever it occurs, and flexible enough to allow learners to demonstrate the skill in contextual and relevant ways.  Assessments should also be relevant outside of the assessment context itself, and allow people to submit existing work or challenge them to create something meaningful to them to demonstrate competency.

They are peer-driven, and the person organizing the course is not necessarily an expert, but simply a guide or facilitator.  Their main goals are to foster a community of learning and provide some scaffolding to guide that community through collaborative learning of a particular topic.  Therefore, there is no authority figure or typical concept of an instructor.

What this means for assessment: Short answer, grades won’t work. The simple reason grades ‘work’* in formal environments is that we are preconditioned to expect/accept the instructor-student relationship. The instructor is the expert that pushes information on us and gives us top-down ratings of our work and learning**.  But that doesn’t work here.  There are no authority figures - peers are learning from each other and from the interactions and activities. So the assessments need to reflect those relationships and should capitalize on peer assessment as much as possible.  Also, the output of the assessment should be more than a flat grade or mark - it should be focused around feedback and guidance.  And because these are not expert-driven environments, the assessments need to build in or account for trial-and-error types of approaches.  Learners should be able to learn from the assessment and refine their work if they have not met the requirements, etc.

They depend on community development and engagement to be successful. 

What this means for assessment: Again, peer assessment should be incorporated as much as possible.  But we should think about skills and behaviors that support community and build those into the assessment scheme as well.  Perhaps there are lightweight ‘assessments’ based on interactions with peers, or automatic assessments and feedback/awards based on behavior within the online learning environment.

I am sure there is more.  And you may have noticed that I have intentionally kept badges out of the conversation here.  That’s because badges and assessments are different things.  The badge is the signal of a skill or competency, and the assessment is the way to demonstrate/validate those skills.  In our model, each assessment will be tied to a badge, and in some cases multiple assessments will be tied to a single badge, giving people flexibility in how they demonstrate the skill and earn the badge.

So in summary, for our pilot, the key assessment considerations are:

  • Incorporate peer assessment as much as possible
  • Provide levels of assessments/badges to meet various needs, as well as help motivate people to build skills or continue participating in courses
  • Provide multiple assessment options or paths to the badge
  • Assessments should be relevant outside of the learning context - and should allow for submission of existing work, new, interesting and relevant work, and/or peer recommendations or nominations.
  • Learners should be able to seek out assessments on their own - nothing forced (although there may be cases for automatically assessed and issued badges to promote community behaviors)
  • The badge should link back to the work submitted for the assessment, and any feedback or endorsements from the assessors.
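To make the considerations above a bit more concrete, here is a minimal sketch of how a badge record might tie the pieces together: multiple assessment paths to a single badge, links back to the submitted work, and feedback from assessors. This is purely illustrative - all of the class and field names here are my own assumptions, not our actual data model or infrastructure.

```python
# Hypothetical sketch of a badge data model (illustrative only; all names
# are assumptions, not the real pilot infrastructure).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Assessment:
    kind: str              # e.g. "project-submission", "peer-review", "nomination"
    evidence_url: str      # link back to the work submitted for assessment
    feedback: List[str] = field(default_factory=list)  # assessor comments
    passed: bool = False

@dataclass
class Badge:
    name: str
    criteria: str
    # multiple assessment options/paths can be tied to the same badge
    assessments: List[Assessment] = field(default_factory=list)

    def earned(self) -> bool:
        # any one successfully completed assessment path earns the badge
        return any(a.passed for a in self.assessments)

# Example: one badge, two possible paths; the learner succeeds via peer review.
badge = Badge(
    name="HTML Basics",
    criteria="Demonstrate core HTML skills",
    assessments=[
        Assessment("project-submission", "http://example.org/my-page"),
        Assessment("peer-review", "http://example.org/my-page",
                   feedback=["Clean markup!"], passed=True),
    ],
)
print(badge.earned())  # → True
```

The key design point is that the badge holds references to its assessments, so anyone viewing the badge can follow the trail back to the actual work and the peer feedback rather than seeing a flat grade.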

I will share the plan over the next couple of weeks as we forge forward.


*I actually started this post with a diatribe against grades and traditional forms of assessment but so many others have expressed it so much better.  I particularly love Cathy Davidson’s (of HASTAC) thoughts on the limitations and obsolescence of grades:

**I have definitely drunk the student-centered kool-aid. From the existing literature and research (not cited here, but I can definitely provide it), we know that students learn more when they can construct their own understanding of ideas and connect them to their own lives.  We know that people learn MORE when they can collaborate and interact.  We know that students are more engaged when they have more control within the learning environment. We know that deeper understanding comes from trying out various strategies, getting things wrong, revising, etc.  It’s not enough to have someone push information on us; we need room and flexibility to mash up that information, get our hands dirty, connect it to something that we care about, hear the interpretations of our peers, etc.  I have written and spoken a lot about this to date and I am sure it will make it into the blog over time. But this is one of the reasons I love P2PU and other social learning efforts that recognize and embrace this shift to student-centered, participatory learning.  It’s the future, man.

Monday, December 27, 2010

'Certification' Revisited (#1)

I am currently working with Peer-2-Peer University (P2PU) and Mozilla Drumbeat to integrate assessment and badges into the open and peer learning environments on P2PU, specifically the School of Webcraft. We’ve been doing a lot of thinking about this and I am finally getting around to capturing my thoughts here.  I should get a badge.

What are badges? 

Come on, you’ve seen them before.  Boy Scouts. World of Warcraft. Foursquare.  I do something, demonstrate some skill, defeat some monster, show up in some location, meet some predefined criteria or assessment…and I get a badge.  If I know about the badge, I might be motivated to do the necessary behaviors or meet the requirements to get the badge, or if the badge is a surprise, I might be motivated to keep exploring or trying out various things to earn or unlock more badges. Once I have the badge, I can display it so that others can see it and thus demonstrate my skills or achievements.

There are many crossovers here with learning - motivation, feedback, exploration, achievement.  

Why do we need badges?

Well, we need something.  Is it badges?  Maybe, maybe not.  But there is no question that we need an alternative form of assessment and certification (although I hate that word…it conjures up images of big, mean Microsoft gorillas). Here are a few reasons why we need a change:

  • In the current system, the institutions (schools, universities, etc.) have all the control. They decide what types of learning are “official” and what “counts”.  But most learning doesn’t happen within those confines and constraints, and there are lots of examples of people learning outside of the system: open education courses and materials, afterschool programs, peer discussions, books, Wikipedia, the Web in general, LIFE…learning happens everywhere.  But it only counts if it happens through an institution.  Why? Why shouldn’t the learner have control?
  • Current models of assessment (grades, rankings, etc.) don’t work well for many kinds of learning - in fact, many argue that they don’t work well for most learning.  In peer learning environments, grades and rankings do not encourage participation and information sharing, and in fact can constrain the interaction and learning.  In informal learning environments, these models make it feel like school, squashing the inherent value and engagement.  In many open education environments, there is often no dedicated instructor or authority figure to issue a top-down grade. And so on.
  • There are so many important skills and competencies, some age-old and some new(ish) in today’s world, that are not currently captured or acknowledged. Things like the often referenced 21st Century Skills, or New Media Literacies, which cover everything from information organization and evaluation, to negotiation and trial-and-error prototyping. Or the “soft” skills like critical thinking and teamwork.  None of these skills are captured in my credits, grades or degree.  And yet, these skills are critical to most careers and are often some of the key things that employers are looking for. As a learner, it is difficult, or impossible, to know to seek out or hone these types of skills because they aren’t acknowledged or encouraged…and yet they will be glaringly apparent the first time I flub up in a critical situation that involves one or more of these competencies. When I am applying for a job, my resume and education history tell potential employers nothing about my full set of skills or whether I have any of these other competencies. And when I am looking to hire someone, I have to come up with clever questions to try to get a complete picture of that person (above and beyond the resume and education history, which everyone knows are limited resources) in 30 minutes.


What if there were badges for various skills that you could collect across learning experiences, carry with you and then share out to various audiences as needed?  You may earn badges that represent more traditionally recognized behaviors or skills like completing a course or mastering a mathematical model, but you could also earn badges for softer skills like critical thinking, teamwork and information analysis.  You could earn badges from authorities, like Mozilla, from course organizers where appropriate, from peers or even from yourself.  The badges would be associated with assessments that, once successfully completed, earn you the badge.  There might be multiple assessment paths to a single badge, giving you the flexibility to have a unique and personalized learning path.  But you could also look at the badges of other people to discover things to learn or try for…or what skills to develop or hone for particular disciplines or jobs.  You could even (possibly) carry the badges back to the institutions with you to get credit or help them cater that experience to your interests and needs.

So that’s what we are currently exploring.  Of course, there are many unanswered questions, some of which I am sure are springing to mind as you read this.  Questions like:

  • What skills should we assess? Are there skills that are better left unassessed?  What do we want to encourage?  How do we avoid encouraging the “wrong” behavior?
  • Who gets to decide which skills to assess? How much influence should outside stakeholders, such as employers, have on badges?  Should they be able to design assessments and badges that are relevant to them?  How can we let them have a say without creating an imbalance in the system or constraining the learning?
  • How granular should badges be? For example, our HTML5.0 badge is at the level of mastery of the entire language, but would we want HTML tag-level badges?  What granularity is the right level?  Do badges aggregate into larger or higher level badges?
  • Should badges expire?  How do we deal with skills that need to be refreshed or renewed?  How can the badge system grow with learners?
  • How does the introduction of badges affect learner motivations?  If learners were initially intrinsically motivated, how do we avoid “crowding out” those motivations with an extrinsic badge system?
  • How will people game the system?  How much will they do so? How can we discourage gaming or recognize when it happens?
  • Will these badges translate to formal learning environments? And if so, how?  What would be required to make schools or institutions value or accept badges?  Can we meet those requirements without changing the nature of the learning environments?

There are a lot of questions and a lot of unknowns, but we need a change…we need to give the learners the control.  So this is one way we are hoping to accomplish that.  We are building a badge/assessment pilot in the January session of the School of Webcraft, which is a subset of P2PU courses focused around web development and endorsed by Mozilla.  We are hoping to have a core set of badges and assessments, as well as the initial infrastructure to support the issuing, collection and displaying of badges over the next month (or less).  We plan to learn a lot and start to answer the questions above.  But we can’t possibly answer all of these questions alone.  We hope to encourage more interest in badges and these new approaches, get more people researching them and issuing them (within the same open infrastructure ideally) and figure this out together.

I’ll keep you updated as much as possible here.  So buckle up!  Next up, thoughts on assessment and the open badge infrastructure…