Some years ago, my colleague at PSU, Dr. Robyn Parker, developed a protocol to facilitate collaboration among student teams. She called it Rate Your Mate. In essence, the protocol was founded on two ideas: first, that teams who negotiated common goals, then described the observable behaviors that would support those goals, would “team better” than groups that did not establish a team agreement. And second, that individuals would “team better” if they believed they had an effective way to hold their teammates accountable to those shared goals. By “team better,” she meant that student groups would be able to set expectations effectively, expectations that could later be the basis of peer feedback. Even more importantly, she meant that student teams would move from cooperating (divvying up the work without reflection) to collaborating, in which teammates imagine, plan, and produce the work together so that it fits into a synergistic whole. In this state, teams, rather than individuals, would own the project parts and outcomes.
I was an early adopter of Parker’s Rate Your Mate process and found it very helpful in assessing teams in my classes. But the original Rate Your Mate process was onerous for the instructor. Typically, as a class, students discussed goal-setting and “observable behaviors”; then, in their teams, they drafted a team agreement (or contract) that spelled out these goals and behaviors. They provided a copy of the agreement to the instructor, who commented on it and either approved it or asked them to revise. That wasn’t the onerous part. At intervals determined by the instructor, each student would evaluate each of their teammates (though not themselves) and turn these evaluations in to the instructor.
Here’s where it got laborious. The instructor read and responded to students’ evaluations, then created reports for each student on each team containing the evaluations from each of their teammates and the instructor’s feedback on those evaluations. In practice, this meant that an instructor spent a great deal of time compiling evaluations. For the report to student V, an instructor had to compile evaluations from students X, Y, and Z into a separate report that they could then comment on and share with student V. In a class of 25, merely compiling the evaluations could take hours, and finding the time to add your own thoughtful comments often proved difficult.
And so we decided to automate the process of compiling. Plymouth State University generously gave us a grant from our faculty research fund to hire a small team of programmers to build a web application to our specification. In the software we designed, instructors would add a roster of students to a class (by means of a list of email addresses), organize them into teams, then set deadlines for team agreements and subsequent evaluations. Students would log in, create a basic profile, compose a team contract with their peers, then, some time later, evaluate each peer. The application compiled those evaluations in two ways: first, the instructor could view and respond to all of the feedback a student had written about their teammates; second, the instructor could view and respond to all of the feedback their teammates had written about the student. Because the application took care of the onerous task of compiling, the instructor was able to devote more time to making meaningful comments. Or so the thinking went.
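The two compiled views can be sketched as two groupings of the same evaluation records: one by author (the feedback a student gave) and one by subject (the feedback a student received). This is a hypothetical model, not the application’s actual schema; the field names and sample records are illustrative only:

```python
from collections import defaultdict

# Hypothetical evaluation records: 'author' wrote feedback about 'subject'.
evaluations = [
    {"author": "X", "subject": "V", "comments": "Reliable, great planner."},
    {"author": "Y", "subject": "V", "comments": "Could share drafts earlier."},
    {"author": "V", "subject": "X", "comments": "Kept us on schedule."},
    {"author": "V", "subject": "Y", "comments": "Strong editor."},
]

def group_by(evaluations, key):
    """Group evaluation records by 'author' (feedback a student gave)
    or by 'subject' (feedback a student received)."""
    grouped = defaultdict(list)
    for ev in evaluations:
        grouped[ev[key]].append(ev)
    return grouped

feedback_given = group_by(evaluations, "author")      # view 1
feedback_received = group_by(evaluations, "subject")  # view 2

# Student V gave two evaluations and received two.
print(len(feedback_given["V"]), len(feedback_received["V"]))  # 2 2
```

The same regrouping is what the manual protocol forced the instructor to do by hand, one cut-and-paste at a time.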
And the Rate Your Mate application DID make it easier for an instructor to concentrate on giving students feedback on their peer evaluations. The hours spent cutting and pasting from one document to another were gone. The application allowed instructors to respond to feedback to and from individual students, but also to make overall comments about the feedback a student had given or received. In fact, whereas most instructors using the old manual protocol had necessarily focused on responding to the feedback a student received from their peers, some instructors who use the software now report that they spend more time responding to the feedback students give to their peers than they used to. Certainly, there is value in coaching students as they develop an ability to give meaningful feedback to their peers.
But, on the whole, we just weren’t convinced the software had added value to what was, despite its onerousness, an effective protocol. For starters, while the Rate Your Mate application seemed relatively straightforward to us, the process was complex enough that users, both students and instructors, could easily make errors that forced either or both into time-consuming workarounds. Most commonly, for instance, some students attempted to log in to the system using email addresses different from the ones the instructor had used to enroll them. Because the system had no record of that address, the student would find that they could not see the class and team they were supposed to be enrolled in. Thus, they could not participate in creating the team agreement or the subsequent evaluations. Many other such design flaws, combined with slight user errors, made the application frustrating for students and instructors alike.
Problems like these revealed the rigidity of our design: you could remove a person from a team, but there was no way to delete a person from a class or to change the email address associated with that person. Within months of implementing the application in our own classrooms and those of a limited number of colleagues, we came to the conclusion that we had made a mistake common among amateur software designers: we had designed an interface that reflected our assumptions about how a user would navigate the process, but we hadn’t anticipated even the simplest mistakes a user might make. The Rate Your Mate application would have to be revised.
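A small example of the kind of flexibility the first design lacked: a roster keyed by a stable student ID rather than by email address, so an address can be corrected (or normalized for case and stray whitespace) without losing the student’s class and team membership. This is a hypothetical sketch under assumed names, not the actual application’s data model:

```python
class Roster:
    """Minimal sketch: students keyed by a stable ID, with the email
    address stored as a mutable attribute. (Hypothetical; the original
    application effectively treated the email address as the key.)"""

    def __init__(self):
        self._students = {}     # student_id -> record
        self._email_index = {}  # normalized email -> student_id

    @staticmethod
    def _normalize(email):
        return email.strip().lower()

    def enroll(self, student_id, email):
        email = self._normalize(email)
        self._students[student_id] = {"email": email}
        self._email_index[email] = student_id

    def change_email(self, student_id, new_email):
        # The operation the first design lacked: update the address
        # without touching class or team membership.
        old = self._students[student_id]["email"]
        del self._email_index[old]
        new_email = self._normalize(new_email)
        self._students[student_id]["email"] = new_email
        self._email_index[new_email] = student_id

    def lookup(self, email):
        return self._email_index.get(self._normalize(email))

roster = Roster()
roster.enroll("s42", "Student@Example.edu")
roster.change_email("s42", "new.address@example.edu")
print(roster.lookup("NEW.ADDRESS@example.edu"))  # s42
print(roster.lookup("student@example.edu"))      # None
```

Normalizing on both enrollment and login would also have caught the most common student error we saw: typing a valid address in a slightly different form than the one on the roster.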
By now, we knew that we needed help. Designing effective software requires more than expertise in the field the software is meant to facilitate. There are many successful software applications for photo editing, tax preparation, or architectural design; that doesn’t mean that an expert photographer, tax preparer, or architect could design an effective application to facilitate that work. So we needed technical and design assistance.
We reached out to Stuart Cohen, a long-time product manager in the software industry and a former colleague of mine. We told him we had a program in “beta” status that we hoped to improve, and asked whether he could advise us on how, and with whom, to take the next steps in its development. Stuart was incredibly generous with his time considering we had no money to offer. He listened in as I trained a new instructor on the existing application, we gave him some fake student accounts to use in his simulations, and then we waited to hear his diagnosis of the design problems and his suggestions for how to move forward.
What he told us, when we convened some weeks later, revealed that the flaws in our design, indeed in our vision, went much deeper than we had ever imagined. “I’m going to start with some bad news. This isn’t a beta.” In the software industry, a product is in “beta” when it is not yet ready to be released to the market but it IS ready to be tested by external users. In other words, the software design has been evaluated and the underlying code functions without errors. At the beta stage, developers are looking to potential users to identify any adjustments that might make the application more usable, and thus more appealing to the market. Stuart wasn’t questioning whether the code was bug-free; he was asserting that we had not asked the tough questions of our design that external users would ask.
“Does Rate Your Mate enable collaboration between teammates or does it just enable putting up information for the instructor?”
“Is the Rate Your Mate experience the same for a team in an online class as it is for a team meeting face-to-face? Should it be? What should be different?”
“How do you imagine Rate Your Mate will help students improve as collaborators over time? Are there short-range, mid-range, and long-range goals for these students?”
In that conversation, and those that followed, Robyn and I were struck by the revelation that our application hadn’t missed the mark; we had mis-positioned the mark. Rather than creating software that could truly address our goal of facilitating collaboration and, ultimately, of making students better collaborators, we had simply automated an analog process. It was as if, instead of designing a modern automobile, we had designed a mechanical horse to pull our buggy.
Our path forward is not yet clear. As we think about the questions Stuart put to us, we realize that we need to offer our students a toolkit that can help them master the skills needed to be effective collaborators. That won’t happen in one class. In fact, while the role of an instructor is a crucial one, the student needs to be able to look at data that help them to understand how and why their experience and performance changed from class to class, and from team to team. Students need tools to help them answer basic questions like, “What are the qualities of the most successful teams I’ve worked with? How have my colleagues suggested I could improve? Have I improved in those areas?”
And teams need a tool that plays an active role in their collaboration. A tool that not only helps them set goals, but helps them to track individual and team progress toward those goals. And since real teams often revise their goals over time, our students need to be able to adjust as they learn more about each other and the projects they are engaged in.
I asked my Technical Communication students to review our application and compare it to another, similar application for giving team feedback. They confirmed everything Stuart had said, and they went further: they not only wanted features that anticipated their work habits (more reminders about evaluation deadlines and better notifications when they had received feedback), they expected the application to be a part of their social media portfolio. By that I mean they wanted to create a more detailed profile that allowed them to showcase the best feedback peers had given them. They wanted to use Rate Your Mate not only as a mechanism for giving and receiving feedback, but as a way of determining who they wanted to team with before a project had even begun!
Robyn and I are an example of such a team. Where we had once imagined that automating a useful teaching tool for our classes would accomplish our goals, we now understand that software allows us to reach much farther than we had previously imagined. The original Rate Your Mate protocol was founded on solid claims: that teams who negotiated common goals, then described the observable behaviors that would support those goals, would “team better” than groups that did not; and that individuals would “team better” if they believed they had an effective way to hold their teammates accountable to those shared goals. But where the protocol, and its automated version, provided an opportunity to team better in the context of a single class, a feature-rich application may provide a longer-range environment that allows students to understand and develop the roles they play on teams, and the roles they could play on teams.
I’m going to use this blog to document our journey towards a beta (and then a Version 1) release of the application. Along the way, I’ll be thinking about collaboration – what we know and what I observe – and PSU’s transformation to integrated clusters, and the design thinking that should inform both.
If you’re an instructor interested in using this alpha version of Rate Your Mate, please contact me. This is (I hope) a collaborative effort and we need your ideas and concerns in order to achieve our ultimate goal – providing a useful tool for collaborative teams.