Wired Ivy

Made to Measure (Dan Marcucci & Kieran Lindsey)

February 22, 2022 Wired Ivy - Kieran Lindsey & Dan Marcucci Season 3 Episode 36

High-stakes academic assessments create conditions that motivate students to cheat.  At the same time everyone wants a laudable level of academic integrity in higher learning.  Fair or not, for many years there has been a dismissive accusation that online learning was particularly vulnerable to massive cheating.  Then, when universities made the wholesale emergency pivot from in-person to virtual classrooms in March 2020, there was a corresponding and predictable uptick in anxiety over how to prevent cheating when the instructor wasn’t even in the same physical location as the learners. 

This all conveniently ignores the fact that ensuring academic integrity has been a perennial goal and challenge in all forms of education, regardless of the mode of delivery.

Test proctoring software and plagiarism checkers are offered as high-tech solutions to what has been framed as a problem created by technology.  We will set aside, for the moment, legitimate apprehensions raised by these software solutions – collection of biometric data, spyware and privacy, promoting a surveillance culture, malware vulnerabilities, to name but a few. 

The important point is this focus on technology is a distraction from the underlying problem.  High-tech fixes only encourage an arms race where students improve their methods, and educators increase their policing tactics.  It doesn’t mitigate the reason for cheating – we included in the show notes links to research about this. 

But academic integrity shouldn’t begin with Crime and Punishment, it should start with Sense and Sensibility.   What if, rather than trying to win an academic integrity skirmish, we make assessment activities that promote the original learning objectives? 

INTRO MUSIC

KIERAN  00:00
Hello, I’m Kieran Lindsey...

DAN  00:02
… I’m Dan Marcucci, and this is Wired Ivy.  We have combined 20 years of experience in the virtual classroom.

KIERAN  00:10
Dan lives in Pennsylvania, I live in Missouri, and we work for a university in Virginia. As part of a geographically distributed team we know from personal experience that online faculty and staff benefit from having access to their peers.

DAN  00:21
You may have been teaching online for a long time now or relatively new… you may have been thinking about a move to online or suddenly find yourself doing it… regardless, you are welcome in this virtual salon.  

KIERAN  00:32
Our goal is to create a collegial community for real academics working in virtual classrooms… a safe, supportive space where we can learn from one another and share what we’ve figured out. 

MUSIC - PREFACE

DAN  00:45
High stakes academic assessments create conditions that motivate students to cheat. At the same time, everyone wants a laudable level of academic integrity in higher learning. Fair or not, for many years there has been a dismissive accusation that online learning was particularly vulnerable to massive cheating.

Then when universities made the wholesale emergency pivot from in-person to virtual classrooms, in March, 2020, there was a corresponding and predictable uptick in anxiety over how to prevent cheating when the instructor wasn't even in the same physical location as the learners. This all conveniently ignores the fact that ensuring academic integrity has been a perennial goal and challenge in all forms of education, regardless of the mode of delivery. 

Test proctoring software and plagiarism checkers are offered as high-tech solutions to what has been framed as a problem created by technology. We will set aside, for the moment, legitimate apprehensions raised by these software solutions – collection of biometric data, spyware and privacy, promoting a surveillance culture, malware vulnerabilities, to name but a few. 

The important point is, this focus on technology is a distraction from the underlying problem: High-tech fixes only encourage an arms race where students improve their methods and educators increase their policing tactics – it doesn't mitigate the reason for cheating. We included in the show notes links to research about this. 

But academic integrity shouldn't begin with crime and punishment. It should start with sense and sensibility.

What if, rather than trying to win an academic integrity skirmish, we make assessment activities that promote the original learning objectives? That's the focus of today's conversation. Kieran and I discuss some of the problems inherent when students are measured with isolated assessments that prioritize performance over mastery.

We challenge whether high-stakes exams, term papers, and the like are truly the best options we have for measuring learning outcomes and student success. The question is not, “Will this be on the test?” The question should be, “What is a test?” 

And we brainstorm some creative alternatives to improve learning outcomes, cultivate student engagement and collaboration.  And, as an added benefit, remove the opportunities and incentives to cheat. Academic integrity starts with the design of the learning environment. Lessons that integrate learning objectives, activities, resources, and assessments are Made to Measure. 

MUSIC STING - INTERVIEW

DAN  03:10
Kieran, something that you and I have talked about off and on over the years is this whole idea of academic integrity. Cheating is another word for it. Some of it because it's something that we want to pay attention to in our own work, but some of it is because, well, from time to time, I have certainly gotten comments from people who say, how do you know if they're cheating? Or how do you know if they're the ones taking the test?

KIERAN  03:32
Because you’re an online instructor.

DAN  03:33
Yeah. As though online is some special place where it's going to be super easy to, to cheat and deceive, and that never happens in the in-person classroom. Have you had comments like that yourself? What's your experience with the whole issue of academic integrity? 

KIERAN  03:47
Yes, that comes up often, and not just among academic colleagues. It comes up all the time when the media do stories about online education. One of the problems with a lot of this kind of reporting is that the journalists conflate all the diverse approaches and audiences for this mode of instruction into one all-encompassing term – online – whether the story is about grade-schoolers or graduate students. On some level, that's understandable, right?  If the reporter or their editor hasn't been an online student themselves, they may not have an awareness of all the variation and innovation that can happen in a virtual class. Plus, for better or worse, in-person classes tend to be held up as the gold standard.

DAN  04:28
Right.

KIERAN  04:29
And any form of online – including blended, hybrid, HyFlex – all of those are seen as just a poor but sometimes necessary substitute that's plagued with all kinds of vexing challenges… like cheating, for example. 

Of course, what's overlooked in that comparison is that cheating is not a problem unique to online education. Campus-based students can and do turn in assignments that were completed by someone else. In a really large class, the instructor may have no idea whether the person sitting in their classroom taking an exam is the individual who enrolled and is listed on the roster, unless someone is checking IDs at the door before every class. 

The assumption that online courses provide more opportunities for students to cheat is especially ironic given that the line between in-person and online has become so porous. For example, many face-to-face classes feature exams hosted on the learning management system (LMS). Yet the idea that cheating is a feature of technology-driven instruction persists, as does the presumption that technology-driven responses are a natural solution.

DAN  05:28
The interesting thing is, I've done a bit of reading on the subject, and certainly I've been dealing with it in my own work over the years. Lack of integrity, ways of cheating – these have evolved with technology, and they're always innovating and they always will be. And I think the point that you make is great, which is, it happens as a function of in-person classes, and it happens as a function of online classes.

KIERAN  05:51
Or as a function of the instruments we use to review student learning. Maybe we've been using a saw to do the work of a scalpel, or we've grabbed a hammer when what we need is a guitar pick. 

My primary frustration when it comes to the issue of cheating is that our main response has been to participate in an arms race. Each time students figure out a new way to hack the system, educators respond by attempting to lock down everything even more tightly, instead of examining our approach to measuring student learning and doing so from a different perspective.

DAN  06:21
Your point really underscores the philosophical options. And if it's not militarized, it's at least sometimes discussed in the discourse of criminal justice: they're doing something bad, we have to catch them. And I suppose there are places where that's appropriate, depending on how you've structured the learning environment, but I don't think it has to be that way. Fundamentally, as academics we're here to facilitate learning, and presumably the students, the learners, are here for the same purpose.  If we go in with the assumption that learning is the objective here, then it doesn't have to be a war or a crime. It can be something that's a little bit more constructive and positive than that. 

KIERAN  07:00
And maybe we ought to be asking whether traditional high-stakes assessments – exams and quizzes, for example – bring us any closer to achieving our student learning objectives, or do they mostly determine which students are good at memorization?

DAN  07:12
I think that is exactly the starting point for academics who teach online… how are we designing our online learning experiences in a way that fosters a desire to learn, but also enables, permits, improves, engenders academic integrity – however you want to say it, there's a verb there, and they're all positive verbs. 

So we, as people who are leading the online learning environment, can create situations where the incentive to cheat doesn't have the same reward, for example. And more importantly, what we want to be doing is incentivizing the learning outcomes, the learning objectives. 

And, I mean, something we've hammered on throughout our podcast is how much derives from articulating those learning objectives – what we do, the tools, everything. And I think this is just one more example of that. These are tools for academic integrity, but they still start with what we are trying to accomplish in this environment.

KIERAN  08:09
You know, I'm realizing that I've had a huge blind spot when it comes to course design.  I couldn't count how many times I've coached an instructor that the learning objectives should inform every other choice they make –  content, activities, technology – and yet I don't think I've ever included assessments in that list.

DAN  08:27
The learning objectives really should inform the assessment. Because in the end, what you're trying to do is assess a mastery of a subject, what you're trying to do is assess the learning outcome, the improvements that have been made in knowledge, and they really want to be tied together. 

And do we have to give grades? Yes, of course. In most situations, that's one of the things we get paid to do: provide an accurate, detailed assessment that looks like a grade. Some people work in programs where students are going to be compared against each other – fortunately, I don't – but class rank becomes important there, and so I suppose that's a motivation for cheating. 

We're never going to be able to use learning objectives to remove every motivation for a lack of integrity, because of the structure of people's reasons for seeking higher education. But I think we can use them as a guide to structure a lot of the assessment in a very positive way, and for the rest, yes, we'll have to exercise some diligence. I'm with you.

KIERAN  09:26
As I was preparing for this conversation, I read several papers that frame the assessment question as an evaluation of performance versus an evaluation of mastery, and I found that really helpful. I know you've done some reading on this topic as well, Dan, and I'm sure you have some thoughts.

DAN  09:42
An interesting thing happens when you start looking at the difference between those two, because performance tends to be a concentrated, high-stakes event. I think that performance is a fairly one-dimensional way of assessing learning. If you're looking at mastery, then you're really looking at a much broader set of outcomes and accomplishments that you can observe and assess.

KIERAN  10:06
Not to mention when you raise the stakes, you increase the incentive to cheat.

DAN  10:10
What's important for academics who teach online to realize is that we have an expanded toolbox because of all of the technical tools that we use in online education. And in theory, all educators have that expanded toolbox. We have to deal with it all the time. 

And so we have – and we've talked about this in other episodes – we have a lot of tools. There are a lot of ways that we can structure learning activities. There are a lot of resources we have access to. There are a lot of ways we can interact with our students.  And I think that for us, the fun and the challenge and the creativity comes in setting up learning situations where the goal then becomes mastery. The goal becomes accomplishment. The goal becomes learning essentially. And then the assessment just follows after that. 

That's lofty. Maybe I'm not making much sense.

KIERAN  10:54
Well… so I would put it a little differently because, again, I don't think technology is the driver here. I see a different path to arrive at the same place that you've just described… that anybody who's been teaching online intentionally and challenging themselves to do it well quickly realizes the standard approach in a face-to-face classroom isn't actually something you should aspire to mimic because those techniques don't work in an online class. 

Whether they work as well as we assume in the physical classroom – we're not going to go there, because this podcast is about teaching online – but we know they don't work well in an online classroom. 

So what's happened over time is that instructors who teach online and have had some freedom to innovate have done just that. Part of what we’re attempting to do here on Wired Ivy is to document and share the many creative ways our colleagues have come to improve learning outcomes in their online classes. 

And then, because some of the people who design and deliver those online courses also teach face-to-face, they will try those same innovations out in their in-person classes.  When the results are positive, they keep using them. Now the virtual classroom has become a kind of instructional sandbox, which is another reason why the line between online and in person keeps getting blurrier. 

DAN  11:58
Yeah, I think you're spot on. It's a pivotal point to make that the idea that face-to-face is the model that online is supposed to emulate - that's been gone for years now. Maybe it was that way, 15 or 20 years ago, but we know we have examples all over the world about how various innovative approaches can be used to have effective online learning in higher education. And that's where we are. And that's the point that we get to use going forward.

KIERAN  12:24
With that in mind, how do we begin to build a supportive structure for academic integrity into our online courses? It doesn't sound like finding human or software proctors for exams is the place we ought to begin. 

DAN  12:36
It never would be for me, that's for sure. I think the first thing that we do is design course activities in a way that the engagement in the activity is transparent. And then the assessments are based on the activities themselves. 

And I want to add here that I know many in-person courses can and do take very similar innovative approaches. So this sort of innovation isn't unique to online. But perhaps it's a little bit more requisite in online.  

We have these activities that we've already tied into our learning objectives, and so then the real question is, what's the best way for the faculty person to observe that activity? That's a direct line, then, between the learning objective – all the way through to assessment – as opposed to the classic form of education that you and I had decades ago, where the assessment, the grading, were activities that were completely separated. Possibly you had to study for the test, but in theory you'd already done all the learning, and the test was simply an assessment, and that was it. This kind of turns that on its head.

KIERAN  13:41
Dan, can you give us an example, something fairly basic, that would illustrate what you're talking about. Maybe juxtapose a classic approach, and then something that's more aligned with connecting assessment to learning objective.

DAN  13:52
Sure. So start with something as simple as attendance as a kind of assessment – something students receive points for, if they get points for attendance. The classic way to judge attendance was to put a check in a roster. You called out names; if a person was there, in the seat, they got a check in the roster. And it essentially meant that the person was in the chair.  

In the best case, you might've had attendance and participation as an assessed part of a grade. But even then, you had to have a roster of a size where the participation could actually be observed. And if that's the case, then somebody – the faculty person – needed to know who everyone was and had to be able to record whether this person participated or didn't.  

Attendance did get a little more sophisticated with technology, because after you and I were in school, a couple of decades ago, these electronic clickers came onto the scene. Each student had one, each had a separate registration number, and they literally clicked in when their name showed up on the screen out front. 

I'm simplifying how participation and attendance have been done, but basically the motivation was let's make sure the student isn't going against their own best interest in skipping my great class. Brilliant things are happening in here! And, oftentimes, brilliant things were being said, but when you're only asking for passive participation, you're barely engaging the students and you're only distantly connecting to the learning objectives, in my opinion.

KIERAN  15:26
And this is also an example of that policing approach to education you mentioned earlier.

DAN  15:31
Yeah. But in an online learning environment, and especially an asynchronous one, attendance really is a completely different proposition from those old ideas of attendance, because now participation is the only attendance that matters. If it's asynchronous, there's no lecture to show up to. You could turn the screen on, run the video, let the computer play it, and go in the next room and watch TV. 

That doesn't matter. What matters is the participation, how they've engaged in what they do with the information. I don't know any instructor who assesses by the number of minutes that a learner has been logged onto their LMS.

KIERAN  16:03
And thankfully, because that would be so easy to game in the LMS. All they have to do is log on and not have their computer set to sleep at a certain point. So we don't want to go there. What are the alternatives? 

DAN  16:17
The simplest alternative, and it's still a bit passive and a controlled thing to do, is to lock the next step in a lesson plan until some action has been demonstrated or embargo their colleagues’ comments or submissions until some type of engagement has occurred.

And I've seen this done before. I don't happen to use this because of the types of courses that I teach, but I understand why it's used. It's like you can't see the other students’ work until you've submitted something.

KIERAN  16:43
I haven't seen the situation you're describing, in which content becomes available once the previous task has been completed, but I can see that if there's a reason why the student needs to move through the content in a very linear way, this would be one way to accomplish that.

DAN  16:58
As the instructor, you know that in order to have gotten to a certain point, these actions have been taken. So again, it's the participation that is transparent. And thinking about it from the assessment perspective, these steps are done in a certain order because you have an educational reason why you want them done in that order, and it's right there. You can see it. It's easy to assess. There's no mystery to it.

KIERAN  17:21
To clarify… we're not talking about embargoed content that's released on a schedule, like the start of a new weekly module – which is different from having all of the semester's content visible on day one of the new academic term. That, by the way, is extremely disconcerting and confusing to students in your online class. Please don't do that.

DAN  17:39
Correct. My advice here is simply that a faculty person who's developing a course online should think about things that will indicate a person has been present and active. And that's the way that participation is seen.

And it goes back to the reason why you want participation. It's not that you want someone to spend X number of hours. That's just bean counting. It's that you want somebody to accomplish something and they accomplish something by doing actions and proving it. It's a transparent way of showing activity.

KIERAN  18:07
For example, if you want students to move through the readings in a particular order, you could set it up so that the student has to answer a one-question poll at the end of the first paper before the second paper becomes available, and so on. Do I have that right?

DAN  18:22
It's a great example. And those quizzes are little micro activities that are used to make a very frequent pattern of assessment.

KIERAN  18:31
Got it. Plus, whether or not those polling questions are graded, they could provide feedback to the instructor that would help them gauge whether there are any problems with comprehension – should the instructor post some additional explanatory text, or offer to schedule office hours for anyone who has questions or needs some help? That's brilliant!

DAN  18:50
The beautiful thing about that is you might do a small section, a small chunk, and then there's a mini assessment, a micro assessment, that goes with it – a short quiz or something like that. And if you don't get the quiz right the first time, then you get to go back and try it again. Those become data within the system that let the faculty person know how quickly people are able to progress through the sequence.

There are a lot of activities that can be used where this is going to come through. They can be discussions, presentations, research submissions, engagement in collaborative work – i.e., group work. You could have peer conferencing between the learners in the class. You could have learner-to-faculty conferencing, which would be synchronous, in that case.  You could have demonstrated field work. You could even have these midstream progress quizzes. 

So there are many activities we can use in the online learning environment that create much more frequent, smaller types of assessments throughout the entire course. The important point being that each of these assessments is designed to accomplish one or more learning objectives, not that these assessments are designed to prove that the person showed up that day. So again, it goes back to the learning objectives.

KIERAN  20:00
I wouldn't have thought attendance had so much potential for measuring student learning. That's so simple. And yet powerful. How would you apply this approach to something other than attendance?

DAN  20:11
So when we make sure that assessments are linked to activities that are designed around learning objectives, we not only get away from those isolated high-stakes evaluations, which come with an increased incentive to cheat, quite frankly – if you're telling me that 60% of my grade is based on this one test, then I'll do whatever I need to do to get a good grade on that test.

Instead, the assessments are more frequent, which reduces the payoff for any single incidence of cheating. So if we’re worried about academic integrity, all of a sudden we have 15 items during the semester that are being assessed – 15 smaller items. Cheating on any one item is not going to satisfy the desire to bump your grade up.

For example, in the courses that I have developed for online learning, there is no single activity that is worth more than 15 – that's one-five – percent of the final grade in the course. So it's low stakes, it's frequent, it's connected to activities, and the assessments are based on that.

KIERAN  21:11
And Dan, that reminds me of Episode 31, when our guest Sarah Heath explained the benefits of assigning points based on gaining mastery of segments or of individual skills, rather than a single high-stakes assessment.

Okay, I'm sold on this approach. So let's talk about some of the other assessment options, which can include discussions, short papers, presentations, field work… I'm sure I've missed some, in fact, I'm positive.

DAN  21:41
Discussions are a great place to start. You and I talk about discussions a lot.  A) it's a transparent activity. B) it’s clearly connected to learning objectives.

KIERAN  21:50
Tell us about your approach to designing, facilitating, and assessing participation in discussions. 

DAN  21:55
My approach to discussions, at least, is that there's a rubric in the syllabus at the beginning of the semester, so they know what discussion is going to be like. And we have discussions almost every week in the types of courses I teach. I understand, of course, that discussions might be occasional. 

And, of course, depending on what the course is, discussion could be just part of what you're doing – there could be problem sets or something, and then the discussion is based on the problem sets that everyone's already done. So maybe, coming back to the embargo idea, you don't get into the discussion until you've submitted your problem set, for example, and then you can all workshop what the right answer is.

KIERAN  22:30
Do you start discussions with a prompt, or do you ask students to just share general thoughts about the topic du jour? 

DAN  22:38
Yeah. Discussions are prompted because it's the prompt that really connects it back to the learning objective for me. 

KIERAN  22:43
Right… because your job as the conductor of this orchestra is to tie this activity to a specific learning objective.

DAN  22:49
Yeah. I might even say it's my job as the spider weaving this web, because one of the things here is that I want the learners to engage with the resources that have been posted and further resources that they might find. I don't have a way, necessarily, of saying, “Did you read this article or did you watch this video?”  But what happens is the prompt connects the resources and the learning objective, and then it's a public activity that they're doing. So the discussions themselves are going to rely on the prompts, and the prompts are then going to be based on the resources that I want them to be able to process.

Then I don't have to assess, in this case, whether or not they read the paper as an independent assessment. What I'm doing is looking for evidence of that in their discussions. The discussion is the activity, and that activity can't happen if they haven't read the journal article or watched the video.

KIERAN  23:45
OK. I can see the benefits of having a higher number of lower-stakes assessments and that discussion is one of the options, but I can also imagine faculty being intimidated by the increased demand on their time that this approach could create. 

Let's say I've got an undergraduate course with 75 students. I set my course up so there's a weekly discussion, and they're all participating because it's part of their grade. And I want them to have a conversation, not simply post one thoughtful comment, respond to at least two other students, and call it a day. How do I get a handle on that from a time management standpoint? 

DAN  24:19
There’s no question that we want to be sensitive to our own time commitment as we are developing these assessment strategies and these learning strategies. And if you have a class of 20, you fundamentally have a different situation than if you have a class of a hundred, for example, and that is true online, and that is true in a face-to-face classroom. 

KIERAN  24:39
Absolutely. 

DAN  24:40
How you're going to do it is going to be different. If discussion is an important part of it, my argument would be that in the online setting, you're still going to have a much better record of the conversation that happens than, say, if you broke a face-to-face class into small groups – at least in an asynchronous world. If you're doing synchronous and you're breaking into Zoom groups, then there's a different reporting situation. But let's just look at asynchronous written discussions for now.

You might break a group of a hundred into groups of 10 and ask them to work together. Each discussion group has its own space that it works in, and there are many techniques you can use. If there's actually only one of you for a hundred people, and there are no teaching assistants, then you're really going to have to rely on the groups to self-report: they're going to have to do peer evaluations and report back to you.

If there are 20, maybe not so much, maybe you're going to have a chance to be able to look at the assessments all yourself. But you have to, you're absolutely right, you have to be aware of what you have and what's going to be possible.

KIERAN  25:37
Or if these are fairly micro assessments, do you have to get granular about grading them? Couldn't you have broader metrics for grading that assessment?  

DAN  25:47
Yes, that's a good point. The rubric isn't… I'm not grading on the brilliance of every individual line that's spoken or written. I'm really grading the discussion on the level of participation and the level of engagement. That's what the rubric looks like in this case, because that suits the purpose of my courses: this person is engaged. They've clearly demonstrated that they understood the resources that were given to them for the week. They've clearly engaged with their colleagues on their thoughts, which is very important in our structure. 

And those are fairly quick to be able to judge. I don't have to necessarily go in and mark individual things that they're saying and give them feedback on individual comments. It's public, it's transparent, but you've got to be able to look at it quickly and say, okay, this satisfies the rubric. 

KIERAN  26:34
I like this idea of breaking a large class into smaller discussion groups. That makes a lot of sense on a couple of different levels. One, it's easier for students to navigate and participate, because if 75 people are all trying to have a conversation in a single space, it quickly becomes difficult to follow multiple threads. Two, it makes things easier for the instructor to see who was really active, whose comment generated the most conversation, whose comments demonstrated an understanding of the content, all of that.

What other ways can you think of that would facilitate discussion without a great deal of involvement from the instructor?

DAN  27:07
Your question, Kieran, reminds me of when we had Doug Ward on, I think it was Episode 14, from the University of Kansas. And he specifically talked about how he used a lot of very dynamic structures within groups. He broke discussion groups into smaller discussion groups, and he had people have specific roles.

KIERAN  27:29
Right!  The Leader, the Devil's Advocate, the Synthesizer.

DAN  27:33
The last thing I can do is claim that I'm the expert on all the different ways of doing discussions, because I think that's just a very rich opportunity for us, whether it's synchronous or asynchronous. But the great thing is they can be assessed, and they show active participation with the material in the learning activity.

KIERAN  27:52
Let’s talk about ways to adapt a traditional approach to assessment, such as term papers, to this learning outcome driven approach. What's your take on how this type of assessment might be updated?

DAN  28:03
Not to sound too much like a broken record, but I would advise my colleagues to think very hard about why the paper or how the paper connects to one or more learning objectives.

And I absolutely understand that there are important places for research papers in higher learning. I get it. And I happen to be fortunate that the way they work into my material, they are more essays than research papers, and they tend to be shorter. But yes, there is a place for a 20- or 30-page paper that's research-based if you're working in the field of history or so many other areas.

So there are a couple of ways of doing that. And I think the problems of academic integrity with papers, online and in the classroom, are, again, very parallel. That's not going to be a big difference. But the issue of contract cheating also really comes up with term papers: someone's being hired to write that paper for you, and that's a hard thing to catch.

And the other thing in papers… a big issue for me is the idea of plagiarism. So it's a different kind of lack of academic integrity, and I think that is something that, as an educator, I've always got my antennae out for.

KIERAN  29:15
Many universities have licensed plagiarism checkers for faculty to use, and some learning management systems have this type of application available, even embedded, I think.

DAN  29:24
Every one that I've been connected to in the last 15 or 20 years has some sort of software that is basically a plagiarism checker.

KIERAN  29:32
Or you can just copy a chunk of text from the paper into a search engine.

DAN  29:36
Which I do.

KIERAN  29:36
Which is what I used to do when I was teaching.

DAN  29:39
Anytime I'm reading a paper and the font changes – and I'm giving away a tip to the students in my classes here – I think, did something just get clipped and pasted? And I check around that font change.

KIERAN  29:54
That looks like a web font, ironically.

DAN  29:59
Yes. Honestly, though, I think a lot of the issues around plagiarism are more an issue of, um, not understanding what plagiarism really is, depending on the level of writer you're working with. If you're working with undergraduates, or even sometimes new graduate students, depending on whether they've done a lot of writing in the past, you really have to just go over what plagiarism is, so they understand it.

KIERAN  30:17
You know, Liz Sanli, our guest on Episode 30, raised the possibility that students may not be as knowledgeable about things like plagiarism and how to cite the work of others as we assume they will be. So providing some auxiliary information for students who need it could go a long way to prevent unintentional cheating.

DAN  30:36
Yes. And that's great because that goes back to again, learning objectives, because you might have a group of learners in that situation where what you're trying to do is introduce them to the idea of scholarship in a way that advances research and writing and these things.

And if that's the case, then you're presumably building in incremental steps along the way. It's entirely possible that you've got advanced learners whose big thing for the entire course is to produce a paper – whether it's a thesis proposal, for example, or a literature review – that's the purpose of the course.

So in that case, yeah, you have one big high stakes thing. I suggest that in cases like that, there are still incremental activities that happen along the way that are transparent.

KIERAN  31:17
That’s also helpful for encouraging students not to wait until a week before a major assignment is due to start working on it. And you can reduce the stakes by setting a lower word count, which not only lessens the demand on the instructor's time, it's also more challenging for the students.

Anyone who writes knows it's harder to be concise, and the need to curate the content gives students a chance to demonstrate their knowledge, one could argue, more effectively than an everything-but-the-kitchen-sink treatise.

DAN  31:45
The point is very well taken that if you are asking the learner to really pull down the number of words, you're doing two things. One of them is, you're really asking them to learn to curate ideas very well and to synthesize. You're asking them to go beyond the resource material and boil it down to the core of what it is they're trying to talk about.

When you're talking about the learning objectives and the idea of academic integrity, something that I happen to do that could be useful for others is I tend to keep writing assignments tailored and a bit idiosyncratic, for lack of a better word. So I am not asking them to give me a History of Tudor England that they could buy off the shelf or could easily contract out. I don't teach Tudor England, so that's probably a good reason that I'm not asking them to do that.

KIERAN  32:31
I was going to say, we have other reasons for not asking that question!

DAN  32:31
But generally what I do is because I'm dealing with policy and I'm dealing with management, I'm asking them to connect resources. I will ask them to be pulling this reading and this reading together. And so it's very specific as far as, okay we have Donella Meadows talking about systems thinking, and then we have an example, a case study here on cotton. So I want you to pull together Meadows’s work on systems thinking and cotton. It's not something that lends itself to being out there already. It's something where they have to do their own thinking. And so in that way, the learning objective and the academic integrity can reinforce each other.

KIERAN  33:15
I assume the potential for access to contract cheating is higher for undergrad courses that everyone has to take as part of a common curriculum than say a very niche graduate course, just based on the size of the potential market. 

Dan, would you take a minute or two to talk about how you would think about the pros and cons of moving traditional writing assignments, papers from individual work to collaborative group work? 

DAN  33:43
Oh, I love this question because really it gets down to the existential academic question of what is a paper. There's that term paper that we were trained to write, and that we used to write back in the day. And the point of it was to prove that you knew something to your professor. 

But the things that we call papers now, particularly in an online world, don't necessarily look like that. So a lot of times I'll have group work that happens very collaboratively. And in theory, you could say they're creating a paper together, but really if you look at it, what they're doing is creating a website.

It's great when they start working together, because, for one thing, I think that type of group collaboration encourages academic integrity. If somebody is a no-show, I hear about it. I'm absolutely going to hear about it. People want to rise to the occasion if they know that some of the other people are counting on them. I think it's human nature to try to rise to the occasion, to make sure you don't let your peers down. Not for everybody, I'm not saying for everybody, but as a rule, I think that's a great facilitative thing.

There's an opportunity for people to play to their strengths. You've got to make sure you're not over-relying on that, because if one of the things you're trying to do is push on their weaknesses, then you don't always want them to be playing to their strengths. But there is a dynamic there where people can specialize within the group as far as what they do. So I think there's a whole host of ways that group papers can facilitate learning objectives, can be very transparent – because obviously the point is to share the work – and can be something that encourages academic integrity.

KIERAN  35:17
That's a good point you've made that when we give written assignments to groups, we may not call them a paper. We might call them a report, for example. And I like your framing, especially when we have students working in a digital space like Google Docs or some other kind of wiki. What they're creating is absolutely a website. It's not a public-facing website, but still.

From the perspective of potential for cheating, it seems to me that if someone wants to pay for a finished paper, they have to get the whole group to agree. And I would think that's a bigger lift, especially if the students are randomly assigned to groups or at least chosen by the instructor in some way, rather than if they're allowed to self-organize into groups of friends. 

DAN  35:55
Yeah. And I think the points you make are great and they really underscore the idea of let's be intentional and purposeful about how we design the activities, how we design our courses.

And that's all part and parcel. We're creating an environment where we're asking for academic integrity. We're setting them up for success for academic integrity. And we have learning objectives in mind at all times. And yeah, group papers, the idea that they're going to be accountable to each other, that they're not going to really be able to… not cartel, what's the word? My criminal vocabulary is not good…

KIERAN  36:30 
Collusion.

DAN  36:30
Thank you. That they're not really going to be in a position to collude to try to cheat as it were. Those are things you get to control as the designer of the environment and the designer of the activities. 

Inculcating, engendering, encouraging – however strong a word you want – academic integrity now, in collaborative work, then carries over to the workplace. Let's not think of academic integrity as just a question of not cheating on an exam. Let's think of academic integrity as a question of doing honest work toward an objective, work that is respectful of the process you're dealing with, whether it's processing information or dealing with new people or whatever it is you're trying to accomplish.

KIERAN  37:15
Broadening the definition of academic integrity beyond the questions of who's cheating and how we can catch them at it?

DAN  37:21
I have a question for you, because there's an area in here that I haven't really worked in… and that's the question of, sometimes the learning objectives are very tools-oriented, whether or not the entire course is tools-based.

KIERAN  37:35
I need some clarification. What do you mean by tools? Do you mean something like learning how to use a microscope or a surveying transit, or are you talking about a skill that might not require a physical object other than a computer, like the ability to use GIS or modeling software?

DAN  37:53
Yeah, I think for me, the idea of a tool is to be able to master a procedure. So it could be that you, for example, learn how to use ground-based LIDAR to go do some surveying of the Sistine Chapel, or it could be that you are going to need to do water quality sampling in a stream in the Blue Ridge Mountains. Or it could be that you need to write a computer program. I would consider that procedural, although that is, again, that existential question: What's a paper? What's a tool? But it's this broad range of things which are about following procedures and learning how to do something, such that at the end you can demonstrably do it. You're certified, qualified. You can do this.

KIERAN  38:44
In some ways I think procedures, with or without physical tools, lend themselves more easily to the types of alternative assessments we've been discussing. In many cases, a written exam wouldn't help. You don't really care in that situation whether the person has memorized when Fortran 77 was first created… or when, or who…

DAN  39:07
We refuse to date ourselves!

KIERAN  39:07
…or who the inventor was… right, yes. You can learn to code perfectly well without knowing either of those things.

By their very nature, though, it may be easier to demonstrate mastery of a skill than understanding of a concept. An example of this would be the skill of writing a piece of computer code. If I'm the instructor, I can challenge each student to write a program that provides a solution to a particular equation, let's say. Ultimately, the assessment is going to involve running the program to see whether it works or crashes and whether it provides a correct or incorrect answer. I mean, there are variations that you can add to that to make it less binary, but that's the ultimate form of assessing whether or not the assignment has been completed successfully.
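That run-it-and-see assessment is easy to automate. As a purely illustrative sketch in Python – the function name, the equation, and the test cases are hypothetical, not something from the episode – an instructor might check each submission against cases with known answers:

```python
import math

# Hypothetical student submission: return the real roots of ax^2 + bx + c = 0.
def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # no real roots
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

# Instructor's check: run the submitted program against known answers.
def grade(submission):
    cases = [
        ((1, -3, 2), [1.0, 2.0]),      # x^2 - 3x + 2 = 0 -> roots 1 and 2
        ((1, 0, 1), []),               # x^2 + 1 = 0 -> no real roots
    ]
    passed = 0
    for args, expected in cases:
        result = submission(*args)
        if len(result) == len(expected) and all(
                math.isclose(r, e) for r, e in zip(result, expected)):
            passed += 1
    return passed, len(cases)

print(grade(solve_quadratic))  # (2, 2): both cases pass
```

Awarding partial credit per passing case, rather than all-or-nothing, is one way to make the outcome less binary, as suggested above.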

Another example of a skill, this one with a physical tool, would be something that is often a feature of a forestry program, learning how to use a chainsaw. Typically this is an activity where the instructor and students go to some field site and once students have learned the skill, they're asked to demonstrate that they can use the tool safely. But it's also possible to have an online student submit a video of themselves using the tool.

And there are real upsides to having that kind of documentation. Let's say the instructor is watching a student at the field site, probably taking some notes or maybe using a checklist so they can assign a grade. But if, later on, that instructor realizes they forgot to check whether the student was in the right position or was wearing safety goggles or whatever, well, there's no way to check, because a live demonstration is ephemeral.

But if there's a video it's possible to review it and find out. For all I know, instructors may now be documenting what's happening in those in-person field situations for that very reason.

DAN  40:46
And your example really calls to mind the episode that we did with Chastity Warren English, where her agricultural education students needed to demonstrate specific things they could do on the farm, literally. And they were able to record it, they were able to Zoom it – different ways of demonstrating to her what they were able to do.

KIERAN  41:05
Yeah, exactly. 

And this demonstration-of-mastery approach works just as well for group assignments, if they're designed correctly. For example, going back to computer programming: if we're going to prepare students for real-world situations, they need to be able to work as part of a team, because a complicated piece of software consists of various routines and subroutines, and it's unlikely that any commercially available program was written by a single individual.

So as an instructor, I can break the class into groups of, say, 10 students. Then I would say: this program is going to consist of five different routines. I want you to divide up into pairs, and each pair will take on one of the routines. The challenge is for each routine to work, and then for all of the routines to work seamlessly together to create a working program. That means all of you have to collaborate and communicate as a group of 10 while also working on a semi-individual assignment with your partner.

Then when it comes time to grade, I can allocate points to each student based on the individual routines and their contribution. And I can also give points to the entire group based on how well they hit the mark for the overarching goal. I can adjust the stakes of any one of those components by the number of assessment factors I build into the assignment and the number of points available for each, as well as the percentage of the final grade that each component and the whole project represent.
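That kind of weighting is simple arithmetic. As a purely illustrative sketch in Python – the function name, weights, and point totals here are hypothetical examples, not figures from the episode – a grade combining a pair's routine score with the whole group's score might look like:

```python
# Hypothetical weighting: a student's project grade combines their pair's
# routine score with the whole group's integration score.
def project_grade(routine_pts, routine_max, group_pts, group_max,
                  routine_weight=0.6, group_weight=0.4):
    individual = routine_pts / routine_max   # how well the pair's routine worked
    group = group_pts / group_max            # how well the whole program worked
    return 100 * (routine_weight * individual + group_weight * group)

# A pair that earned 18/20 on its routine, in a group that scored 45/50 overall:
print(round(project_grade(18, 20, 45, 50), 1))  # 90.0
```

Shifting the weights (or the points available in each component) is exactly the "adjusting the stakes" lever described above.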

DAN  42:20
That example really goes back to the original idea that purposeful, intentionally designed learning environments are the key to connecting learning objectives, activities, and assessment. Again, there's a certain level of transparency, which you were talking about in that example, and that has to be there.

And the collaboration adds to the integrity of the opportunity, because if you're doing a collaborative project like that, it's going to be a lot harder to get someone else to do your work, because you're working in a group. And so the instructor in that case is able to design a situation where you're just setting it up for a higher level of integrity.

I'm sure if we looked at learning how to be a surveyor, or something in nursing, for example, there'd be similar ways of doing it. It's just going to depend on the course. It's going to depend on the skill that's being learned and how it can be demonstrated.

KIERAN  43:12
Sometimes this will be a case of “and,” not “or.” Assessing knowledge AND skills, right?

Using that nursing example, I definitely want that nursing student to demonstrate his ability to safely administer a subcutaneous injection. But before we go there, I want him to demonstrate, by passing a test, that he knows under what conditions that injection should be given, the appropriate locations for that type of injection, what problems might arise, and what he needs to do in case of an allergic reaction.

So passing the exam can be the entry point for being allowed to demonstrate a tool-related skill. We're not always talking about one or the other. 

Dan, I'm curious… do you have any traditional high stakes assessments in your own classes?

DAN  43:51
I do not. I do not currently use exams. The structure of the program doesn't warrant it and the graduate school doesn't require it. I do know in many universities, undergraduate courses are required to have an exam at the end or some sort of final assessment, usually cumulative assessment. 

KIERAN  44:11
Since we know many Wired Ivy listeners teach undergraduate courses, and you, Dan, have taught undergraduate courses, and so have I… how might you design an exam to lessen the opportunity for cheating? How would you lower the stakes and move toward assessing mastery versus performance when you have to give an exam?

DAN  44:27
The first thing that you have to ask is: is the exam going to be a test of memorized knowledge? And I don't know why the answer to that is yes as often as it is, but it often seems to be the case.

We don't live in an age of memorized knowledge. We live in an age of smartphones. You don't get to sit at a bar for six hours at night and argue about some arcane point; someone whips out a phone and looks it up, and the argument's over and you go on to the next one. Exams, I think, need to catch up to the times as well.

Open book makes sense to me. One of the objections is: “Well, what if they open their book?” It's like, okay, design your exam so it's an open-book exam. Instead of creating an examination and assessment that is geared towards repeating knowledge they might've memorized, design one that requires them to manipulate information and manipulate knowledge, so they need to go find a resource and they need to know what to do with that resource.

KIERAN  45:24
A common protocol for LMS-based exams is that once the exam begins, the browser is locked. That's supposed to prevent students from opening other browser windows or tabs. Since most, if not all, college students – in the US, anyway – have a smartphone, locking the browser has limited usefulness. That's another example of that academic integrity arms race we referenced earlier.

Does the student's ability to access more resources than they would be able to bring to a class for an open book exam, change how you design the questions?

DAN  45:54
I think if you're in a high-stakes situation like that, where for some accreditation purpose you've determined that there needs to be a highly regulated exam environment, we're getting back to the policing question. If you're locking people's browsers, you're policing – that's the bottom line – and you're not dealing with the environment and the motivation of learning.

If what you're saying is, “You have to take this exam, it's timed, your camera has to be turned on in a way that we see your hands at all times.”

And if that's the way people want to go to stop cheating, because they've created an environment where they think that kind of cheating is going to happen, then I guess that tool is available for them. It gets to a certain point where I think you require them to have proctored exams: they go to a learning center. I've worked at universities where this was a required thing for online students – they had to take their exams in these proctored settings. I should hope that's really a very minor proportion of assessment when it comes to online learning.

KIERAN  46:48
Well, speaking as a program director, my initial thought is…

DAN  46:51
Yes, please do.

KIERAN  46:51
…my immediate thought is that if you're going to require a proctor of some kind, organic or inorganic, that's going to limit the potential audience for your program. And that's going to become a bigger concern for institutions as adult learners continue to be a growing target audience for higher education.

DAN  47:09
I would advise them to try to create an environment where, A, that assessment isn't so important and, B, the motivation isn't as strong. There are other ways of having people demonstrate what they're accomplishing.

KIERAN  47:21
Here's my last question about exams… Can we design tests as another form of group assignment? Is there something about an exam that requires everyone to do their own work or could exams be another opportunity for collaboration?

DAN  47:33
Yeah, I think the answer is you want to think about exams differently. The classical way of looking at an exam is that it's done by an individual in a certain amount of time and measures that individual's knowledge – and it's arguable whether, delivered that way, that's actually what happens.

But there's no reason you couldn't have an ultimate project or an ultimate activity in a course that was more collaborative, and you could call it an exam. And you could have people work in groups. Particularly if it was a course that involved a lot of collaborative group work, wouldn't it also make sense to have the final examination be collaborative and group-based?

And you might say, we don't know what one individual contributed. But by the time you get to the last day of the class, you've already had other things happen in the earlier 14 weeks that give you a strong idea of what individuals have been contributing.

KIERAN  48:23
I actually have some experience with this as a student, which is why it even occurred to me to bring it up.

When I was in high school, my biology teacher had each of us write a paper on a wildlife species of our choice. And we had to present it to the class. Then he designed the exam to test us on the information we'd received from our peers in the class. But he took it further than that. The papers were an individual assignment, but after he handed out the exam, he suggested that, should we choose to do so, we could collaborate.

It was kind of comical how much time we wasted, because we didn't believe what he had just told us – that we could talk to each other during the exam. And the next thing that happened was a group of high-performing students in the class tried to create a clique in one corner of the room, kind of a monopoly on the most reliably correct answers.

But then one student suggested that we could all just sit at our desks. One of us could read the questions, and whoever had done the paper on that topic could call out the answer for everyone to write down. And all the heads swiveled around to the instructor at the front of the room, who just grinned at us and said, “It's your call.”

It was such a departure from any kind of testing I'd experienced before that, or since. Students in that class talked about the experience for weeks, and one of the comments was that they learned and remembered more about all the different wildlife species from the test than when they were supposed to be listening to the presentations, or when they were trying to memorize as much as possible to prepare for the exam.

That was over four decades ago, and clearly it had a huge impact on me, because it's really the only individual exam I remember from high school, all these decades later.

DAN  49:58
That's a great example of the core question: What's the purpose of the exam? 

So is the purpose of the exam to ask them to repeat memorized knowledge, or is the purpose of the exam to demonstrate that they understand some sort of professional process or learning activity such as collaboration? And the instructor, the professor, has to understand, Why am I giving this exam, and how does it connect to the learning objectives? 

That's the bottom line, as far as I'm concerned. And that really goes back to what we talked about at the top of the program, which is the idea of mastery. The assessment is an assessment of mastery, and many incremental steps within that process create a good learning environment – one that is also less likely to encourage cheating.

KIERAN  50:43
We're just about out of time and have really just skimmed the surface of this topic. But in all honesty, even if we could brainstorm more ideas, we're always going to be limited by the fact that Dan and I have specific areas of expertise and experience. I don't know anything about the different types of assessments used in a music program, or how I would take a more innovative approach to assessing students in an English lit class. 

But we do know from doing this podcast that our academic colleagues are doing amazingly creative things. And one of our primary goals for starting Wired Ivy was to create a space where we can share those ideas and insights with each other.

So the challenge we're offering our listeners is this: Look at your current approach to assessment. Ask yourself what you're doing now, and why. If you could feel untethered from tradition, expectations, and your own experiences as a student, what might you come up with for your online classes?

Might that new approach not only sidestep the whole issue of needing proctors and surveillance apps, but also provide better learning outcomes, better student experiences, and allow you to stop policing and get back to being an educator?

DAN  51:51
That, I think, is the perfect question for academics who teach online. So if you have any answers to any of those questions, let us know. And if we can provide more answers, we'll be sure to get them on air so you can listen to them.


OUTRO MUSIC


KIERAN   52:03
Now we want to hear what you have to say!  Send us your questions, comments, and suggestions. You can record a voice message, send an email, or leave a comment on our website, wiredivy.org. And help Wired Ivy grow by sharing, subscribing, rating, and reviewing us on your favorite podcast app.

DAN   52:20
Wired Ivy is wholly owned by Kieran Lindsey and Dan Marcucci, and we are solely responsible for its content. Views expressed in this podcast and affiliated media are those of Kieran, Dan, and our guests, and do not represent Virginia Tech or any other institution.

KIERAN   52:34
Our audio engineer is Star Path Images, and a license for our theme music, Breakfast with You, was purchased from SmartSound.

DAN   52:41
Until next time, this is Dan Marcucci,

KIERAN   52:44
And I’m Kieran Lindsey. 

KIERAN and DAN   52:45
Let’s stay connected!

INTRO
PREFACE
ACADEMIC INTEGRITY & CHEATING IN ONLINE CLASSES
TYING ASSESSMENTS TO LEARNING OBJECTIVES
ASSESSING PERFORMANCE OR MASTERY
BEYOND PROCTORS & SPYWARE
ATTENDANCE AS ASSESSMENT
EMBARGOING CONTENT
MICRO ASSESSMENTS
ASSESSING DISCUSSIONS
NEW APPROACHES TO WRITTEN ASSIGNMENTS
CONTRACT CHEATING & PLAGIARISM
WRITING ASSIGNMENTS AS COLLABORATIVE WORK
BROADENING THE DEFINITION OF ACADEMIC INTEGRITY
ASSESSING SKILLS
DEMONSTRATION OF MASTERY
ASSESSING EXAMS
COLLABORATIVE EXAMS
A CHALLENGE TO WIRED IVY LISTENERS
OUTRO