
Automated Writing Evaluation: Strengths and Weaknesses

Throughout this semester, we’ve discussed a lot of different ways we can use computers in language learning and other applied-linguistics tasks. One of the technologies that’s come up in this discussion is automated writing evaluation (AWE). Through my assistantship this semester with Elena Cotos, I’ve had the chance to read some of the literature on AWE systems and to think about the affordances and limitations these tools offer. In this post, I want to synthesize some of my thoughts to see if I can get them somewhat organized.

First, the strengths of AWE tools. Anyone who has taught a writing class knows that grading dozens of papers is time-consuming at best and painful at worst. Having a computer take care of that grading—or at least a sizable chunk of it—could free teachers to spend their time on activities that benefit their students more than pointing out grammatical error after grammatical error. It could allow a teacher to give more holistic assessment or to plan lessons that address the specific issues a class struggles with. From a writer’s perspective, I can see advantages to these tools as well. It can be really valuable to get immediate feedback on something I’m writing, and immediate feedback is exactly what AWE tools are designed to provide.

However, despite these strengths, the weaknesses of AWE tools are still, in my opinion, pretty significant. As a writing teacher, I try to help my students understand an assignment, identify who they’re writing for, and write in a way that will be clear and convincing to that audience. If I know that a machine is going to evaluate these students’ writing assignments, I’m almost forced to teach them to write for this machine and not for an audience who might actually be interested in what they have to say. The rhetorician in me cringes at that thought. Writing is inherently communicative. We write to communicate ideas from one human to another, not just to get a grade or to satisfy the requirements encoded in some machine algorithm.

I’ve mentioned that I can see the value in getting immediate feedback, but that value only comes after the evaluation tool has earned a writer’s trust. We’ve all experienced MS Word giving us bad advice, telling us to avoid a passive construction or flagging a fragment as problematic when in fact it’s not. It takes skill to interpret a writing tool’s guidance—to know when to listen and when to ignore—and I’m not sure that most students would be able to do that very skillfully. With practice, they might, but I think the risks are substantial.

I don’t think anyone is advocating for AWE tools to fully replace human evaluation of writing. And that’s a good thing. But in my view, we’ve still got a long way to go before I’ll be comfortable trusting an AWE tool to evaluate my or my students’ writing.


Creating an iBook for language learners

I’ve been designing books for a few years, ever since I took an intro to InDesign course as an undergraduate student. I was surprised by how much I enjoyed that kind of work, and I still look for opportunities to put my book-making skills to the test whenever I can. So being assigned to create an iBook for the final project in our 510 class was something I was really looking forward to.

I’d never used iBooks before now, but I’ve found it to be really user-friendly. Though I should qualify that a bit: coming from a program like InDesign makes some of the limitations of iBooks more pronounced. For instance, I’m surprised that I can’t include a guideline on a master page in iBooks and have it show up on the working pages. In our group’s iBook, we want to be sure that all the content appears on the same part of the screen, and the only effective way I could find to do that was to place a brightly colored square on the master page to indicate where content should go, with the plan to delete the square once we’ve placed all our content.

Even though iBooks has its limitations, I’ve been impressed with how easy it is to use. We’ve been able to include a lot of functionality through its built-in widgets (like the assessment widgets) and through third-party widgets as well. Using Bookry’s library of widgets has been really easy, and it’s given us the option to do things in our iBook that we wouldn’t have been able to do otherwise. For example, including areas where students can insert their own text is something that, as far as I can tell, is only an option in Bookry’s Form Builder widget.

We still have a bunch of work to do on the iBook, but it’s been a good experience so far. It’s been great to work with a group with so much expertise in language-learning pedagogy and design. I think we’ll end up with a book we’ll all be proud to show off.
