Solving the Writing Challenge…

Let Software do what Software Can,
so Teachers do what only Teachers Can

In two preceding posts, I explored the context around evaluating student writing: the time and effort teachers expend, the role technology could play, and our feelings about both. This post attempts to move past the hysteria and stagnation to gain some clarity about what we really want.

Our Real Goal

To begin with the obvious and inarguable: we want students to keep getting better at writing. As apparent as this might seem, we should never lose focus on this goal, because it seems to have been lost somewhere between what we know (both research and common sense) and what we do (school-based practices around writing). The realities of the classroom and a crowded curriculum, combined with… fear of change? Protection of the status quo? Honest regard for the art of writing? Choose your preferred obstacle… but now, LET’S GET OVER IT. We need to begin from the clear-eyed acceptance that whatever we’re doing systemically hasn’t worked. State and national NAPLAN results support this, and our own experience highlights that for most schools writing is among the most challenging academic skills to teach and learn. So if we accept the premise that our goal is to improve student writing, and that new approaches are required, what do we do?

Let Software Do…

My mantra, as a devout English teacher, writer and long-time ed-tech practitioner, is simple and clear: “Let software do what software can so teachers do what only teachers can.” Can software analyse student writing as well as a trained writing teacher? Of course not. But every day we all rely on things that software can do, such as spellchecking our work and facilitating editing. Such functionality is second nature to us. It is also about 30 years old. As quickly as technology has changed in that time, especially in regard to crunching data into profiles, noticing patterns and comparing disparate bits of data, can’t we imagine that the science of “machine reading texts” has evolved? It has. In little steps. Little, because language and communication are among the most complex things we humans do.

Moves by the Australian Curriculum, Assessment and Reporting Authority (ACARA) to trial machine reading of students’ NAPLAN writing seem to be a main cause of the recent hysteria. The argument against this is that no computational reading of a text can notice, let alone critique, such things as irony and poetic intent. Nor can it reward a particularly well-turned phrase. When we humans engage in our “labour of love”, scribbling detailed feedback on students’ papers, we are often looking for just such things. Unfortunately, we inevitably confront repetitious and limited word choice, poorly structured sentences and paragraphs that lack integrity – things we would hope students had addressed in earlier drafts of their work. Drafts?

What Software Can, so…

Interestingly, it was also 30 years ago that the Writing Process captured the interest of university researchers, writers and teachers. We noted that “expert writers” did things that “novices” did not, such as pre-writing, drafting, getting feedback, revising and editing for publication. We recognised truth in the statement that “good writing is re-writing.” Fast-forward to the present, and this wisdom seems to have been squashed by the daily mountain of other tasks every teacher confronts. Reading and grading the stack of required tasks in a curriculum is burdensome enough; who would ask for more? So how many students, at almost any level of schooling, engage in regular cycles of drafting, feedback, revision, feedback and polishing? It’s safe to say “probably not as many as we’d like”, knowing that such approaches not only develop better writing but, in fact, can develop writers.

Teachers do what only Teachers Can

I suggest that removing some of the burden of the writing process, as well as providing rich analytics and resources related to each teacher’s students, is where technology can help. The fact that software can’t help developing writers craft ironic, poetic or poignant prose doesn’t mean that it can’t help them with word choice, the mechanics of sentences or more sophisticated paragraphing and text structures. The way I see it, software can help students take ownership of their writing to the extent that when they submit their work to teachers, it represents their best efforts and warrants critical assessment. Again: “let software do what software can, so teachers do what only teachers can.”
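To make this concrete, here is a minimal sketch (in Python) of the kind of surface-level checks such software might run before a draft ever reaches the teacher. Everything in it – the revision_hints function, its thresholds and its tiny stopword list – is an illustrative assumption of mine, not Literatu’s actual method.

import re
from collections import Counter

def revision_hints(text, max_sentence_words=30, repeat_threshold=3):
    """Flag surface-level issues a student could fix before submitting:
    overused words and overlong sentences. Thresholds are illustrative."""
    # Naive sentence split on end punctuation; real tools use NLP parsers.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    # Ignore common function words so hints focus on content-word choice.
    stopwords = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "was"}
    counts = Counter(w for w in words if w not in stopwords)
    hints = []
    for word, n in counts.most_common():
        if n >= repeat_threshold:
            hints.append(f"'{word}' appears {n} times - vary your word choice?")
    for i, s in enumerate(sentences, 1):
        if len(s.split()) > max_sentence_words:
            hints.append(f"Sentence {i} runs {len(s.split())} words - consider splitting it.")
    return hints

for hint in revision_hints("The dog ran. The dog jumped. The dog was a good dog."):
    print(hint)  # 'dog' appears 4 times - vary your word choice?

Trivial as it is, this is exactly the kind of tireless, instant, judgment-free feedback loop that frees a teacher’s minutes for irony, poignancy and argument.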

In another article, we will explain in greater detail some of the analytic approaches we’re designing into our writing software at Literatu. A fair amount of this falls into the category of “secret sauce”, so we won’t divulge too much – but enough, I hope, to inspire your interest in joining a group of teachers trying out our beta version.

 

source image from Flickr – https://flic.kr/p/5GSBaB
cc license 2.0

Age of the Assistant – FANGST or Friend?

In the last post, I juxtaposed my past as a dedicated English teacher with the past decades’ amazing changes in technology. The reason for pairing these two is that while technologies have transformed nearly every aspect of our lives, their impact on helping develop better writers has been negligible. This post therefore sets the scene for how these two worlds can finally sync up.

FANGST

Wall Street has an acronym for the powerhouses of the new digital era: FANG, which stands for Facebook, Amazon, Netflix and Google. Interesting, isn’t it, the bite that this nomenclature suggests? Thinking about technology’s encroachment into human experience, I tend to include Siri (with her personal-assistant peers) and Tesla because of the new era of artificial intelligence they represent. There’s more to think about regarding these companies and their impact on human existence, but the rapid and fundamental changes they bring to daily life understandably raise our collective level of “FANGST” (technology-induced angst).

As different as they are, these FANGST technologies (Facebook, Amazon, Netflix, Google, Siri and Tesla) have two main things in common. First, they provide services in such powerful ways that they seem to verge on magic. This magic comes from Big Data and the algorithmic machine learning that happens behind the scenes. Second, such rapid change always stirs anxiety. It was true for the Luddites two hundred years ago, and for our elders last century when electricity, the horseless carriage and flying machines redefined human life.

Besides the shared anxiety that comes with such rapid change, our current variation has its own bitter-sweet flavour: sweet in that we all easily gain more of what we want (anywhere, anytime), but bitter when the media buzz reminds us that such Artificial Intelligence, personal assistants and automation will replace many of our jobs. In the area of technology and student writing, the media buzz has taken a particular slant…

Fear and Grading in the Age of FANGST

Because overall results in student writing have flatlined or trended backward, one aspect getting a lot of attention in Australia is the use of computer software to evaluate student writing. For decades, researchers and software companies have explored this area from many perspectives, including computer science, linguistics and writing theory. The research and approaches go by many names and often become highly charged. For example, in our current debate, it’s no surprise that where researchers see the science of Natural Language Processing (NLP), the media whips up hysteria suggesting an invasion of “Robo-Graders” ready to undermine the value of teachers and dilute the art of writing. Like all technologies, using software to analyse writing is neither inherently good nor bad. It’s all about what the software is truly capable of analysing and how that capability is used.
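To ground the term, here is a hedged illustration of what “analysing writing” looks like at the NLP level: automated essay evaluation in the research literature typically begins with simple, measurable features of a text. The sketch below (in Python) computes a few classic ones; the feature set is my own illustrative selection, not any vendor’s scoring model.

import re

def text_features(essay):
    """Compute a few classic surface features used in research on
    automated writing evaluation. Illustrative only - real systems
    combine hundreds of features with trained statistical models."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay)
    unique_words = {w.lower() for w in words}
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # Type-token ratio: share of distinct words; a rough proxy
        # for vocabulary variety.
        "type_token_ratio": len(unique_words) / max(len(words), 1),
    }

print(text_features("I like cats. Cats are soft. I like soft cats."))

None of this notices irony, and none of it grades art – but it is measurement, not magic. The same is true of the many technologies we’ve already built into our lives…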

Our Friendly Assistants

Each of us has already made peace with many of the assistants new technologies provide. We choose when we want software to help and when we don’t. We also choose the kind of help we want from software. For example, we’d rather have software do the mind-numbing aspects – those that are not intrinsically motivating or are prone to human error. We’re also pretty happy when software suggests possibilities based on crunching data that we know is there but can’t see or access.

Do we want to drive into a new city without GPS? Do we want to plan a trip without the Web? Do we want to chat with friends using Morse code? Do we want students to write essays using stone tablets and chisels? And if we recognise that students become better writers by writing more and more often, do we want teachers to read thousands more assignments – and how is that fair when colleagues in other subjects don’t share the load? Of course we want some technologies to help us. So let’s begin with a commitment to be reasonable and use what works to further the goals we have.

Which is the topic of the next post:  Letting Software Do What Software Can

 

The Next Era of Essay Evaluation: In the Beginning…

Personal Snapshot: 1995

In 1995, I transitioned from English teacher to Web-based Educator. At the time, I calculated that in nearly a decade of classroom teaching, I’d graded over 10,000 student essays. At an average of seven minutes per essay, that’s roughly 1,167 hours – a conservative estimate of 145 eight-hour days of unpaid work. Call it a labour of love, because I was dedicated to not merely giving students a grade, but providing detailed comments at the word, sentence and paragraph level.

Wiping away the misty-eyed idealism of a young teacher, I have to admit that the rushed average of seven minutes I put into each essay was probably more time than my loveable but otherwise-focused students put into reading my comments – and probably more than they spent using those comments to improve their texts and develop as more expert writers. It’s no wonder that I experience a visceral hair-raising akin to a horror movie when I think about grading stacks of essays…

Clearly it’s not sustainable or fair to ask some teachers to give up such unpaid time when colleagues in other subjects don’t evaluate student writing. So is it any surprise that student performance in writing is a worry?

At the same time, in another part of the…

Interestingly, at this same time a new era was just dawning with a crazy thing called the World Wide Web; in particular, a crazier upstart company was taking the marketplace by storm even as it lost money every quarter: Amazon. We all know what’s happened with Amazon and its amazing success, but it’s important to highlight what has powered that success. It’s not lower prices or better advertising – the old-fashioned approaches to building a business – but algorithms.

Jeff Bezos and his team understood that knowing their customers – at a new, more granular level – was the path to success. Some readers might remember the early fruits of this data profiling: because you bought one book, the website offered some pretty lame suggestions along the lines of “others who’ve bought this book also bought…”
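As a toy illustration of how that co-purchase logic might work (my own simplified sketch in Python, certainly not Amazon’s actual algorithm):

from collections import Counter

# Made-up purchase history: each set is one customer's bought books.
orders = [
    {"Hamlet", "Macbeth", "King Lear"},
    {"Hamlet", "Macbeth"},
    {"Hamlet", "Dune"},
    {"Dune", "Neuromancer"},
]

def also_bought(book, orders, top_n=2):
    """The simplest form of 'customers who bought X also bought Y':
    rank books by how often they co-occur in orders containing `book`."""
    counts = Counter()
    for order in orders:
        if book in order:
            counts.update(order - {book})
    return [title for title, _ in counts.most_common(top_n)]

print(also_bought("Hamlet", orders))  # 'Macbeth' ranks first (2 co-purchases)

Lame suggestions, indeed – but feed the same counting trick millions of orders and real machine learning, and it starts to look like mind-reading.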

But the code has gotten better, and we’ve become accustomed to the benefits of algorithmic recommendations. So we do look at what others bought; we appreciate Google’s tailored search results and Facebook’s channelled news feeds; we consult TripAdvisor for hotels and restaurants; and we’ve come to rely on apps, personalised maps, and streamed music and video to enhance our lives. We’ve gone from the World Wide Web, social media and phone-based Internet to enter fully into The Age of the Assistants! (coming soon!)

 

images from WikiCommons:

https://commons.wikimedia.org/wiki/File:FileStack_retouched.jpg
https://en.wikipedia.org/wiki/File:The_Mummy_1932_film_poster.jpg
