
Thursday, October 27, 2016

Robot Overlords

Maybe Black Mirror has broken me, but I've come to believe resistance is futile.

Engineers are going to develop tools that evaluate writing as effectively as a human.

We may be a long way from that reality, but even as we approach, what computers can glean from a text is impressive.

This worries a lot of people, and I get that.

Writing assessment is very complex work. It defines so much of what writing teachers do. But more importantly, writing assessment feels like something humans should own. Writing is, after all, a form of person-to-person communication. Right?

A computer-produced report on my writing is not a valid assessment, because it is not measuring a person's response to the text.

Only people can provide a valid assessment of a text written as person-to-person communication.

And you know what?
Teachers are people!

Teachers can do things computers can't do, like... be people.

This is a reasonable argument, up to a point.

See, teachers are people.
And people are the intended audience of most writing.
But here's the thing: teachers are not the intended audience of most writing.

This is particularly true in college composition. As students begin college, they enter a new stage in their writing development. They are preparing to join various scholarly, civic, and professional communities.

Writing teachers do not belong to all of those communities.

As a writing teacher, if I am the only person a student has learned how to "write for," I have failed.

So, we cannot say "the writing teacher provides a completely valid assessment."

The only completely valid assessment of a text takes place when the actual intended reader reacts to the text.

I need my students to learn to write for a variety of readers.

My job is to help students understand this: From now on, as a writer, you will have to use what you know today to learn the rules anew each time you write for a new audience or a new purpose.

There are tools I can give students to do that. So the function of college composition courses remains important.

But I need to leave some of the assessment to others - to my colleagues in other departments, to other students in the class, to other writing teachers, to potential employers, and the list goes on.

So, why not computers?

Are we afraid we are going to be replaced by robot writing assessors?

There is some legitimacy to that fear. If we believe that keeping people in "the assessment loop" is too expensive, or that the labor conditions writing teachers work under are unsustainable, then maybe it's time to welcome our robot overlords.

Maybe.

But that assumes we are satisfied with writing assessment as it is performed today. Are we comfortable saying that?
Try saying it: 
We know how to perform effective writing assessment that reliably predicts a person's ability to write across a wide variety of circumstances. 
I don't buy it. What we do is still very messy, and it's always going to be. That's what drew me to this work.

It was Dalí who said, "Have no fear of perfection - you will never reach it."

So, when IBM rolls out a tool that produces "Personality Insights" based on text samples, I don't see that as IBM working to replace human readers. I see IBM tinkering with just how much computers can learn from writing - and maybe just how much they can contribute to a more comprehensive assessment of writing ability.
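For the curious, here's a minimal sketch of what sending a writing sample to a service like this over HTTP might look like. The endpoint URL, version date, and basic-auth credentials are placeholders I'm assuming for illustration; this is not IBM's documented interface.

```python
# A minimal sketch of posting a writing sample to a text-analysis service.
# The endpoint, version date, and auth scheme below are assumptions for
# illustration, not IBM's documented Personality Insights API.
import requests

API_URL = "https://example.com/personality-insights/v3/profile"  # placeholder

def profile_text(text, username, password):
    """Send raw text and return the service's JSON personality profile."""
    response = requests.post(
        API_URL,
        params={"version": "2016-10-20"},       # assumed date-based versioning
        headers={"Content-Type": "text/plain"},
        data=text.encode("utf-8"),
        auth=(username, password),              # assumed HTTP basic auth
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    with open("blog_post.txt") as f:
        print(profile_text(f.read(), "user", "password"))
```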

Here's what they learned about me, delivered as a Sunburst Chart.

Reading that analysis, it feels like I visited a psychic, and not in the "Wow, that's uncanny" way.

It feels like someone sized me up as I walked through the door and then made some vague observations using language intended to sound specific.

In other words, it still feels like a trick. It's a cool trick, and it's rooted in some pretty sophisticated understandings of human behavior and communication, but... it still feels like a trick.

Trick or no, it is progress. That computer made observations about me based on my writing.

I hope these tools get better, and as they do, I hope to use them to add more dimensions to the work I do when I assess the writing my students produce.

As it stands today, I recommend my students gather assessment from their peers, from me, from tutors, from roommates, from professors in their major, from grammar-checking software, from study groups, and from anywhere else they can get it. Then I ask them to consider that feedback critically and use it to improve their work.

I can't think of a good reason to treat assessment software differently.

If someone suggested that such software would replace me and all those other sources of assessment, I'd be ready to tell them why that's absurd. But I think they know. If not, the computers will probably tell them...

Or will they?
