
poetry translations


Here’s a story right up my blogging alley. I’ve written quite a bit in the past on translation (about Horace and ESL/film), as well as a bit on technology and language. I wrote about how Google used the insights of Wittgenstein to overcome the problem of polysemy in search, but ended by questioning whether Google could ever overcome the complexities of poetry. Turns out Google has been laboring away at creating a machine translator of poetry.

If I understand it correctly, the poetry translator basically layers several poetic constraints on top of the standard translator: line length, rhyme, meter, etc. Google’s translator uses what Jaron Lanier calls a “brute force” approach to translation. That is, it doesn’t know the rules of grammar—it doesn’t even really have a dictionary. Rather, it scours its database and finds statistical correlations between pages and their existing translations. Put another way, it imitates by means of statistical analysis.
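
To make that layering a little more concrete, here is a toy sketch in Python of what such a pipeline might look like: statistically weighted candidate phrases, re-ranked by poetic constraints like syllable count and end rhyme. Everything in it (the phrase table, the made-up probabilities, the crude syllable and rhyme checks) is invented for illustration; it is not Google’s actual system, just a minimal picture of the idea.

```python
# Toy sketch: statistical phrase choices re-ranked by poetic constraints.
# All data and scoring here are invented for illustration only.

import re
from itertools import product

# Hypothetical phrase table: each source phrase maps to candidate
# translations with made-up corpus-style probabilities.
PHRASE_TABLE = {
    "le dormeur": [("the sleeper", 0.7), ("the dreamer", 0.3)],
    "du val": [("of the valley", 0.6), ("in the vale", 0.4)],
}

def syllable_count(text):
    """Crude syllable estimate: count vowel groups in each word."""
    return sum(len(re.findall(r"[aeiouy]+", w.lower())) or 1 for w in text.split())

def rhymes_with(line, rhyme_word):
    """Very rough rhyme check: do the last two letters match?"""
    return line.split()[-1][-2:] == rhyme_word[-2:]

def translate_line(source_phrases, target_syllables=None, rhyme_word=None):
    """Enumerate candidate translations and score them.

    Statistical score: sum of phrase probabilities (a stand-in for the
    "brute force" corpus statistics). Poetic score: penalties for missing
    the syllable target or the rhyme.
    """
    best, best_score = None, float("-inf")
    for combo in product(*(PHRASE_TABLE[p] for p in source_phrases)):
        line = " ".join(phrase for phrase, _ in combo)
        score = sum(prob for _, prob in combo)
        if target_syllables is not None:
            score -= abs(syllable_count(line) - target_syllables)
        if rhyme_word is not None and not rhymes_with(line, rhyme_word):
            score -= 2.0
        if score > best_score:
            best, best_score = line, score
    return best

if __name__ == "__main__":
    # "Le dormeur du val": aim for seven syllables and a rhyme with "alley"
    print(translate_line(["le dormeur", "du val"],
                         target_syllables=7, rhyme_word="alley"))
```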

Meta-lord of the cloud-lords of meta of!

Questions of quality aside (i.e., let’s assume Google can be completely successful and create passable, even good, poetry translations), would you really prefer Google’s translations of Rimbaud over, say, Ashbery’s? Aside from needing a translation in a pinch, I can only imagine an interest in Google’s translation that is analogous to the Turing test: an interest that asks the question “If I didn’t know—could I tell the difference between the results of computer and human translation?”

I have been reading Jaron Lanier’s book You Are Not a Gadget over the last few weeks. He makes a convincing point that Turing’s test is essentially the wrong question. Part of the function of asking “can it fool us?” is a desire to find a computer that can. As a result, we’re willing to dumb down our expectations of what it means to be human in the hope of finding that we’ve created machines that think. Ironically, it’s our very human desires that make the Turing test fail. The real judge of the Turing test should be a computer with a merciless set of criteria. No doubt somebody, somewhere has already realized this, and there is a computer slaving away at creating and judging its own intelligence.

Which brings me back to the question: why do we want to read Ashbery’s translations of Rimbaud? I see two motivations: the first is to read Rimbaud without learning French; the second is to read Ashbery reading Rimbaud. Google doesn’t read. To say that it does would actually change the definition of reading, wouldn’t it? Reading doesn’t imply a functional end (e.g., Ashbery producing a translation of Rimbaud), since it can exist without one (e.g., Ashbery reading Rimbaud in French).

Perhaps more importantly, Google doesn’t even use language in a way that we recognize as language. Some animals use what would rightly be called protolanguage. They can acquire a vocabulary, and perhaps even use it in creative ways (I heard a story once about an ape that put two words together to ask for a watermelon: “candy water” or something along those lines). At best, though, animals can only mash together vocabulary, without what we could refer to as “syntax.” Syntax is the ability not only to acquire vocabulary, but to manipulate it according to a deeper intelligence that categorizes it. It’s the difference between “Micah smile” and “Micah smiles.” The latter indicates not only that I have associated one thing with another (the action of smiling with the word “smile”), but that I can categorize it as a verb and thus deploy it in a sentence (oh, the difference an “s” makes). This syntactic ability expands when we think about relative clauses, which nest and hierarchize ideas. We even have words for pure functions of language (e.g., articles). Animals are unable to do this (unless, of course, you’re teaching a gorilla that it will die someday—perhaps death is the motivator of syntax!). Google uses statistical analysis to achieve, at best, a kind of protolanguage: it “learns” (a word also worth an essay) to associate certain phrases with one another. But, unlike animals, it has no will to use them.

All this is to say that there is something uniquely motivating about a person doing something. A Google poetry translation will never make me reconsider my life, except in a purely serendipitous (i.e., accidental) way.

I suppose deep down I am a personalist, believing there is something utterly unique and irreducible about persons. And I worry sometimes that the whole preoccupation with AI actually takes away from the real achievement of Google’s poetry translator: we clever people have found a way to essentially use an on-off switch (0s and 1s) to do something as complex as creating a passable translation of a poem. But as we humans are wont to do, we get distracted, venerating our creation rather than marveling at the deep mystery inside us which motivated us to create it in the first place.

Here’s a quick overview of the project if you’re interested in reading more about it. Here’s an interview about the project from CBC radio (scroll down to “Do Not Go Gentle Into That Good Digital Night”—below that, there’s also a very interesting interview with a Canadian student who created a computer program to analyze rap lyrics).