Tuesday, November 27, 2012

To take "Zombie Nouns" seriously, you must've had your brains eaten.

At first, I didn't feel like blogging about the NYT column on "Zombie Nouns," because I felt like I'd been spending too much time being critical here, because arguing against usage advice like this is futile, and because I knew Mark Liberman would cover it. In fact, I drafted this post all the way back during the summer and just let it sit. But now I've seen the column, nearly verbatim, pop up on TED-Ed as a fully animated "lesson," which presumably means some educators are actually assigning it to classrooms of fertile and impressionable minds! It really can't pass without comment now.

Helen Sword says that you should avoid using nominalizations, which she calls "zombie nouns." They're nouns that have been made out of other parts of speech. To take one of her examples, calibrate + ion = calibration.

What's so wrong with nominalizations? It's not exactly clear. She seems to take aim at unnecessarily jargonistic writing, which frequently contains novel coinages of words of all types, including nominalizations. So sure, being jargonistic to obscure your other intellectual shortcomings is not so good. But is it really, actually, the mere use of nominalizations that's doing the damage there?

She also seems to take a page out of the anti-passive-voice book, saying "it fails to tell us who is doing what," which, just like the same claim about the passive, is just not true. For example, in the sentence
  • My criticism of her column is a day late and a dollar short.
it's very clear who is doing what, even though I used a nominalization (criticism).

But on top of the half-baked usage advice, there are some more reprehensible social attitudes being expressed. For example, she lists epistemology as a useful nominalization for expressing a complex idea, but heteronormativity as one that only out-of-touch academics who are enchanted by jargon would use. First off, I would not want to use epistemology as an example when explaining what nominalizations are. What's it derived from? Episteme? Episteme has a Wikipedia page, so I guess it's that. Which brings me to the next issue here. It's embarrassing for me to admit, but whenever someone says or writes epistemology, I have to go look it up on Wikipedia. How does using epistemology not count as being out of touch with how ordinary people speak? Heteronormativity, on the other hand, is pretty easy to wrap your mind around. From Wikipedia:
Heteronormativity is a term to describe any of a set of lifestyle norms that hold that people fall into distinct and complementary genders (man and woman) with natural roles in life. It also holds that heterosexuality is the normal sexual orientation, and states that sexual and marital relations are most (or only) fitting between a man and a woman. Consequently, a "heteronormative" view is one that involves alignment of biological sex, sexuality, gender identity, and gender roles.
That's a pretty complex idea. But you know what? It's pretty easy to decode most of that meaning from the word itself, at least, if you're vaguely familiar with the politics of the time. Hetero(sexual) + normative + ity. It seems to me that she's saying more about her position on sex and gender politics here than she is about usage advice.

But who is this person, and why is she writing an opinion column in the New York Times and getting the full TED treatment? Just like everyone, she's selling something: the icing on the cake, and my reason for blogging about this at all. She has a book out called The Writer's Diet, which has an accompanying online Writer's Diet Test. No, it's not diet as in "food for thought and inspiration," like a Chicken Soup for the Writer's Soul. It's diet as in dieting, as in "drop 20 lbs and get the six pack abs you always wanted." Just paste a paragraph of your writing into the test, and it'll rate you along a five-point scale labeled:

  • lean
  • fit & trim
  • needs toning
  • flabby
  • heart attack territory

Ain't nothing like exploiting the collective dysmorphia of a nation to push your quarter-baked usage decrees. But in doing so, Sword actually clarifies the role that books like hers play. The analogy to the diet and weight loss industry is entirely apt. The dieting industry makes its money by sowing seeds of personal insecurity, then reaps its harvest with offers of unfounded, unscientific, and ultimately futile dieting pills, products, methods, 10 step plans, meals, regimes, books, magazines, etc.

I won't mince words. The NYT column and the TED-Ed video have the equivalent intellectual content of the magazines in the supermarket aisle promising you 5 super easy steps to trim your belly fat and get a sexy beach bod in time for the summer. And they serve the same purpose: to undermine the confidence of everyday folk, so that they may be taken advantage of by self-appointed gurus.

Thursday, November 15, 2012

Creative Work

Whenever I hear "creative" people describe their creative process, or more precisely their creative woes, I am always struck by the strong similarities to my own experiences trying to do science. I do consider myself to be trying to do science.

Take, for example, this excellent statement on self-disappointment at the early stages of your career from Ira Glass.

Ira Glass on Storytelling from David Shiyang Liu on Vimeo.

This almost perfectly sums up how I felt about almost all of the early work I did in graduate school. I can't say that I've actually gotten to the point where the work I produce meets my own personal standards, but it has been on an upward trend, and I'd say Ira Glass' advice is spot on. If you want to write good papers, just write a lot of papers, and if you want to be good at giving talks, give a lot of talks, preferably in a context where you feel comfortable being bad or mediocre.

That last bit, being comfortable with being bad, is really reminiscent of things Brother Ali says in this interview.

Ill Doctrine: Brother Ali Meets the Little Hater from ANIMALNewYork.com on Vimeo.

There are a few things Brother Ali says that really resonate with me.
There was a moment where I was so stressed out. And I'm like, "Man, everything that I ever did that people liked, I just got lucky. I'm a fraud."
...
It's a weird weird thing to have what you create also be your livelihood. What we create is also our sense of self. What we create is also the way the world views us.
...
And so I start thinking about it. Ok, it's not that I'm blocked. It's not that I don't have anything to say. It's that I don't know how to say what I need to say. Or it's that I don't think that it's going to be received well. Or it's that the people that love me and have supported me and have, you know, gave me the little bit of freedom in my life that I have, I don't want to let them down and I don't want to hurt their feelings by saying what needs to be said.
I think almost all academics of any variety feel this way from time to time.

But I wonder if some people might not be surprised that I would feel so similar to creative artists in the pursuit of my science, or might even take it as evidence that what I do is not science. That doubt certainly gets raised about linguistics occasionally. But I think these people (probably strawmen) are mistaken in thinking that science is not a creative process. This was recognized by Max Weber in his 1918 essay "Science as a Vocation" (which I've blogged about before).
[I]nspiration plays no less a role in science than it does in the realm of art. It is a childish notion to think that a mathematician attains any scientifically valuable results by sitting at his desk with a ruler, calculating machines or other mechanical means. The mathematical imagination of a Weierstrass is naturally quite differently oriented in meaning and result than is the imagination of an artist, and differs basically in quality. But the psychological processes do not differ. Both are frenzy (in the sense of Plato's 'mania') and 'inspiration.'
He also suggests that the best science and the best art is produced by individuals devoted to the science and art for their own sake, rather than being driven by the express goal of producing something new, for the sake of novelty.

The distinction that Weber draws between art and science is that science is necessarily committed to the abandonment of old science. That is, art from the Renaissance is still, and always will be, art, but science from the same period is no longer science. It has been superseded by more recent developments.

Anyway, here's the song Brother Ali was talking about, which I'm sure almost all academics can identify with, except for the suicide ideation, hopefully.

Wednesday, November 7, 2012

Nate Silver vs. the baseline

The 2012 election has been declared a victory for Nate Silver. Rick Reilly, for one, said as much.
For me, as a data geek, this is nothing but good news. There's been a lot of talk about how Silver's high profile during the election could have broader effects on how everyday people think about data and prediction. There's also been talk about how Silver's performance is a challenge to established punditry, as summed up in this XKCD comic.


Coming at this from the other side, though, I'm curious as a data person about how much secret sauce Silver's got. Sure, in broad qualitative strokes, he got the map right. But quantitatively, Silver's model also produced more detailed estimates about voting shares by state. How accurate were those?

Well, to start out, there is no absolute sense of accuracy here. When it comes to predicting which states would go to which candidates, it's easy to say Silver's predictions were maximally accurate. But what's trickier is figuring out how many he could have gotten wrong and still have us call his prediction accurate. For example, Ohio was a really close race. If Ohio had actually gone to Romney, but all of Silver's other predictions were right, could we call that a pretty accurate prediction? Maybe. But now let's say that he got all of the conventional battleground states right, but out of nowhere, California went for Romney. It's the same situation of getting one state wrong, but in this case it's a big state, and an anomalous outcome that Silver's model would have missed. Would his prediction be inaccurate in that case? What if it was Rhode Island instead? That would be equally anomalous, but would have a smaller impact on the final election result. Now let's imagine a different United States where all of the races in all of the states had razor-thin margins, and Silver correctly predicted 30 out of 50. In that case, we might still say it was an accurate prediction.

All of this is to say that the notion of "accuracy" is really dependent upon what you're comparing the prediction to, and what the goal of the prediction is.

So what I want to know is how much Silver's model improves his predictions over what's just immediately obvious from the available data. That is, I want to see how much closer Silver's predictions of the vote share in each state came to the actual outcomes than some baseline prediction did. For the baseline, I'll take the average of the most recent polls from each state, as handily provided by Nate Silver on the 538 site. I also need to compare both the averaging method and the 538 method to the actual outcomes, which I've copy-pasted from the NPR big board. (Note: I think they might still be updating the results there, so I might have to update this post at some future date with the final tally.)
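To make that concrete, here's a rough sketch in R of what I mean by the baseline. The data frame and column names (polls, results, forecast, state, candidate, pct, and so on) are placeholders I'm making up for illustration, not the actual layout of the 538 or NPR data:

    ## Baseline: a plain average of the recent polls in each state,
    ## lined up against the 538 forecast and the actual vote shares.
    ## All data frame and column names here are placeholders.

    # polls: one row per recent poll, with columns state, candidate, pct
    baseline <- aggregate(pct ~ state + candidate, data = polls, FUN = mean)
    names(baseline)[names(baseline) == "pct"] <- "baseline_pred"

    # results: columns state, candidate, actual_pct (from the NPR big board)
    # forecast: columns state, candidate, forecast_pct (from the 538 site)
    comparison <- merge(baseline, results, by = c("state", "candidate"))
    comparison <- merge(comparison, forecast, by = c("state", "candidate"))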

First I'll look at the Root Mean Square Error (RMSE) for the simple average-of-polls prediction and the 538 prediction, taking Obama and Romney separately. The "Silver Advantage" row is just the poll-averaging RMSE divided by the 538 RMSE.

                   Obama   Romney
Averaging Polls     3.3      4.1
538                 1.8      1.7
Silver Advantage    1.8      2.4
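For anyone who wants to check my arithmetic, the RMSE is just the square root of the average squared difference between the predicted and actual vote shares. With a comparison data frame like the placeholder one sketched above, the computation would look roughly like this:

    # root mean square error of a set of predictions against the actual outcomes
    rmse <- function(predicted, actual) {
      sqrt(mean((predicted - actual)^2))
    }

    # one RMSE per candidate, for each of the two predictions
    sapply(split(comparison, comparison$candidate), function(d) {
      c(averaging_polls = rmse(d$baseline_pred, d$actual_pct),
        fivethirtyeight = rmse(d$forecast_pct, d$actual_pct))
    })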

So it looks like Silver has definitely got some secret sauce, effectively halving the RMSE of the stupid poll-averaging prediction. I also tried out a version of the RMSE weighted by the electoral votes of each state, for a more results-oriented view of the accuracy. I just replaced the mean of the squared errors with a weighted average of the squared errors, weighted by the electoral votes of each state. The results come out basically the same.

                   Obama   Romney
Averaging Polls     3.2      3.1
538                 1.5      1.5
Silver Advantage    2.2      2.0
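The weighted version just swaps the plain mean of the squared errors for a weighted mean, with each state's electoral votes as the weight. Roughly, and again with my made-up column names (assuming an ev column of electoral votes has been merged into the comparison data frame):

    # RMSE with each state's squared error weighted by its electoral votes
    weighted_rmse <- function(predicted, actual, ev) {
      sqrt(weighted.mean((predicted - actual)^2, w = ev))
    }

    sapply(split(comparison, comparison$candidate), function(d) {
      c(averaging_polls = weighted_rmse(d$baseline_pred, d$actual_pct, d$ev),
        fivethirtyeight = weighted_rmse(d$forecast_pct, d$actual_pct, d$ev))
    })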

So what was it about the 538 forecast that made it so much better than simply averaging polls? I think these plots might help answer that. They both plot the error in the 538 forecast against the error in poll averaging.


It looks like for both Obama and Romney, the 538 forecast did more to boost the prediction in places where the candidates outperformed their polls than to tamp it down where they underperformed. The effect is especially striking for Romney.
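If you want to draw plots along those lines yourself, a ggplot2 call like the following would do it. This isn't the exact code behind my figures (that's in the repository linked below); it's just a sketch using the same placeholder column names as above:

    library(ggplot2)

    # error of each prediction relative to the actual outcome
    comparison$poll_error <- comparison$baseline_pred - comparison$actual_pct
    comparison$fte_error  <- comparison$forecast_pct - comparison$actual_pct

    # 538 error against poll-averaging error, one panel per candidate;
    # the dashed y = x line marks points where the two predictions
    # miss the actual outcome by the same amount in the same direction
    ggplot(comparison, aes(x = poll_error, y = fte_error)) +
      geom_point() +
      geom_abline(intercept = 0, slope = 1, linetype = "dashed") +
      facet_wrap(~ candidate)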

So, Silver's model definitely outperforms simple poll watching & averaging. Which is good for him, because it means he's actually doing something to earn his keep.

You can grab the data and R code I was working with at this github repository. There's also this version of the R code on RPubs.
