Three Fingered Fox


I don’t work in tech. I’m not an influencer, a maker, a creative, or a guru. I’m neither disrupting nor evangelizing. I don't have a snappy slide show for you.

more Taylorism, please, we’re cogs

Read this link: “Having doctors squander their time completing multiple checklists that someone else has determined are necessary and which could well be done by others is a huge waste, and leads directly to worse outcomes.”

This is the same criticism I have of the Obama administration’s approach to education – “outcomes” are defined bureaucratically, by adherence to given algorithms, rather than letting highly trained, experienced people work autonomously, the best way they know how. Stupid checklists are substituted for the expertise of people who used to be honored professionals but are being de-skilled, turned into fungible clockworks for the completion of these checklists. It’s an inherently mistrusting, technocratic, anti-democratic system, insisting that all proper procedures emanate from the dead hand of the central office, and that the accumulated knowledge of the actual workers, being particular, variable, and hard to express in numbers, counts for nothing. The Administration’s proposal to rank-order universities and scholars is in the same spirit, and is equally depressing.

Another way to view it is that the last redoubts of autonomous work are finally being subjected to the capitalist Taylorism that long since wrecked the dignity of craft. It’s not a partisan thing, either – both the Republicans and the Democrats love it, because it produces nice graphs that can be taken to the voters, and, all-importantly, reduces taxes. But when you remove what Aristotle called ‘phronesis’ (the practical wisdom of training and experience) from any human agent-patient relation, we are all reduced to cogs in some giant planetary gear of Molochian justification.

The Use of Lupita Nyong’o as Hollywood Success Story

I like Lupita Nyong’o. Everyone does, right? She’s great.

This is not about her.

It’s about what you might call the white liberal construction of Lupita Nyong’o.

In this construction, Lupita Nyong’o was an attractive nobody who was from “Africa” in some dim sense — probably from some village. That’s what they have in Africa, right? Villages? And also somehow from Mexico a little, too. They have slums there, probably. Whatever — she might as well have been from the poor part of the Moon.

Then she somehow got “discovered”. That’s what happens to beautiful women from faraway villages and/or slums who get famous, right? They get “discovered”? “Plucked from Obscurity”, as the Daily Beast put it.

So she’s from some kind of remote, romantic favela — one has the mental image of her standing alone, barefoot, in a dirty Mexican street, or in a baked dusty Saharan hardpan, when suddenly the disembodied camera-lens of white approval lands on her. And the rest is history.

The real story is more prosaic and more rich. Lupita Nyong’o’s background is one of enviable privilege. Her father is a political scientist with a University of Chicago PhD, one of the most important people in the Orange Democratic Movement — the ruling party of Kenya — and a high minister in the current Kenyan government; at a rough estimate he earns thirty thousand times as much money as the average Kenyan. Having him as a dad would be like being John Kerry’s kid. Lupita grew up in Mexico, New York, and Nairobi, and speaks four languages. She went to Hampshire College and Yale Drama and did PA work in Hollywood. Her life trajectory is full of the kind of experience that money very definitely can buy.

She’s obviously brilliant and beautiful and a great actress with a huge career ahead of her, and that’s all to her credit. She wasn’t “plucked from obscurity”, she went to a really famous, elite drama school and then went to Hollywood and worked her way up. That she comes from money and influence and has a gold-plated acting education separates her not at all from the rest of the Hollywood nomenklatura.

What’s interesting to me is that apparently white liberals simply can’t relate to her unless she had a ‘slumdog’ upbringing, with her life a redemptive story that flatters contemporary whiteness. We’re not racist anymore! We’re not classist anymore! You can be anyone from anywhere — talent, beauty, and moxie are all you need.

But remember, the slumdog kids got tossed back where they came from. The camera loved them and audiences did too, but they were still disposable. Alterity always is. White Hollywood has always seen non-white Others as essentially interchangeable. 

Lupita Nyong’o’s staying power in Hollywood isn’t guaranteed by her tremendous talent, or her beauty or moxie. In Hollywood, being a talented, beautiful actor with moxie who is also a dark-skinned outsider gets you stuffed back in the bag when they no longer need you around to flatter themselves that they once gave you a job or an award. Lupita will be able to continue because she’s not really an outsider — or, she is one only partly. The money, the privileged upbringing, the elite education — that’s the kind of currency it’s still essentially impossible to get along without.

But the prominent role of privilege in her story is a secret that white liberals keep from themselves. We need to; it helps us feel good about ourselves for feeling good about her. To us, Lupita was from nowhere and nothing, and used her past as a nobody to inform her blazing performance as Patsey the slave, the brutal honesty of which surely makes redneck racists feel abashed, and makes white liberals feel proud. We enjoy the daring of our approval of Lupita, and the bravery of our disapproval of chattel slavery.

White Hollywood is now ready to face that past, the past of slavery and vile racism, and put it behind us. White Hollywood feels good about facing that past, as past. It wants you to watch the Oscars, and feel good about it, too, as a white liberal in flyover country. Hurrah for us, the white liberals! …What? Of course it’s all in the past — look at Lupita! She came from Black-nowhere and we let her be a star.

Obama’s president and Lupita has an Oscar — things are great.

Fruitvale Station? Never heard of it.

Duck Dystopia

So Phil Robertson, whose tangled beard and folksiness until yesterday graced A&E’s “Duck Dynasty”, doesn’t like gays, and isn’t too clear that he likes African-Americans either.

It’s a shock, of course. Nobody thought that if we kept cramming doltish shitheels onto the airwaves, one or two of them might turn out to actually mean it. Two days ago we thought Phil was a role model; now we know he’s not. It’s a loss, to which we all feel called to respond.

As usual, it was Sarah Palin who was loudest — perfectly enunciating the ideological incoherence of the right. “Intolerants!” she cried. Meaning those who are intolerant of Phil’s intolerance. The word is already a meme.

This, of course, is another signal contribution of the “liberal fascism” discourse, the discourse of the oxymoron. Intolerance of intolerance is hypocritical, the line runs. If liberals really meant their liberalism, they would be tolerant of intolerance in the same way they are tolerant of difference in gender, race, and sexual orientation.

Even if the accusation were true, it would mean only that the right and the liberal left were morally equivalent; it would mean only that the left is sometimes intolerant, just as the right proudly already is. The accusation amounts to no more than the basest possible tu quoque: “at least we know we’re scum.”

But, of course, it’s not true. It’s neither hypocritical nor incoherent for an opponent of intolerance to be intolerant of intolerance: that’s what it means to oppose intolerance.

Tolerance is the basic value of liberalism that allows for difference in gender, race, or sexual orientation: to tolerate on equal footing its own opposite, intolerance, would be to undermine itself, to prefer exclusion of, and damage to, the very system of differences the existence of which tolerance was meant to enable. Intolerance of intolerance is tolerance. The accusation that liberalism is hypocritical when it does not tolerate intolerance literally cannot be true: it is purely illogical. Liberalism is at its most consistent and true when it is intolerant of the intolerant.

It is of course deeply strange to see conservatives coming out against intolerance, as it always is whenever they find a racist, sexist, or homophobe whom they think they’re not hearing enough from. “Intolerants!” is an utterance of hilariously pure psychological projection. But Sarah Palin is a professional yahoo; riposting her arguments argumentatively is not the point, as they are intended for people who either do not know what an argument is, or do not care.

More interesting is the form which the liberal intolerance to intolerance is currently taking. Phil’s conservative supporters have asked that he receive “freedom of speech.” Give Phil his First Amendment freedom! they say. Well, Phil has his First Amendment freedom, because that is a freedom secured only against the government. The First Amendment does not much govern our interactions with our employers. Those are governed more by the freedom of contract that exists in American capitalism — and that is exactly the reasoning under which the right wing has argued for the firing of Martin Bashir from MSNBC and the right of Hobby Lobby not to pay for insurance that covers abortion and contraception.

So it’s a little surprising to me that the liberal reply to Phil’s conservative backers has amounted to: “Phil is free to say what he likes in GQ, and A&E is free not to employ him.”

Legally, this is true. That is a correct statement of the actual status of the First Amendment and the freedom of contract in America. But it’s still an uncomfortable thing for anyone nominally on the left to advocate. Phil Robertson is, of course, a public figure, and with public figures, all bets are off — but there is still something to be said for the general freedom not to be fired for speech not directly connected to your job.

It’s not clear that, as such a public figure, Phil Robertson enjoys any speech not directly connected to his job, just as Martin Bashir didn’t: Phil’s job is to be a body of free-floating redneck signifiers that can be plastered on camo hats, t-shirts, and Christmas wrap sold at Wal-Mart. But let’s generalize the case. It is surprising to many, but in most parts of the country you can be fired for most any “protected” speech, if your employer takes exception to it. The same reasoning being applied by liberals to Phil allows your asshole boss to fire the person down the hall for putting an Obama or Planned Parenthood sticker on their car. (Yes, people have been fired for that, and yes, the courts have said it’s okay.)

Most people can’t risk losing their jobs, and if they know their boss is likely to retaliate against political speech, their political speech is effectively constrained. In this way, freedom of speech is a version of the mere freedom to starve.

And who most suffers in this way for their political speech? It’s not rich, white, Christian conservatives.

Liberals are supposed to like freedom of speech. Indeed, it’s one of the arenas in which the unproblematic differences of which they are so tolerant are supposed to be allowed to manifest themselves. Thus, I find it quite strange that the liberal argument against Phil Robertson has so far taken the form of an appeal to the freedom of contract that directly cuts against the actual freedom to speak.

Conservatives are wrong to demand that Phil be given his “First Amendment” rights — he’s got them. But there is an important intuition inside all the ignorance: if you don’t have any rights of free speech against an employer, your right to free speech doesn’t mean much. If your right to free speech is trumped by your employer’s commercial concerns, your right to free speech doesn’t mean much. Profit is what matters.

That’s the only principle which is ever enhanced in these controversies. The Chick-fil-A kerfuffle was paradigmatic. The matter was framed as a conflict of two ideological positions: one favored by the owner of the restaurant chain, the other favored by people who boycotted it. Bigotry and anti-bigotry were both placed as “controversial” positions in equal contention; the only principle that seemed to be agreed-upon by all parties, even to the extent that it hardly needed to be stated, was commercial freedom, the freedom of contract. The owner of the restaurant could do with his money what he wanted, and give to homophobic groups; his opponents could do with their money what they wanted, and eat elsewhere. Contractual freedom was the point of ideological agreement that created the terms of the debate for everyone.

But for anyone on the left, that is a desperately retrograde premise. It atomizes and commercializes the basis of communal action against a sick status quo. It rules out precisely the notion that some questions of value ought not be merely fiduciary, that they should be subjects of a more concerted and deeper debate on a political level. This is something the right today may understand better than the liberal left.

Given that liberalism is, and has to be, intolerant of intolerance, it might be both more consistent and more forthright for liberals to claim that hate speech — and let’s face it, what Phil said was not just an abstract statement of disapproval, it was nothing but revulsion and hatred — ought not enjoy social and legal sanction in the first place. The “marketplace of ideas” is a threadbare fiction: in a sphere of contending memes in which nothing is taken to be of greater or lesser inherent value, all that stands firm and invariant is the ideal of the marketplace itself, the locus of one-to-one contractual, commercial relations, that leaves out precisely solidarity, ethics, and robust politics.

In fact, our reified ideal of the marketplace omits the regulatory requirements necessary to the proper functioning of an actual marketplace: there are rules against false advertising and fraud in real markets, because those destroy the basis of commercial trust necessary for a market to work. But in the “marketplace of ideas”, we hear that the only remedy to bad speech is more speech. Even when the bad speech is of a sort that distorts, erodes, and ultimately destroys the liberal polity itself, we are told that the only thing to do is to close our pocketbooks or post to twitter. To behave this way is all too tolerant of intolerance. It is an idea of politics drained of all blood, but not drained of all import.

Its prevalence explains why we find ourselves inhabiting a dead politics whose carcass is being picked clean by sand fleas like Sarah Palin.

The iconicity of “peaceful resistance”

The New York Times’ Mandela Obituary Headline Couldn’t Have Been More Wrong


Before it falls down the memory hole, it should be noted that the online US edition of the New York Times marked the sad passing of the great Nelson Mandela with this odd headline: “Nelson Mandela, South African Icon of Peaceful Resistance, Dies”. (They’ve since changed it to “South Africa’s…Moral Center”, which sounds like a place FIFA could have held business ethics conventions during the last World Cup.)

“Icon of Peaceful Resistance” makes it sound like Mandela was an advocate and practitioner of nonviolence. He wasn’t. Apartheid was above all a socioeconomic system of structured viciousness: the whites were not going to give up their advantages without a fight. The struggle against Apartheid was necessarily bloody. The symbolic force of an “icon”, no matter how noble its martyrdom, could not have defeated Apartheid. It had to be defeated at the cost of lives. Mandela always knew this.

Mandela founded and ran Umkhonto we Sizwe, the paramilitary wing of the ANC, which carried out armed resistance and a bombing campaign. The bombings mostly targeted high-profile pieces of property, but were nevertheless responsible for many civilian deaths. Umkhonto we Sizwe also executed collaborators.

Botha would have freed Mandela in ‘85 if he’d agreed to renounce armed struggle; Mandela courageously refused. On his release in 1990, Mandela repeated:

“The factors which necessitated the armed struggle still exist today. We have no option but to continue. We express the hope that a climate conducive to a negotiated settlement will be created soon so that there may no longer be the need for the armed struggle.”

He was right on both counts.

Don’t think he wasn’t reviled for it. In the eyes of many among the Western elites, Mandela was a Soviet-dominated terrorist until the day he walked out of jail, and into iconicity. Reagan put the ANC on the State Department terrorist organizations watch-list; this wasn’t undone until 2008. Reagan vetoed the South Africa sanctions bill, and was overridden — not before Jesse Helms filibustered the override vote.

Then there were even more charming expressions of Western antipathy to Mandela’s violence, like this poster produced in the 80s by the UK’s Federation of Conservative Students, which I will reproduce without further comment:


Poster by the UK’s Young Conservatives (thanks to @sarahlicity)

But in American bourgeois fantasy life, the only good liberation struggles are Gandhi and King, and if a struggle does not match that mythologized template, could not have matched it, it will be roundly condemned while it is ongoing, and if it happens to be successful (despite us), its history will be rewritten.

The dialectic is a familiar one — familiar and a little sad. There is a way in which the myth of peaceful resistance is flattering to the oppressor and disabling to the oppressed. It’s as much the oppressor’s narrative as anyone’s. “You ought not to fight us with more than the image of your own broken body,” it says, “for we who oppress you are good and rational — most of the time. We have the same interests as you, and understand that you enjoy the same basic rights. We, your rulers, simply need to have our consciences pricked from time to time.” By couching the antipathy as a mere moral lapse, the oppressor is permitted simultaneously to deny the actual material basis of the social division and hence the necessity for a struggle for liberation that is more than merely symbolic, and to perform a mental splitting-off from its own identity of those aspects of itself it can now pretend were inessential deviations from its rational, humanistic core. Just as the United States broadly did with the benighted South of Bull Connor and the Klan. As if the story of American racist oppression was one of mere regional ideological peccadillo and not one of the founding principles of the whole nation’s economic structure. As if the story of Apartheid were simply those nasty Afrikaners and their gauche racism. They’d probably lived in Africa too long and allowed its “tribalism” to rub off on them, and so deviated from the European universalist norm. Still, one of us in the end, eh?

That’s the funny thing about colonialism — even when it’s visible, it appears only in ideological garb flattering to the oppressor.

In fact, this is such a reflex that the Times probably wrote that headline without a second thought, and it was only after a few thousand derisive tweets that they remembered that there is occasionally such a thing as real history, and they quietly changed it.

This post also appears on Medium.com

A Visible Darkness

Looking backward, I’m amazed I didn’t know. It was all so obvious.

I think I was touched by depression the whole time I was growing up, but I remember one day in particular. It was the day I was touring the State U campus before matriculating. I’d wanted to go somewhere else, but there’d been a last minute financial catastrophe in the family — ironically, due to my father’s depression, about which nobody had ever breathed so much as a word to me — and I couldn’t go there. I was walking the campus, and it was a grey, dull day, and quite frankly I felt above the place, like I was going to be wasting the next four years in a grubby hick backwater. (I was wrong about that, in many ways, but leave that aside.)

This was the moment: a cloud drifted across the sun, and it was as if this dark scum or membrane rose up out of the ground and slowly covered the sky, as though I were in the center of a vast glass dome that was being coated from the bottom up to the top by a pellicle of filth and grime. Eventually the scum met at the apex of the sky and everything was qualitatively darker and dimmer and blurrier after that.

The cloud eventually moved off the sun, but nothing brightened. It was as if the sun had been replaced with a lesser, more pallid sun.

The sky didn’t brighten again for years and years.

This is a moment whose significance I’ve reconstructed retrospectively. At the time, I thought of it as just another bad day. But the dimming-out of the light is something that many depressives describe, and they often talk about the remission of their depression in opposite terms — a beam of light finally cracks through an endlessly grey sky. There is a brightened patch that slowly grows bigger, and as it grows bigger life seems possible to live.

But in college, in the next few years, I thought I was experiencing a perfectly rational self-hatred and self-disgust. I could rarely sleep well — particularly at night — and I would walk the campus in what I know now was utter psychic agony, but which then I thought was merely a natural regret. I can’t believe I thought that, now. I can’t believe that I thought the reason that in the middle of the night I’d sit for hour after hour in patches of deep shadow off the edges of paths, or lie curled behind furniture in dorm lounges on floors I didn’t live on, wanting to tear the skin off my body just to distract myself from the indescribable pain in my head, was because of how very clear-headed I was, how clearly I saw that I was a bad person.

It’s a question of experience. You go through certain kinds of pain for enough time and your scale is changed. It’s not, somehow, that a given level of pain is less painful. It’s that you’re thoroughly familiar with it. Your pain is a traveling companion. It’s always there, sometimes a little stronger or a little weaker, but always there. You plunge a hand into a pot of boiling water for the first time, and you’re disabled, as much from the shock. You do it every day…

Then I happened to read Darkness Visible by William Styron. I know, I know, it would be from a book. I’m sure people in my life tried to tell me. You know what? I don’t remember a single instance. Not a one. It just rolled off.

I remember once I was sitting on a landing in the student center, staring ahead, in so much pain that I was in a kind of dissociated fugue. I hadn’t been back to my dorm in a while. My girlfriend (and maybe some other people) finally came to find me. She asked me how I was. I told her, and whatever it was I said made her burst out crying and run away. I don’t remember what she said or what I said; I remember not being able to see or hear very well because the pain in my head had begun to eat away at my ability to process ordinary sensory information.

I know I wanted to die. Maybe that was it.

Anyway, there are passages in that book by Styron — it’s his memoir of depression — that captured perfectly how that felt. The identification was complete and immediate. And then I knew. It was a thing. I had a name for it.

The next day I made an appointment with a psychiatrist and shortly started my first antidepressant.

It’d round things off well to say “…and it’s been great since,” but, like many people — maybe even most people — who take antidepressants, I’ve found the response is only partial. It’s a struggle. It always will be. Sometimes I’m still in a lot of pain.

At least now I have a way of separating depression from myself. Depression is no longer simply how I understand myself. We are separable. My foe and I. To attack my foe is not necessarily to destroy myself.

Law Like a Hole: one unimportant bourgie’s experience with the Affordable Care Act (so far)

There’s an NBC News investigation — “Obama Administration knew millions could not keep their health insurance” — going around the internet right now. It made Reuters; that’s how I knew about it. (I don’t watch a lot of tv news.) The NBC claim is a little misleading, but it points out a genuine and serious problem with Obamacare — one I happen to be having.

The reason people can’t keep these plans is that the coverage they offer is no longer legally sufficient. They have too many exclusions. The plans that will replace them are at least nominally superior; that should be good news. But the replacement plans are not necessarily cheaper.

When rewriting the policies, the insurance companies often took the opportunity to raise their rates; not surprising, because the policies must cover more conditions and sicker people. A subset of customers — like me — has found their new insurance will be considerably more expensive, because they either fall into the Medicaid expansion “hole” or find that the subsidies they do get from the healthcare.gov Exchange are not enough to make up for the increase in rates.

Let me explain my own position — it’s the one I know most about. I’m solidly middle class, and healthy, which means my situation is hardly representative of people having serious difficulty with the nightmarish American health care market, but does show that the health care law is having all kinds of problematic effects even for bourgeois folks who were supposed to find the law unambiguously helpful. As an electoral proposition, the Affordable Care Act was marketed by the Obama administration to people like me.

I don’t have a health plan through work, so a few years ago I bought an individual policy off the rack from Blue Cross. It’s not ghastly, but I don’t think it’s a very good policy — it has a high deductible. Of course, “high deductible” is a relative term these days; ten years ago this policy’s deductible would have seemed obscene, vertiginous, but now it’s normal. In fact, it turns out to be the near-equivalent of a policy Blue Cross is selling as a Gold plan on the Exchange.

The exact policy I have is going away because it has exclusions that are no longer allowed. So I need something new.

When the law passed, I was personally very pleased, because I stood to benefit a lot — I was going to be eligible for Medicaid. For anyone Medicaid-eligible, the law doesn’t offer a subsidy to purchase a policy from the Exchange. Why would it? Nobody thought of that. We’re meant to get our coverage for nothing. (Nothing, except the taxes we pay, of course.)

The Medicaid expansion was supposed to be mandatory for every state, but the Supreme Court decision that otherwise upheld the Affordable Care Act struck down that mandate. 15 Republican-led states went on to refuse the Medicaid expansion outright, 7 more probably will, and 5 more are considering alternate models that, even if they’re approved by HHS, won’t benefit as many people or benefit them as directly. Only 21 states have accepted the expansion of Medicaid outright (3 more might). That creates a gaping “hole” in which people newly eligible for expanded Medicaid (or people who were already eligible for Medicaid, but who have been unable to enroll because their states are broke and enrollment has been closed) actually get neither Medicaid nor subsidized insurance through the Exchange.

Because the sticker prices for individual policies are generally going up, Obamacare has done people in my situation a lot less than no good: to maintain coverage at the same level, we have to pay a lot more. My options are to buy a new individual policy off-the-rack from an insurance company, or to buy a policy through the Exchange at full sticker price. In both cases I’m looking at paying thousands more next year to maintain my high-deductible policy.

Of course, right now it’s academic. There are some steps in the enrollment process I can’t complete because the healthcare.gov site is still broken. The state I live in is one of those that does not have its own exchange, so the whole process has been on healthcare.gov, and — you know the story.

Pro tip: if you are going into the national Exchange, do not use the web site at all. Start and complete the process on the phone or with a human broker. Once you have started on the web site, you must finish on the web site; a second application will cause “an issue”. I found that out when I called — it only took a minute to get in touch with a person. They just couldn’t do anything to help me. I’m stuck until the web site is repaired. Whether that will be in time to start coverage on Jan 1 is anyone’s guess; it’s been 30 days and the site still can’t perform the most basic functions.

I could just go uninsured and pay the $95 penalty. That number really tells the story. You can see the amazing disconnect of intention and reality by comparing the premiums, which will cost most people hundreds (if not thousands), with that small penalty.

I’ve considered some exotic options. Incredibly, after Obamacare, it may actually be cheaper for me to buy the least expensive policy that also offers prescription coverage, and to invest in a concierge doctor.

Of course, that’s already to say I’m very fortunate. Most people in my situation — and it’s tens of millions — will have to refuse coverage and get nothing but a poke in the eye, because they don’t have thousands to spend on premiums for a policy that will still require thousands more to pay in deductibles, copays, and coinsurance.

I could move to a state that didn’t refuse the Medicaid expansion. But pulling up stakes isn’t so easy. Internal exile wasn’t supposed to be the rational solution to the health care crisis.

This is just one story, in the genre of “unintended consequences”. I don’t advance it to elicit pity. To repeat, I’m still one of the lucky ones. I will have insurance. If I get sick, I will be able to get the treatment I need.

I offer it as a case study of how the Affordable Care Act is actually working, on the ground.

The premise of the Exchange was to make group-like policies available to individuals who either didn’t have insurance or were struggling under premiums that were breaking them. For that goal, it has already failed millions.

Kimmel and Kanye, Žižek and Chomsky

Jimmy Kimmel and Kanye West. Noam Chomsky and Slavoj Žižek. Locked in gladiatorial beef.

In fact, these two beefs are the same beef. And I’m with Kanye and Žižek, all the way. They may be goofy and hyper and self-indulgent. They may be wrong. But Kimmel and Chomsky are much worse than wrong.

Chomsky’s criticism of Žižek and Kimmel’s criticism of Kanye are about very different subjects and are enunciated in very different registers, but they amount to the same thing: “I don’t know what you’re talking about, so you can’t be saying anything. I have admittedly done very little, maybe nothing, to try to understand you, but I am confident what you say is not to be taken seriously, indeed, is hardly even intelligible.”

Kimmel:

but I don’t know fashion. And to be honest I don’t follow a lot of what Kanye West has to say.

Chomsky:

What you’re referring to is what’s called “theory.” And when I said I’m not interested in theory, what I meant is, I’m not interested in posturing–using fancy terms like polysyllables and pretending you have a theory when you have no theory whatsoever. So there’s no theory in any of this stuff, not in the sense of theory that anyone is familiar with in the sciences or any other serious field. … See if you can find that when the fancy words are decoded. I can’t.

We have a word for that in philosophy and logic: arguing in bad faith. What Chomsky and Kimmel advance here is not the claim that the other is wrong, because no counter-claim is given. It’s not satire or even caricature: satire and caricature are (hyperbolic) critiques that arise from principled disagreement. It’s not a straw man, because it doesn’t pretend to be arguing with a genuine interlocutor in the first place.

Chomsky and Kimmel don’t enact the reception, comprehension, and interpretation of the other’s words at all. They claim that the other’s words are meaningless, empty, without purpose. Hardly even deserving to be called “words”.

If I’ve said anything at all in the few other posts on this blog, it’s that you should be extremely suspicious of such a claim. On the face of it, it’s a very strange thing to say. After all, it’s not as if Kanye’s career can’t be tracked, and it’s not as if Žižek fills his books with literal gibberish: there’s clearly a phenomenon there. If you don’t get them or fail to engage with them, that, it would seem, is a fact about you, isn’t it? It’s surely a great argumentative risk to assert otherwise, and a very good and strong case should be given.

In other words, it’s a pure argument from ignorance. Or maybe more amazingly, it’s ignorance as warrant. “I don’t follow it. I can’t find it. Therefore, it’s crap.” These are not attempts at understanding. They are claims that understanding is not just impossible but so obviously impossible that no effort need be made, that the other is so degenerate that the very attempt at understanding is otiose.

If the interlocutor’s words have no meaning that can be engaged with, then what are they? What differentiates them from mere sounds? The difference between human communication and the grunting of an animal or the babbling of a brook is intention and meaning. What’s the difference between the Chomsky/Kimmel characterization of Žižek and Kanye, and the stereotyping of the Other as a beast or an object? Or maybe the Chomsky/Kimmel position is that the other has tried to cheat us by dressing up mere sounds to resemble meaningful utterances, passing them off as communication when they aren’t. Either they’re creatures or things, with no value and deserving of no respect. Or they’re trying to rook us into thinking that the sounds they emit are meaningful, even though they are no more than the oozing of muck.

You can do whatever you want to someone — is it even a person? — you see that way. That’s not just the first step on a slippery slope: that’s the last step. The reduction has already been performed. You may now silence that other, that pseudo-interlocutor, as casually as you would a drip from a faucet.

At worst, if Žižek and Kanye are cheaters out for a fast buck, they are cynics. At worst, Chomsky and Kimmel are something very much more vile.

Carl Schmitt, the great theorist of fascism, would have known well what they are. He saw the very essence of political life this way: not as various kinds of disagreements and compromises, laws and revolts, conversations and protests, but as the field of an endless, ruthless war between factions, constituted by nothing more than their bare difference from one another.
Schmitt:

The specific political distinction to which political actions and motives can be reduced is that between friend and enemy….it is sufficient for [the enemy’s] nature that he is, in a specially intense way, existentially different and alien… To the enemy concept belongs the ever-present possibility of battle. [Schmitt, The Concept of the Political]

This sense of the political, because primordial and constitutive, is ineliminable. For the fascist Schmitt, it is the lie and the doom of democracy that it does not understand this, that it actually would presume to try to undo its true nature as a welter of enemies eternally at war, and substitute the democratic process for necessary bloodshed. Indeed, war itself becomes the prime, in fact only, political value. It is the allegiance to friends and the willingness to permanently silence enemies that forms the grandest ethical register.

“That’s a big claim,” you may be thinking. “You’re saying that Chomsky and Kimmel stand fundamentally against the kind of argumentative good faith that is required for a commitment to democracy, that they, at least in their ignorant dismissals of those who represent discourses even slightly different than their own, are becoming fascist.” Yeah. I really mean it.

To leap from your own failure of recognition to the denial that the other has any claim at all on your understanding is fundamentally a vicious, even fascist way to approach an interlocutor. It drives us away from the attempt at mutual understanding and into the arena of violence.

Kimmel is the late-night-TV, pop culture version of Chomsky. The risk is not that they are themselves becoming the vanguard of some kind of authoritarianism (that would be to overstate the matter) so much as that their sheer bulletheadedness represents a kind of official, celebrity and intellectual, permission for the culture to slide yet farther into a dull, stultifying, yet prodigiously anti-intellectual Colonel Blimp-ism.

Colonel Blimp isn’t just an old fool. Colonel Blimp is a nasty symptom. Colonel Blimp’s willful stupidity is the official culture’s inability to critique itself. Colonel Blimp is the intellectual apparatus, the mass culture, the ruling class, in radical decline. People who otherwise look like they ought to be able to think a thought have downed tools and retreated into a snarling, anti-humanist, verificationism.

You see it everywhere. Steven Pinker’s gloating, tin-eared, “Don’t Worry, We Won’t Hurt You” article, addressed to humanists, was one version. “We’re not the enemy, humanities,” he says. “We respect you: some of you even once did science, and using our new quantitative methods, you can again. Once upon a time, we were all one discipline, and we will be again, as soon as whatever makes you distinct from the sciences is extirpated. Which you shouldn’t mind, because the rest of what you do is meaningless noise anyway.” That’s not an extended olive-branch; that’s an aimed howitzer barrel. You will be assimilated. You will cease to exist.

As Colonel Blimp said, “We should insist on peace. Except, of course, in the event of war.”

Sorry, no. I’m going down swinging on the side of meaning. Even error, excess, and goofiness are preferable to Kimmel and Chomsky, those two chattering skulls.

Penny Arcade, Geek Culture, and Hegel’s “Beautiful Soul”

I’m assuming you already know about Penny Arcade (PA), PAX, and the dickwolves. If you don’t, here is a summary, and here is a chronology. As you can see, this is one of those internet social-justice kerfuffles that was bottled in vintage.

They’re at it again. Just to be clear, before we get into the dry philosophy, I think Krahulik is behaving like a pig. A very pointed and intelligent denunciation of his conduct can be found here [Penny Arcade and the Slow Murder of Satire, by MammonMachine].

I want to talk about something a little different. I’ve long been uncomfortable with geek culture, despite arguably being a geek, and I’ve been trying to understand why I feel that way. Geek culture has pathologies that are not wholly different from what infects most American male youth culture, but they appear in a peculiar way. I think Hegel can help us understand what’s going on.

There’s a section in Hegel’s Phenomenology of Spirit in which Hegel is describing how we develop our consciences by encountering other people.

Say you’ve done something wrong. You did something dumb; you should have known better; you hurt people. What do you do? You apologize.

But an apology is a strange sort of act. An apology isn’t a recompense; it’s merely a statement of recognition. “I realize I fucked up.” Why is such a small thing so important? Because it’s supposed to be universal. We are all sinners, and in an apology, we recognize that we too have much to be sorry for; we see ourselves in the apologetic wrongdoer. It’s a moment of human equality.

Yet it often doesn’t work out that way. Other people can be recalcitrant. There’s no rule that they have to accept our apology just because we offer it. Sometimes they don’t, and they stay mad. They even say that we “still don’t get it.” Maybe we don’t! They refuse the moment of equality implicit in our apology. We don’t get what we wanted. Hegel says:

…seeing this identity and giving this expression, he openly confesses himself to the other, and expects in like manner that the other, having in point of fact put itself on the same level, will respond in the same language, will therein give voice to this identity, and that thus the state of mutual recognition will be brought about.

…But the admission on the part of the one who is wicked, “I am so”, is not followed by a reply making a similar confession. This was not what that way of judging meant at all: far from it!

…By so doing the scene is changed. The one who made the confession sees himself thrust off, and takes the other to be in the wrong…

Now what? Rage and resentment. The one who confessed now feels they are the one who has really been wronged. “Ok, what I did may not have been strictly the best, but where do they get off still flaming me after I said I was sorry! They aren’t really any different from me! Who do they think they are?”

Sound familiar? It may even be that the one who now feels themselves wronged, after their apology was rejected, will retreat into feeling their original act was not even bad. Why? Because in the encounter with another person, a void opened up, a failure of understanding and forgiveness – that wasn’t what was supposed to happen. Instead, the other wronged me in return for my confession. There was no mutual expression of humility, no expression of common humanity, showing that our consciences were hardly the same to begin with. But that difference means theirs must have been broken all along! So other people are not the source of my moral development. Mutual understanding of conscience is not important for me to attempt.

The conscience, our own subjective feeling for morality, then changes its mind about the importance of listening to other people, and arrogates to itself the right to be the final authority:

Conscience, then, in its majestic sublimity above any specific law and every content of duty, puts whatever content it pleases into its knowledge and willing. It is moral genius and originality, which knows the inner voice of its immediate knowledge to be a voice divine.

For Hegel, conscience, when it reaches this stage, is uneducable. It has turned its back on others. It does not feel it has anything to learn from them.

I think you can see where I think Krahulik stands in this dialectic. He took his apology back, and said he was never wrong to begin with. He said he just should have kept quiet, not engaged. He said, in effect, that he never had anything to learn, that there was no point in his ever having listened.

Hegel calls this state “the beautiful soul.” Obviously, he doesn’t think much of it.

Why “beautiful”? That’s a little bit of a joke. It’s meant to evoke something like the narcissistic boho spirit of self-cultivation, to the exclusion of real-world engagement with others. The beautiful soul is sealed off from others. It’s not interested in, as Twitter and Tumblr social justice folks put it, shutting up and listening. All that would show is how dumb other people are. Instead, it gets all it needs from within. The beautiful soul’s own desire to express itself is its own law. (cf. Amanda Palmer.)

But – here’s the big problem. There isn’t anything in there. The beautiful soul has sealed itself off from the very people that would provide its sensibilities with content other than itself:

We see then, here, self-consciousness withdrawn into the inmost retreats of its being, with all externality, as such, gone and vanished from it […] an intuition where this ego is all that is essential, and all that exists.

The result is a mind that is, in a profound way, empty – empty of engagement with others, of what one gains from engagement with others. It’s like…a beautiful snowflake. Crystalline and pure and inhuman and small. Beautiful, in a way, but utterly impoverished.

Even if it wanted more, to know more, to understand what it is about other people it has failed to understand, the beautiful soul has cut itself off and therefore no longer knows where to look. Even when it looks outside itself, it begins to see only versions of itself: objects that are as cold, inhuman, hollow, and empty as it itself is. Even in a crowd of such empty, identical beings, it must be desperately lonely.

Its activity consists in yearning, which merely loses itself in becoming an unsubstantial shadowy object, and, rising above this loss and falling back on itself, finds itself merely as lost. In this transparent purity of its moments it becomes a sorrow-laden “beautiful soul”, as it is called; its light dims and dies within it, and it vanishes as a shapeless vapour dissolving into thin air.

Because its own self-assertion is the whole of its own law, when criticized, the beautiful soul retreats immediately into non-sequitur, abstract defenses of its right to speak. Obviously, it isn’t really invoking the majesty of the First Amendment (as everyone points out, free speech isn’t a claim against criticism, it’s a claim only against prior restraint) or any other political ideal; it’s invoking its own endless need for pouring out the depths of its empty self. Indeed, because the principle of its activity is its own right to scream out its lack of interiority, it becomes deranged:

Thus the “beautiful soul”, being conscious of this contradiction in its unreconciled immediacy, is unhinged, disordered, and runs to madness, wastes itself in yearning, and pines away in consumption.

The beautiful soul, it turns out, is kind of a dick. Not a sociopath or antisocial personality; those people actively enjoy causing pain and chaos. Just a dick. Someone who just isn’t interested in “getting it.” Someone who probably wants to see themselves as principled and justified, but whose principles are nothing more than assertions of their need to gratify themselves with their own forms of self-expression. That’s all that’s left for them.

I think this whole dialectic, this regression from a more mature conscience and consciousness to the “beautiful soul”, is emblematic of geek culture; I think it’s something that young male geeks in their “geekness” tend to do a lot (I mean, this is the smart kid’s version of being a dick; wordily, by being a snot). And I think it’s a repetition of something that happened before, in the life of many young male geeks, and in the ur-narrative of geekness itself.

What is the Ursprung of masculine geekness, besides being smart and good at fiddly tasks? Being bullied in school. And having girls turn you down – either in a moment of humiliating explicitness, or implicitly, in the regular order of things. Well, I was there myself, and it’s frustrating. It’s frustrating in a way that warps minds.

The beautiful soul exposes itself and expects to find commonality. Consciousness seeks for its fellows in others; it expects to hear, in response to the admission that it is itself unworthy, the answering assertion that we are all unworthy. When it does not get it, but receives continued denunciation, it retreats into an angry, yet sterile, self-enclosure, in which its own desire for self-expression at any price becomes the principle of its existence. Yet there is so very little to express, when the interior is empty of the voices of others.

Similarly. The young geek soul exposes itself in what it is already convinced is its brokenness. It assumes that even in this brokenness it is still no less than human. But the reaction is not one of welcome, but of revulsion – the riposte that other people are not broken at all, that it is just the geek who is unacceptable. Social and sexual life are withheld. Wasting itself in yearning, the geek soul turns inward.

And it gets angry.

Because what it wanted was only what it assumed was due any human insofar as they are seen to be human. Things one is owed simply because one is alive. Sociality. Sexuality. Belonging.

Owed. Sound like “the nice guy“? The nice guy is the beautiful soul in its sexual moment.

What do you do when you aren’t given what you are owed? Act out.

At last, our chance to be that asshole 16-year-old we couldn’t be the first time around because we were too busy getting jock locks and swirlies. Even if we really weren’t, we remember it that way, because that’s what it is to be a geek, and that’s why the world owes us an endless, consequence-free adolescence. Here’s to the crazy ones. Fuck you if you can’t take a joke.

This is such a nasty dialectic because, of course, the geek soul is right. It should have access to things like sociality and belonging and a sexual life. Humans need those things. To be denied them is as painful an experience as a human being can have. Geek youth is cruel. The geek ur-story is a tragic story. The turning-inward, the evolution of the geek soul into the beautiful soul, is not surprising. Probably it is inevitable.

But at the same time, it’s maddening and terrible. The older geek’s beautiful soul is one that feels violated by others’ righteous claims of conscience in the same way that the younger geek’s soul felt wounded by others’ cruelty and rejection. But those aren’t the same. The older geek’s soul hasn’t advanced enough to make the distinction that the rejection experienced in high school is not the same as the judgment it receives now. The geek soul retreated inward and so did not allow itself the experiences, the openness, needed to become the mature consciousness that could make such a distinction. The beautiful geek soul is stuck in a repetition compulsion. This repetition is what makes it such a dick.

Stop being a dick, geek culture. Grow up and stop being a dick.

Daddy, what’s Syria?

Steve Rattner (@SteveRattner), a former Obama adviser and rapacious Wall Street demogorgon, tweeted today that it’s wrong of Obama to seek Congressional approval for “strikes” in Syria because, well, I’m not sure why, except that it’s not decisive. “The President is our CEO,”* he tweeted.

As Doug Henwood (@DougHenwood) replied, “the elite is so done with democracy.”*

What’s amusing is that, while the Congressional Republicans have just as big an erection for war, if not a bigger one, nothing gets their loins pulsing like a chance to trip Obama, so they’ve declined to come back into session during recess so Obama can get their approval. So, no strikes for at least a little while. That’s an ironic reason.

Rattner also tweeted that the precedent of the Iraq war has “paralyzed”* the West. That sounds like today’s go-go, hyperlinked version of the old “Vietnam Syndrome.” (That was an awful disease in which both moral and prudential considerations were allowed to be brought to bear on the decision to go to war. It was horribly debilitating, and caused our National Resolve to bleed out.) Then we kicked Saddam Hussein’s ass – in 1991 – and we were cured. Yay!

But then Saddam gave us another dose after 2003. He was tricky that way.

Maybe it’s like Looney Tunes, in which you have to get hit on the head an even number of times to avoid amnesia. (Except, yeah, it’s an odd number of times in this case: the first touch of Saddam cured us, like removal of a case of the King’s Evil. Whatever.)

I miss the days of Vietnam Syndrome, actually. I’m nostalgic. Let’s bring back some of those moments.

TV Commercial (sorry, Youtube interstitial):

A father and his young son, standing in front of the National Iraq War Memorial, which doesn’t yet exist, but one day will have to. I favor an Ozymandias-style pair of vast and trunkless legs: Bush’s legs, from the “mission accomplished” aircraft carrier landing, complete with the big, stuffed crotch bulge in the flight-suit pants. Just nothing above that.

Child: “Daddy, what’s Iraq?” (Daddy looks nonplussed.)

Narrator: A question a child might ask, but not a childish question.

Child: Daddy, did we win the war in Iraq? (Daddy looks troubled.)

Narrator: With your payment, Time Life Books will rush you your first book in The Iraq Experience: “What the Fuck Were Bush and Cheney Thinking?” Another book will follow about every other month, including “The TV War Douchebags”, “Images of the War by the Journalists Who Were There” (that’s a blank book), and “The Mysterious Koans of Donald Rumsfeld”.

…I’d buy that.

Neurath’s Boat and the Righteous Bubble

I’m going to begin this blog with a few posts on its purpose, nature, and intended tone. Here is one.

On the Internet, there are two cultures – even going back to the days of the BBSes, there always were. One is loud, vituperative, and denunciatory. The other is shocked by the first.

Here is a recent example of the second culture – a Wall Street Journal article describing yet another study explaining why we’re so rude on the internet (anonymity, it’s always anonymity) and how that harms our polity. This has become a commonplace.

I’m not so sure.

The article describing the study was more interesting than the study. The protagonist, or sacrificial victim, of the piece is a poster called “ER Doc”, who gets flamed for offering an informed opinion on a list of dog-bite incidents by pit bull terriers. The WSJ reporter takes a dim view of ER Doc’s interlocutors:

Then a childhood pal of Ms. Bristol piped up with this: “Take it from an ER doctor… In 15 years of doing this I have yet to see a golden retriever bite that had to go to the operating room or killed its target.”

That unleashed a torrent. One person demanded to see the doctor’s “scientific research.” Another accused him of not bothering to confirm whether his patients were actually bitten by pit bulls. Someone else suggested he should “venture out of the ER” to see what was really going on.

“It was ridiculous,” says Ms. Bristol.

Hm. Frankly, most of those replies to ER Doc don’t even sound rude. They’re exactly the questions you’d ask if you were conducting any kind of academic inquiry into ER Doc’s conclusions. And that is precisely the style of inquiry held up as the ideal from which “internet rudeness” supposedly strays. Where’s your data? Do you have more than anecdotes? Do you even know the accepted definitions of terms (here, dog breeds)? Or have you not been listening, and expected to drop in and shut everyone up on the basis of a putatively impressive credential? That’s another way of being rude – both rude and paradigmatically irrational, when you’re arguing something incendiary (here, implying that some people’s pet dogs should be euthanized). I would expect such a response as a bare minimum, and it’s hard to see how it’s improper.

What we lack here is a good model of conversational expectation – virtually everyone who writes “incivility on the internet” articles would be horrified by even mild academic Q&A sessions. Every part of people’s claims is hashed over, often in raised voices. Academia is not a tea party, even when everyone is doing it properly and playing by the rules. The level of discourse at a tea party is not the model of reasoned discourse, nor how things are done when Rational People get together to Have Their Rational Discussions – those often get quite tense, and certainly ought to cut deeply into the muscle of what anyone says. There’s a troubling conflation of, on the one hand, tough questioning and passionate argument, which I think are not only fine but completely in keeping with the ideal of deliberative inquiry, and, on the other, verbal assault (threats, humiliations, vicious epithets, racism and misogyny and so on). I think most people, in practice, are pretty good at seeing the difference.

Except when the inquirer is after something we ourselves said – then it all feels pretty mean. Spinoza noted that in our fondness for our own opinions, we tend to identify them with our own selves, and when our opinions are attacked, we feel the attack bodily, as an attack on our own being. But that’s not how it is, and it’s precisely what a reasonable person, or at least a person trying to be reasonable, has to get past.

The claim “we are rude on the internet and we must stop doing that because we are fracturing into different universes of discourse and it is wrecking our democracy” – this is an easy claim to make, but I think it’s mostly false, definitely ahistorical, and ideological. The assumption that without calm amity, minds do not change is, I think, demonstrably untrue – I know that I’ve changed my mind both in amicable conversations and not-at-all amicable ones (bellyflopping in an argument in front of other people has a way of clarifying the mind). And we know that calm discussion has no special power to crack open and merge separated universes of discourse: as the Public Conversations Project‘s sessions on abortion found, long-term, calm, reasoned conversation over a divisive issue tended to re-assert social pressure for politeness, but it manifestly did not bring people’s opinions together or even open up common ground: opinions became even more polarized.

Jonathan Haidt’s book The Righteous Mind echoes the Wall Street Journal piece above. He thinks the internet has brought a historical break in our “national conversation”, which is qualitatively worse now, and it’s because we each live in self-reinforcing, filtered bubbles of opinion. We hang out online with only the like-minded; we consume media that only echoes what we already think. That’s true, but it raises a question: has there really been a time when human opinion was not formed mostly by a process of selection bias? Was there ever a time when our political discourse was not noisy, noisome, and scurrilous? I think even casual acquaintance with our history shows that this claim is not just wrong but absurd. (The names they called Jefferson!) I don’t think the level of political acrimony is unique even in American history, and the claim that discourse is too divisive now is one that’s too easy to make when social subalterns have just started gaining their own voices. That’s one reason I think The Righteous Mind ended up being a deeply conservative book (in the end, it was blue-staters who were really guilty of living in a bubble).

That’s not to say it isn’t a complete waste of time to argue with people who are Wrong on the Internet. It usually is. But it’s not because they’re loud, and it’s definitely not because “minds never change.” You can change a mind when you start from somewhere close to where it already is, not from the far side of some massive gulf you have to yell to be heard over. And minds move gradually, inch-by-inch. Opinion change is a nonrandom walk – a series of twitches and stumbles, really. I’ve had my mind changed more, and more fundamentally, by interacting with people who are like-minded, but not perfectly like-minded. I share their basic assumptions, I easily assume their sincerity and good faith, but precisely because of this I do not need to be spoken to by them in the kindest terms. I check myself, I think more, I read more, and I take a step. Then another step. And those steps change the nature of the intellectual community around me – it repopulates with different “like-minded” people as my mind changes. After a time, I find I am in a quite different place, with quite different people, and I no longer share the same assumptions as the people with whom I began. In so doing, I sail past the implicit barrier of Davidson’s model of interpretive charity in Neurath’s boat.

(You’ll want to know what that last bit means. The philosopher Donald Davidson said that, in argument, you will often find things in your opponent’s position that seem unreasonable or foolish or unfounded to you; it is your duty to reconstruct your opponent’s argument into the strongest case that you can, and then try to disprove that; in so doing you avoid arguing against straw men. Yet it has been noted by others (José Medina and Naomi Scheman, for example) that Davidson’s model has a flaw: you may be wrong in your reconstruction, because what seems to you the strongest version of their argument will rest on your own background assumptions, which may be wrong – and your opponent may have better ones. “Neurath’s boat” is a metaphor by the philosopher Otto Neurath about language: you rebuild the boat while you’re already at sea, so you can’t just start over and rebuild it all at once, because you’ll sink, but you can do it if you do it one little piece at a time (you use temporary extra pieces to keep water from rushing in). After a while you have a completely new boat, even if you never lay up and rebuild the boat from scratch. See?)
