Mark Sussman

Writer - Researcher - Teacher

Trump’s False Choice

So Donald Trump claims that “millions” of votes for Hillary Clinton were the result of fraud.

He’s also suggesting that he might jail and/or deport flag burners, even though flag burning is protected speech under the First Amendment.

But is he “really” in the process of subverting the Constitution and delegitimizing the electoral process?

Or is he “actually” distracting us from his conflicts of interest, shady/illegal business practices, and so on?

This is essentially the shape of the debate right now. It seems to force anti-Trump folks to make a decision about how we’ll treat the things Trump says. Either we treat his tweets as miniature policy proposals or as little sideshow performances that shift public debate away from concrete legal violations. We’re meant to either take his proclamations “seriously” or else ignore them as a smokescreen.

But I think buying into the serious/distraction dichotomy in the first place is a mistake. It’s the same mistake Trump has goaded the media and the commentariat into throughout the election. He’ll make an outrageous proclamation, half of his opponents will take him seriously, and the other half will chide the first half for getting distracted from the “real” issues. At this point, Trump will hold a rally and point out how unfairly he’s being treated by the media, and how “they” don’t get that flag burning should be illegal. To which you can imagine a Trump crowd roaring in assent, because a huge part of the country agrees with him.

The point is that the serious/distraction dichotomy doesn’t hold up. It’s a false choice. Buying into it only enables Trump to continue using liberal outrage to fuel his support. Trump isn’t “actually” saying he’ll subvert the Constitution or “actually” distracting people from his conflicts of interest. Or rather, he’s doing both. But he has the advantage of not yet being president, so he can continue to play this game without facing actual consequences. While he’s holed up in D.C. and New York trying to sort out what his administration will look like, unable to hold rallies for the moment and unwilling to hold a press conference, he can continue to remind the voters who showed up for him at the polls why they voted for him.

The only thing to do is take the serious/distraction dichotomy for what it is: an illusion. Reject it.

Repetition and Understanding: Rancière’s The Ignorant Schoolmaster

[Image: Joseph Jacotot. Lithograph by A. Lemonnier after Hess. Credit: Wellcome Library, London. CC BY 4.0.]

I’m reading Jacques Rancière’s The Ignorant Schoolmaster right now, and it’s a bit of a revelation. One of the things Rancière does that I’ve been trying to do is break down the distinction between concepts of “understanding” and those of “repetition.” In the educational context, we tend to think of “understanding” as the thing that happens when a student comprehends the logic of a given object (say, the German language) and is able to apply it to something else (they can write original, grammatically correct sentences in German). We think of “repetition” as what happens when a student memorizes a set of statements in the correct order and repeats them back, thus fooling us into thinking they have understood, when really they have only memorized and repeated. The student can repeat a grammatically correct German sentence that he has heard, but he can’t come up with his own, because he doesn’t “understand” German grammar. (Join the club, kid.)

You can sort of see this distinction dramatized in this Kids in the Hall sketch.


I’ve always thought there was something mysterious or fishy about the proposed distinction between understanding and repetition. When you get down to it, couldn’t you describe “understanding” as an iterable practice of minute, variously conjugated repetitions? Logic is abstract, but it follows rules. Doesn’t the application of rules imply the repetition or possible repetition of those rules? I’m getting into either John Searle territory or Jacques Derrida territory. But in the project I’m working on, I’ve found neither Searle’s “Chinese Room” nor Derrida’s “iterability” very convincing as ways of addressing a fundamental epistemological ambiguity between repetition and understanding. I’d be interested to know if there is any work in neuroscience that addresses this, though I could imagine a neuroscientist saying something like, “Well, everything in the brain is a pattern of more or less successful recall, so yeah, ‘understanding’ is just a complicated form of repetition.” That’s probably an offensive oversimplification, but you get what I’m saying.

Rancière has a different way of approaching things. He’s writing about Joseph Jacotot, a French educator of the late 18th and early 19th centuries who taught Flemish-speaking students to read and speak French, though he knew no Flemish at all and the students knew no French at all. He “taught” them by simply giving them each a bilingual edition of Télémaque and having them find the Flemish equivalent for each French word until they could translate the text themselves. Did they “understand” French, or were they simply learning to locate French words? Jacotot did no explication, no explaining, and yet the students learned French. Here’s one thing Rancière says about Jacotot and his students:

Without thinking about it, [Jacotot] had made [the students] discover this thing that he discovered with them: that all sentences, and consequently all the intelligences that produce them, are of the same nature. Understanding is never more than translating, that is, giving the equivalent of a text, but in no way its reason. There is nothing behind the written page, no false bottom that requires the work of an other intelligence, that of the explicator; no language of the master, no language of the language whose words and sentences are able to speak the reason of the words and sentences of a text. The Flemish students had furnished the proof: to speak about Télémaque they had at their disposition only the words of Télémaque (9-10).

Rancière’s reading of Jacotot suggests that “reason” and “understanding” are just the names we give to forms of repetition, of translating, of providing equivalences. There is nothing more to understanding “language” than understanding “words,” in other words. And once you learn what enough words mean, you can know a language. You might object and say, okay, but then whoever learns the language will merely be translating in their head. There will always be a two-step process, from French to Flemish. But for Rancière, there is already a process of translation going on, that of “the will to express,” which he equates with “[the will to] translate” (10). Once you think of language as something that has already been “translated” from thought, spurred on by the “will to express,” then the translation between one language and another in the mind becomes a matter of little epistemological import. It would be a matter of huge import if you wanted to, say, carry on a fluent conversation in another language, but not if you are asking “Is there a qualitative difference between translating by slowly looking up a word in a bilingual dictionary and translating in your head?” In Rancière’s way of thinking about things, the answer would be a firm “No.”

But clearly some people speak new languages better than others, acquire them faster than others, and so on. In Rancière’s thinking, this would seem to be only a matter of speed, not a matter of qualitative difference. When we use the unkind euphemism “slow” to describe someone who is “unintelligent,” Rancière might say, “Yes, precisely. He’s slow. And speed is the only thing that separates him from you and me. Not some qualitative mental difference.” He’s quite clear on this matter: “[the word understanding] alone throws a veil over everything: understanding is what the child cannot do without the explanations of a master — later, of as many masters as there are materials to understand” (6). Rancière sees the notion of “understanding” as a term conferred by power. Once we have a master’s blessing, we can say we “understand” a subject rather than just remember its salient elements. The further down we penetrate into the concept, the more we find that understanding merely comprises finer and finer points of memorization, recall, and coordination. There is a difference of degree, not of kind. Yet the difference between one who understands and one who simply recalls is one of the most widespread ways that cultures have made the distinction between the educated mind and the ignorant mind, the scholar and the idiot, the civilized and the savage. “Understanding,” in this sense, is just a term that signifies and justifies the dominance of one over another.

The simplicity of Rancière’s analysis of understanding is seductive. In the work I’ve been doing on conceptions of African American epistemology in the nineteenth century, Rancière’s analysis is utterly in harmony with what I’ve read. White supremacists, including those who thought of themselves as liberals, argued that while people of African descent were “apprehensive,” they lacked “understanding.” In other words, they could learn rote skills quickly but could not engage in original thinking. What such an argument had going for it was unfalsifiability. If an African American seemed to understand something, it could always be argued that she had simply memorized a set of facts or principles and mistook it for (or knowingly passed it off as) “understanding.” It was not understanding, the argument went, just recall, and so no one need be fooled into the idea that African Americans were the intellectual equals of whites. In fact, it’s quite an elegant way to deny that any person “understands” anything at all!

That’s material for a future post (and book). In the case of racialist discourses of black epistemology, it’s clear that all of these seemingly fine distinctions between “understanding” and “recall” are a bunch of racist hooey. But I wonder how far I’m willing to follow Rancière’s analysis. While its simplicity is appealing, and I felt a bit of an epiphanic shiver while reading it, something about it seems too neat. Would I be willing to follow through on the implications of this idea in my own teaching, take up a position of ignorance, and forgo the practice of explication I frequently engage in with my class? I do not think I would. Partially, that’s for institutional reasons: I don’t think my department chair would be too thrilled by it. Partially, it’s for chickenshit reasons: it would be so different from how I was taught, and from what I was taught teaching is, that I would be afraid to do it. And partially, of course, it’s for reasons of pleasure and ego: who doesn’t love standing up there and showing that they can take apart and reassemble a complex theoretical text, turn it one way or the other, and so on?

But of course, just because you wouldn’t adopt a theory as a lived principle doesn’t mean it isn’t pragmatically useful. Even distinctions whose logic has been dissolved by critique have a way of reconstituting themselves in lived experience. It doesn’t necessarily make us hypocrites if we theorize one way and act another, though it may sometimes. I suppose I’m trying to figure out how to acquire, or understand, or at least imitate, whatever act of judgment would allow me to make the right call.

Perhaps Overly Detailed Statement Regarding the Definitions, Effects, and Institutional Mores of Plagiarism

I’ve been working on a doc for my undergrads in Intro to Literary Studies and Intro to Writing about Lit that will explain why plagiarism is a big deal, why you shouldn’t do it, and why your teachers sometimes lose their marbles when they suspect you of it. There’s a bit at the end that goes beyond the usual tautological reasoning (“Don’t plagiarize because it’s wrong”) and gestures to the ways in which plagiarism affects teacher-student collaboration. Here’s a draft.
— — —

Perhaps Overly Detailed Statement Regarding the Definitions, Effects, and Institutional Mores of Plagiarism

by Mark Sussman


In this class, and likely in every class you will take at Hunter, you are expected to submit work that is wholly your own. You are also expected to demonstrate that you have mastered the material at hand, which means you will often be quoting and paraphrasing the work of experts. So, turn in work that is 100% original, but make sure that original work borrows from the work of other people. Hmmmmmm …

This seeming contradiction can make the rules of plagiarism and academic integrity sound confusing, if not downright impossible to follow. It can also obscure the rather complicated reasons plagiarism is treated so seriously, despite the myriad ways in which social media has made sharing, reposting, regramming, retweeting, and other forms of appropriation acceptable and normal. But I am going to try to explain things as clearly as I can.


What is plagiarism?

The simplest definition of plagiarism is appropriating someone else’s writing or ideas without attributing them to the original author. The effect of this is to make it seem as though you are the originator of what are, in reality, someone else’s words or ideas. So for example, if I write, “Othello shows us that, as T.S. Eliot wrote, ‘[N]othing dies harder than the desire to think well of oneself’” (244), I have attributed the quote and idea to their author and cited the source. Everything is fine. But if I write, “Othello shows us that nothing dies harder than the desire to think well of oneself,” I have committed plagiarism, because I took Eliot’s words and passed them off as my own.


What is originality?

When you hear your professors (at least your English professors) say they want you to produce “original” work, they mean “original” in a very specific sense. They mean that you should produce a piece of writing and analysis whose argument and thesis statement are the product of your own research, writing, and thought. All the writing in your essay should support that thesis statement and argument, which are original in the sense that you formulated them yourself after examining and analyzing the evidence at hand (the text, other scholars, etc.). They don’t mean that every word or idea in your essay has to be yours. Learning about what others have thought and said about the texts you study is a crucial part of writing about them in an informed manner. You are expected to read, cite, and quote from outside sources in order to learn what other writers and thinkers have said about them.

But your professors do ask that when you use someone else’s words or ideas, you give credit to the original source by using a standard system of citation (like MLA). At the undergraduate level, they don’t even ask that you argue something that no one has ever argued. They only ask that you come up with the argument on your own — if someone somewhere happens to have had the same thought and you don’t know about it, that is understandable in most cases. You’re all still learning how to do this; no one expects you to have comprehensive knowledge of your subject.

So essentially, all of the rules surrounding citation, attribution, and plagiarism are there to prevent you from doing one thing: taking credit for other people’s work, whether accidentally or purposefully. The reason style guidelines like MLA, APA, and Chicago are so intricate and infuriating, and the reason your professors get all worked up about them, is that they are central to making sure credit is given to the people who earned it. Professional scholars dedicate their lives to producing new knowledge about the world, and it matters that they receive credit for their work.


Ok, but why is that important?

You may ask what difference this credit makes in the context of a college class. You’re not trying to “steal credit” for writing or ideas in a professional context, like a journalist who passes off someone else’s reporting as his own. By borrowing an elegant formulation or a slick analysis from someone else, you’re only trying to create a better essay, which is, after all, what your professor told you to do. So no harm, no foul.

No. That attitude misconstrues why your professors think citation and giving credit are so important. The reason they furrow their brows when you misplace a comma in your works cited, and get unreasonably upset and prosecutorial when you borrow a few sentences from a website, is that they are trying to train you to think of citation as a matter of ethics, as a matter of fairness and rightness. Failing to give proper credit in the proper way is, in the context of academic institutions, wrong in the same way that stealing money from your neighbor is wrong. In that sense, not citing a source is a categorically different error from, say, writing “its” when you mean “it’s” or messing up a plot point in Othello. From their perspective, failing to credit your sources looks like a failure of character.


This doesn’t sound like we’re talking about writing anymore … 

Like it or not, your English professors are trying to train you not only to be a certain kind of writer and thinker, but to be a certain kind of person: the kind of person who doesn’t steal from their academic neighbor and who looks down on anyone who would. Your professors won’t usually say this to you outright because, frankly, the idea that we’re trying to impose our own morals and character on you weirds most of us out. The reasons for this are complicated, and I’m happy to go into them later. But trust me, it’s true. They want you to experience moral revulsion at the very suggestion of not citing your sources, just like they do. And when you don’t give credit where it’s due, your professor starts to ask themselves whether something is going on. They start to ask themselves if they have a thief on their hands. Not a “rule-breaker,” but a thief.


That sounds harsh.

It is. In my experience and that of most of my teacher friends, most plagiarism is accidental. Some plagiarism is intentional, but done out of desperation, fear, and anxiety. A very, very small amount of plagiarism is done in a calculating, sneaky, underhanded way. The problem is, all of those kinds of plagiarism look the same when you find them. When you’re confronted with a paper that contains plagiarism, you don’t know if you’re dealing with a) someone who simply doesn’t know the rules and has accidentally broken them, b) someone who is having real problems in the class, and perhaps in life, that can be addressed in an honest conversation, or c) a total sociopath.

At that point a wall of suspicion imposes itself between teacher and student. The suspected plagiarist’s behavior is dissected, his or her papers are examined with a fine-tooth comb, and a perceptible chill hovers over the teacher’s dealings with the student. Everything the student says and does is colored by the possibility that it might all be part of some elaborate con (English professors tend to be suspicious — it’s actually part of their training). You never really know if you’re dealing with an honest mistake or an attempt to deceive and manipulate.

So plagiarism is about your teachers’ feelings?

Yeah, kinda. There are reasons why plagiarism is a crucial issue for professional scholars, and why scholars and journalists who have been found to plagiarize in published work are essentially kicked out of the profession and shunned. Again, I’m happy to discuss that later. But in the context of the classroom, even the appearance of plagiarism, never mind flagrant, sociopathic theft, can fracture the one-on-one communication that’s necessary for teachers to really improve their students’ writing and work. You will simply learn more if there is a one-on-one component in your courses, and that is almost impossible to have when your teacher is constantly asking themselves if the sentences they are reading are yours at all. So if the appeal to ethics doesn’t do it for you, consider the quality of instruction you would like to get for the ever-increasing tuition you pay.


So let’s say you think I’m plagiarizing. What happens?

What happens is I call you into my office and point to what I think are instances of plagiarism. I ask whether you admit that this is plagiarism, or whether you have some explanation for why it looks as though plagiarism is present. Then I refer the matter to Hunter College’s Academic Integrity Official, who will initiate a process that could end in a warning, expulsion from Hunter, or anything in between, depending on the severity of the offense. You can either officially admit to the accusation or contest it, in which case a hearing of sorts will be held to determine what will happen. You can read all about this on Hunter’s Academic Integrity website.


Ok. Got it. Don’t plagiarize. But I’m worried that I might accidentally plagiarize. How do I not do that?

  1. Keep track of your sources. You will probably accumulate many sources you would like to quote from. As you start incorporating quotations, and especially as you start paraphrasing, it will become surprisingly easy to lose track of what you thought of and wrote and what someone else thought of and wrote. Keep a doc that has only the material you’re getting from elsewhere and the citation information for that material so you can double-check.
  2. Cite as you go. Do not tell yourself you’ll insert in-text citations later because you’re on a roll and don’t want to stop writing to check a page number. Take a second to do it as you’re writing, or you may forget.
  3. “Borrowing” language from a website without attribution is plagiarism. Taking language from any source (including a website) and changing around a few of the words to make it look slightly different but not citing it is most definitely plagiarism. It’s tempting, but don’t do it. It’s very easy to spot.
  4. Err on the side of caution. If you’re not sure if you should cite something or not, cite it. I’ll let you know if it’s something you don’t need to cite.
  5. If you have questions about how or whether to cite, ask me. I promise I will not be mad. In fact, I will be happy that you are taking these issues so seriously!


Butler, Speech, and the Campus

[Note: a slightly expanded version of this post is up at Souciant, titled “Looking for Judith Butler.” I’m keeping the post as-is for posterity’s sake.]

I really enjoyed Molly Fischer’s piece about Judith Butler for New York, but I think it misses something significant about Butler’s ongoing relevance. The piece ends with the suggestion that discourse about gender has moved beyond the performative theories Butler expounded in Gender Trouble. Paragraphs like this one convey the idea that Butler has triumphed, but also that she has been surpassed:

Isaac belongs to a generation for whom Butler is part of the canon. Today, it is possible to go online and read Judith Butler’s theory of gender performativity as explained with cats. There are Facebook pages like “Judith Butler Is My Homegirl.” Quotes from Gender Trouble are reliably reblogged on Tumblr. And yet, Maria Trumpler, director of Yale’s Office of LGBTQ Resources and a professor of women’s, gender, and sexuality studies, says that for the kids she sees at Yale today, 40 years after Butler was an undergraduate there, Gender Trouble is “really old-fashioned.” The last four years in particular have seen an enormous growth of student interest in identities “beyond the binary,” Trumpler says, like agender, bigender, genderqueer.

Fair enough. But Butler remains wildly relevant on college campuses, particularly for undergraduates. Nathan Heller’s recent piece for the New Yorker and reports about campus protests make it clear that it’s Butler’s work on speech (in Excitable Speech) and assembly (in Notes Toward a Performative Theory of Assembly) that has the most relevance to campus life right now. In fact I would say that, from the perspective of the present, Butler’s work as a theorist of gender looks like a special case of her broader work as a theorist of speech. It is difficult for me to read accounts of students calling the speech they hear on campus “violence” without thinking of Butler’s work after Gender Trouble.

Here, for example, is a passage from the introduction to Excitable Speech:

Understanding performativity as a renewable action without clear origin or end suggests that speech is finally constrained neither by its specific speaker nor its originating context. Not only defined by social context, such speech is also marked by its capacity to break with context. Thus, performativity has its own social temporality in which it remains enabled precisely by the contexts from which it breaks. This ambivalent structure at the heart of performativity implies that, within political discourse, the very terms of resistance and insurgency are spawned in part by the powers they oppose (which is not to say that the latter are reducible to the former or always already coopted by them in advance).

In other words, Butler is saying that when you “resist” dominant social forces by construing their hate speech (like racial slurs) as violence, you are actually participating in validating a model of language that can work against you as well. Butler uses the example of arguments about pornography, but we could just as easily look at arguments against gay marriage. We may scoff at a straight, married couple who say their religious rights are being infringed upon when two people of the same gender get married. But what they’re saying is that the political act that legitimizes gay marriages changes the terms of the institution of marriage without their consent, and so does injury to them in the same way that a slur or hate speech does.

Performativity, though it is often thought of as a tool of insurgent political analysis, has no political allegiances. I think this is the push-pull we see on campuses now, with some campus activists calling for protections from what they see as hate speech and others saying that such protections constitute a restriction on free speech, and thus a form of injury. Butler has spent a long time describing and theorizing this sort of structure, where, as she puts it, “language constitutes the subject in part through foreclosure, a kind of unofficial censorship or primary restriction in speech that constitutes the possibility of agency in speech.” In other words, what we think of as a freedom of speech, with all of the privileges of expression that implies, is only enabled by a tacit agreement not to speak about certain things or in certain ways.

Right now the nature of those certain things and certain ways is becoming more and more uncertain. The limits of speech are being tested on both the left and the right. They are tested on the left by campus activists who demand institutional protection from forms of speech they consider to be violence, and who seek the power to punish people for certain kinds of hurtful language. Though Butler’s writings do not endorse those sorts of punitive measures (at least none that I can see; I’m not a Butler expert), it seems clear to me that the dissemination of her ideas has influenced these activists. From the right, those same forms of hurtful speech are becoming part of the political lingua franca. Utterances that would otherwise be called hate speech are drawn into a zone of acceptance that protects them from any plausible claim that they constitute a form of violence. Butler’s ideas, far from approaching comfortable retirement, need to be engaged now more than ever.

Criticism in Doubt: A.O. Scott’s Better Living Through Criticism

It’s not clear why, exactly, A.O. Scott wrote Better Living Through Criticism in the first place. It seems like the sort of thing the lead film reviewer at the New York Times ought to do, I guess. But, as Leon Wieseltier’s dead-on and damning review makes clear, Scott doesn’t have any particular critical position, never mind a thesis, to defend. The book’s subtitle, How to Think about Art, Pleasure, Beauty, and Truth, turns out to be a joke: of course no one can tell you these things, or at least Scott doesn’t seem to think one should. He’s perfectly comfortable praising or trashing an individual film, as you can see in his reviews on any given day in the Times, but when it comes to stating his working principles as a critic, he retreats.

Scott finds that two major critical positions, elitism and populism, turn out to be the same: “The idea of critical authority and the ideal of common knowledge are not in competition, but are rather the antithetical expressions of a single impulse toward comprehensive judgment, toward an integral aesthetic experience, the achievement of which would eliminate the need for critics altogether.” This sounds nice, but its strategy is to nullify the difference between meaningful critical attitudes. And this discomfort with the disagreements that arise when critics take concrete, defensible positions on art is a problem for the rest of Scott’s book.

Better Living Through Criticism constantly performs this anxiety over disagreement, and particularly over being on the wrong side of it. Over time, Scott tells us, “You are guaranteed to be wrong”: proven wrong by changing tastes, which come to laud the movie you trashed in print; proven wrong by history, which now views that delightful comedy you reviewed favorably as a paragon of fascist cinema. Your opinion fossilizes, turns to dust, and so on. Times change, tastes change, and critics can only accept it. That’s good practical advice for a working critic, but terrible advice if you happen to be writing a book about the actual practice of criticism. What the reader wants from a book like Scott’s, what its subtitle seems to view as an ironic impossibility, is a book of ideas and methods that have the chutzpah to claim critical authority for themselves, to tell you why you should believe them, and to anticipate and eviscerate any argument that says it ain’t so.

But it’s almost as though Scott feels he doesn’t have the authority to make such claims — then why write the book? Several sections are written as dialogs between Scott and some interlocutor (modeled, Scott tells us, on David Foster Wallace’s “Brief Interviews with Hideous Men”), presumably his own nagging voice of self-criticism and doubt. In them he is forever offering qualifications and revisions, telling us, “Of course, we’re all determined beings, made by circumstances beyond our control. But we’re also changeable creatures, highly susceptible to the influence of accident, free agents with the power to invent ourselves.” The formation of critical judgment is equal parts nature and nurture, sure. But surely a book like this ought to offer some thoughts as to how the practice of criticism can identify and, to some extent, ground our unknowing suspension between the tastes we absorb by osmosis and those we cultivate, to make conscious and explicit the unacknowledged and implicit forms we struggle to see in the objects before us and which exert power over our thoughts and beliefs. Scott’s book opts out of the difficult work of hewing knowledge from uncertainty. Criticism begins in doubt, but the point is to overcome it, not enshrine it.
