Mark Sussman

Writer - Researcher - Teacher


Post-Quality Television

That TV is “good” now seems to be beyond dispute. No one need ask “Is there anything good on?” anymore, because we all know that there is something good waiting for us on our DVR, on a streaming service, for a la carte purchase, etc.  I don’t need to rehash the rise of the serial drama or the advent of the second Golden Age of Television or whatever you want to call it — plenty of other people have done that better than I can. But there is always someone telling me, as I’m sure there is always someone telling you, about some show you have never heard of but that you have to see. And, if you are like me, after the third or fourth time someone tells you you have to watch that show, you make a solemn pact with yourself never ever to watch that show, just out of sheer spite.

And yet sometimes, whether out of weakness, boredom, or genuine curiosity, I find myself watching a new series. And in watching these series, I’ve started to realize that my own judgments have become fuzzier, less definite. Some I have an immediate allergic reaction to and stop watching; some I continue to hate-watch, though I’ve stopped thinking they’re good, and I refuse to behave rationally, accept the sunk cost fallacy, and stop; some I continue to watch because I want to be part of the culture (i.e., memes — what’s up Game of Thrones); and others I actually enjoy and admire. I’ve often thought that, even with shows that I dislike or think are just straight-up bad, the overall quality of television is up. There are more shows now that seem like they are custom-made for someone with exactly my tastes: ambiguous, allusive, surreal, darkly funny, accommodating to pessimism and negativity, comfortable with silence, auteur-driven, and visually attractive. “Arty” or “pretentious,” take your pick.

But I frequently find myself watching a new, arty, expensive-looking series, asking myself, “Is this good?” and finding no way to answer the question. I had this experience most recently with two series: Netflix’s The OA and HBO’s The Young Pope. Both of them came to me as “must watch” shows, though for totally different reasons. The OA, I had heard and read, is a drama about loss and trauma, featuring nuanced performances from young, mostly unknown actors, with a quasi-mystical vibe to it. The Young Pope is a TV show called The Young Pope starring Jude Law as a young pope. The OA seemed to scream “prestige drama,” while The Young Pope looked like fun sexy garbage, replete with absurd stunt casting and a title so on-the-nose it’s almost subtle. And they are quite different shows. But after a few episodes, I noticed myself reacting in a similar way to both of them. I have no idea if they’re good or not. I have no idea if they are as smart as they want me to think they are, as deep as they want me to think they are, or as artful as they want me to think they are. More than that, I remain vaguely suspicious of these shows, suspecting that they’re compensating for some essential lack of ideas, intellectual heft, and existential import by using strategies that suggest ideas, intellectual heft, and existential import.

Guess who? It’s the Young Pope.

Some of this comes down to the ways the series use silence. Characters stare silently, look at each other without speaking, seem to contemplate ineffable mysteries. This is less true of The Young Pope, whose characters are perpetually explaining their own feelings and motivations to each other in yelly, improbable monologues. (See, for example, an enraged, cardiganed James Cromwell yelling, “I was supposed to be pope!” at a sulking Jude Law. Me too, dude!)

But The OA, at least the handful of episodes I made it through before giving up, is full of characters staring and contemplating, full of moments in which we, the audience, are meant to infer some deep, ambiguous process going on within the character, in which, in fact, the scene’s meaning and drama are often left up to us to produce. And The OA is not alone in this. Hulu’s The Path (which I kind of like) is rife with characters staring off into empty space while they struggle, silently and internally, and I’m willing to bet a number of other dramas use this technique as well. We’re meant, I think, to understand that the characters, like us, experience their turmoil within themselves, in their own heads and guts. Thus, no matter how mystical their premises or fantastical their themes, these shows all claim a kind of realism for themselves by portraying characters silently reflecting on their own experiences and senses of self as we silently watch them reflecting and ponder what it is they’re pondering while recognizing that we, too, ponder our own nature and experience it in just that way. It’s here, in silence, where characters’ three-dimensionality emerges, where they become “realistic,” and where these shows often implicitly make their claims for art, depth, and all of that. In other words, “quality.” (The other source of TV’s “realism” is, of course, its violence, especially its sexual violence, but that’s a topic for another time.)

There are different kinds of silence, with different qualities. The locus classicus for silent staring on TV is, of course, Mad Men, and I think that show exemplifies the good use of silence. Nary an episode went by where Don Draper wasn’t staring at some damn thing: a wall, a car, a window or whatever was outside it, himself in the mirror or maybe just the mirror itself. Sometimes the staring occasions an expository flashback, like the one where Don gets lost in a saucepan of boiling milk because it reminds him of his traumatic, Depression-era childhood. Usually, though, it’s staring that seems to serve no narrative purpose, nor is its cause or outcome clearly defined. But I want to praise Don Draper’s staring, because it is organically bound to the subject matter of Mad Men itself, and to Draper’s character, and to how other characters and we as an audience are meant to perceive him. Don is an enigma both to himself and to those around him. I think, at the series’s end, he remains enigmatic. We don’t know what he’s thinking, nobody around him knows what he’s thinking, and, most importantly, he might not even know what he’s thinking or why he’s thinking it. He remains cut off from himself, and his silence marks an impasse between his past and his present, one he repeatedly tries and fails to cross.

Don Draper staring.

This marks another problem for silence, and really for many forms of ambiguity these serial dramas trade on. If these moments of silence, nonaction, and ambiguity do finally coalesce into something meaningful, it happens over a long stretch of time. Mad Men had seven seasons to do it, which is far longer than most series last. But Mad Men also had unusually high production values, a famous obsession with historical accuracy, and very good dialogue that was often genuinely funny. (It also liked to play cat-and-mouse games with its die-hard viewers, as when the creators seemed to leave clues hinting that Megan Draper would be murdered by the Manson family in season six. She wasn’t.) All of that made the show worth watching week to week, even if you weren’t convinced that its narrative would pay off. Now it’s common to hear something like, “It starts off boring, but it starts to get really good in episode six when everything starts coming together.” Episode six??? Art sometimes demands great patience of its audience, but after absorbing six hours of anything, I may start convincing myself that it was worth it just so I don’t feel like a sucker. Stockholm Syndrome works in mysterious ways.

Which, I think, is why I often can’t tell whether or not these shows are actually good. They’ve gotten so good at engaging their audience’s capacity for ambiguity, postponement, and silence. There is a sense in which we are always waiting for a show to “get good” because these shows always hint toward the idea that they might, that the boredom and listlessness you’re experiencing are just the necessary prelude to a revelation that will recast the hours and hours and hours you’ve already invested as crucial steps in a satisfying aesthetic experience. It’s no surprise that so many of these shows take divine, mystical, or otherwise supernatural revelation as their explicit subject-matter: The Path, The Leftovers, The OA, The Young Pope, and, importantly, two shows that set the template for modern serial drama, Twin Peaks and Lost. Mr. Robot, a show that is in some sense about two very worldly concerns, technology and wealth, is also about revealing the hidden reality beneath ordinary perception. It is also absolutely full of staring, a fact not unrelated to how superlatively bad its second season was. Even Mad Men (which, if it’s not clear by now, I love) ends with Don attaining enlightenment in lotus position.

Part of me wants to say the seemingly endless stream of series that promise noumenal contact with some transcendent Truth as a narrative payoff (and then don’t or can’t deliver) is a product of the auteurism in television that followed on the success of The Sopranos, The Wire, Mad Men, and Breaking Bad (three of which actively resisted that kind of narrative trajectory, btw). Great television, the theory goes, like great cinema, literature, and visual art, comes from the mind of one great author, or “showrunner” if you insist on using the faux insider industry term. Our current fetish for “creators” (or **shudder** “creatives”) insists that great art is produced when great minds are given as much freedom as possible to do whatever they want. (This is, of course, true only in a few famous but anomalous cases.)


I think what our current TV situation shows us, though, is that when many, many people are given the chance to do what they want with an incredibly powerful medium, they simply end up reproducing tropes and themes that happen to signify “depth” rather than being deep, orchestrating pre-digested narratives, pre-circulated devices, and shopworn techniques that read as “arty” without actually saying much. Their shows are often beautifully shot and develop a “signature” visual style: Kubrickian one-point perspective in House of Cards, the simultaneously spacious and claustrophobic Vatican in The Young Pope, whatever that thing is in Mr. Robot where the characters’ heads sort of pop in from the bottom of the frame.

Typical composition from Mr. Robot.

But in most cases these signatures devolve into cliché, and visual style becomes a way of suggesting, like the religious iconography they so often reference, an encounter with meaning that transcends the material of the object itself. They remind me of “creative writing,” the sort of pieces that know that a man looking in a cracked mirror signifies “crisis of self” without needing to understand or communicate what such a crisis might actually feel like, or that suggesting someone is a “Christ-figure” confers, through some arcane transitive property, the weight of martyrdom without any actual suffering to support it. Such a technique relies on a reader’s willingness to “put in the work,” but often the “work” the diligent reader (or viewer) puts in is work the writer has failed to do himself, or perhaps doesn’t think it’s his job to do.

Such a predicament doesn’t mean that all of this television is “bad” rather than “good.” I think it suggests that the terms by which we judge quality have become obscure, that much of the new television we see, intentionally or not, works to evade the kind of judgments that could pin it down and find it wanting. The formal vocabulary of the new Golden Age of Television draws attention to itself as important, or potentially important, art, but it does so by relying on our receptiveness to its ambiguities, deferrals, and silences. In that sense, there can be no final judgment of good or bad, no real evaluation of the work. In the ’70s, Norman Lear’s “quality television” involved making the social issues of the day part of the explicit subject-matter of the shows he produced. In that sense, most serial dramas are also “quality television.” But in another sense, we are post-quality, because judgment has become not so much a matter of exercising your critical faculties as of deciding how long you will “stick with” a show before it either completes its run or you bail on it. Since there is always a possibility that a show will “pay off,” it can always claim a kind of importance for itself, one confirmed by the very fact that you, the diligent viewer, have sat and watched 10 hours of it already in expectation that “something” will happen.

I sound like someone who hates TV, but I don’t. I, for one, welcome our post-quality world. While I don’t think it necessarily makes for good art, it may serve another function, which is to offer a kind of therapeutic critical no-space. No value judgments necessary, no critical renderings possible, just the amniotic warmth of a narrative environment promising a final act we can take comfort in knowing will never come. If you ignore its need to be meaningful, television offers a zen-like retreat for people like me who lack the discipline for an actual zen retreat, or the interest in attending one. But this, you’ll say, is how we used to talk about television: empty calories, vapidity, it’ll rot your brain, the vast wasteland, and so on. Fine. Good. If you’re at all concerned about our current political environment, you feel as though your brain is in a vise, and every Times news alert that rattles your phone turns the screw a little tighter. The alerts call you to engage, get outraged, resist, and so on. But these demands are unsustainable. I want to flee from them, too. Between submitting to irrational authoritarianism on the one hand and the warring puritanisms of the “resistance” on the other, I’ll take the vast wasteland.

Trump’s False Choice

So Donald Trump claims that “millions” of votes for Hillary Clinton were the result of fraud.

He’s also suggesting that he might jail and/or deport flag burners, even though flag burning is protected speech under the First Amendment.

But is he “really” in the process of subverting the Constitution and delegitimizing the electoral process?

Or is he “actually” distracting us from his conflicts of interest, shady/illegal business practices, and so on?

This is essentially the shape of the debate right now. It seems to force anti-Trump folks to make a decision about how we’ll treat the things Trump says. Either we treat his tweets as miniature policy proposals or as little sideshow performances that shift public debate away from concrete legal violations. We’re meant to either take his proclamations “seriously” or else ignore them as a smokescreen.

But I think buying into the serious/distraction dichotomy in the first place is a mistake. It’s the same mistake Trump has goaded the media and the commentariat into making throughout the election. He’ll make an outrageous proclamation, half of his opponents will take him seriously, and the other half of his opponents will chide the first half for getting distracted from the “real” issues. At this point, Trump will hold a rally and point out how unfairly he’s being treated by the media, and how “they” don’t get that flag burning should be illegal. To which you can imagine a Trump crowd roaring in assent because a huge part of the country agrees with him.

The point is that the serious/distraction dichotomy doesn’t hold up. It’s a false choice. Buying into it only enables Trump to continue using liberal outrage to fuel his support. Trump isn’t “actually” saying he’ll subvert the Constitution or “actually” distracting people from his conflicts of interest. Or rather, he’s doing both. But he has the advantage of not yet being president, so he can continue to play this game without having to face actual consequences. While he’s holed up in D.C. and New York trying to sort out what his administration will look like, unable to hold rallies for the moment and unwilling to hold a press conference, he can continue to remind the voters who showed up for him at the polls why they voted for him.

The only thing to do is take the serious/distraction dichotomy for what it is: an illusion. Reject it.

Repetition and Understanding: Rancière’s The Ignorant Schoolmaster

Joseph Jacotot. Lithograph by A. Lemonnier after Hess. Credit: Wellcome Library, London (CC BY 4.0).

I’m reading Jacques Rancière’s The Ignorant Schoolmaster right now, and it’s a bit of a revelation. One of the things Rancière does that I’ve been trying to do is break down the distinction between concepts of “understanding” and those of “repetition.” In the educational context, we tend to think of “understanding” as the thing that happens when a student comprehends the logic of a given object (say, the German language) and is able to apply it to something else (they can write original, grammatically correct sentences in German). We think of “repetition” as what happens when a student memorizes a set of statements in the correct order and repeats them back, thus fooling us into thinking they have understood, when really they have only memorized and repeated. The student can repeat a grammatically correct German sentence that he has heard, but he can’t come up with his own, because he doesn’t “understand” German grammar. (Join the club, kid.)

You can sort of see this distinction dramatized in this Kids in the Hall sketch.


I’ve always thought there was something mysterious or fishy about the proposed distinction between understanding and repetition. When you get down to it, couldn’t you describe “understanding” as an iterable practice of minute, variously conjugated repetitions? Logic is abstract, but it follows rules. Doesn’t the application of rules imply the repetition or possible repetition of those rules? I’m getting into either John Searle territory or Jacques Derrida territory. But in the project I’m working on, I’ve found neither Searle’s “Chinese Room” nor Derrida’s “iterability” very convincing as ways of addressing a fundamental epistemological ambiguity between repetition and understanding. I’d be interested to know if there is any work in neuroscience that addresses this, though I could imagine a neuroscientist saying something like, “Well, everything in the brain is a pattern of more or less successful recall, so yeah, ‘understanding’ is just a complicated form of repetition.” That’s probably an offensive oversimplification, but you get what I’m saying.

Rancière has a different way of approaching things. He’s writing about Joseph Jacotot, a late-18th-, early-19th-century French educator, a guy who taught Flemish-speaking students to read and speak French, though he knew no Flemish at all and the students knew no French at all. He “taught” them by simply giving them each a bilingual edition of Télémaque and having them find the Flemish equivalent for each French word until they could translate it themselves. Did they “understand” French or were they simply learning to locate French words? Jacotot did no explication, no explaining, and yet the students learned French. Here’s one thing Rancière says about Jacotot and his students:

Without thinking about it, [Jacotot] had made [the students] discover this thing that he discovered with them: that all sentences, and consequently all the intelligences that produce them, are of the same nature. Understanding is never more than translating, that is, giving the equivalent of a text, but in no way its reason. There is nothing behind the written page, no false bottom that requires the work of an other intelligence, that of the explicator; no language of the master, no language of the language whose words and sentences are able to speak the reason of the words and sentences of a text. The Flemish students had furnished the proof: to speak about Télémaque they had at their disposition only the words of Télémaque (9-10).

Rancière’s reading of Jacotot suggests that “reason” and “understanding” are just the names we give to forms of repetition, of translating, of providing equivalences. There is nothing more to understanding “language” than understanding “words,” in other words. And once you learn what enough words mean, you can know a language. You might object and say, okay, but then whoever learns the language will merely be translating in their head. There will always be a two-step process, from French to Flemish. But for Rancière, there is already a process of translation going on, that of “the will to express,” which he equates with “[the will to] translate” (10). Once you think of language as something that has already been “translated” from thought, spurred on by the “will to express,” then the translation between one language and another in the mind becomes a matter of little epistemological import. It would be a matter of huge import if you wanted to, say, carry on a fluent conversation in another language, but not if you are asking “Is there a qualitative difference between translating by slowly looking up a word in a bilingual dictionary and translating in your head?” In Rancière’s way of thinking about things, the answer would be a firm “No.”

But clearly some people speak new languages better than others, acquire them faster than others, and so on. In Rancière’s thinking, this would seem to be only a matter of speed, not a matter of qualitative difference. When we use the unkind euphemism “slow” to describe someone who is “unintelligent,” Rancière might say, “Yes, precisely. He’s slow. And speed is the only thing that separates him from you and me. Not some qualitative mental difference.” He’s quite clear on this matter: “[the word understanding] alone throws a veil over everything: understanding is what the child cannot do without the explanations of a master — later, of as many masters as there are materials to understand” (6). Rancière sees the notion of “understanding” as a term conferred by power. Once we have a master’s blessing, we can say we “understand” a subject rather than just remember its salient elements. The further we penetrate into the concept, the more we find that understanding merely comprises finer and finer points of memorization, recall, and coordination. There is a difference of degree and not of kind. Yet the difference between one who understands and one who simply recalls is one of the most widespread ways that cultures have made the distinction between the educated mind and the ignorant mind, the scholar and the idiot, the civilized and the savage. “Understanding,” in this sense, is just a term that signifies and justifies the dominance of one over another.

The simplicity of Rancière’s analysis of understanding is seductive. In the work I’ve been doing on conceptions of African American epistemology in the nineteenth century, it is utterly in harmony with what I’ve read. White supremacists, including those who thought of themselves as liberals, argued that while people of African descent were “apprehensive,” they lacked “understanding.” In other words, they could learn rote skills quickly but could not engage in original thinking. What such an argument had going for it was unfalsifiability. If an African American seemed to understand something, it could be argued by anyone that she had simply memorized a set of facts or principles and mistook it for (or knowingly passed it off as) “understanding.” But in fact, it was not understanding, just recall, and so we needn’t be fooled into the idea that African Americans are the intellectual equals of whites. In fact, it’s quite an elegant way to deny that any person “understands” anything at all!

That’s material for a future post (and book). In the case of racialist discourses of black epistemology, it’s clear that all of these seemingly fine distinctions between “understanding” and “recall” are a bunch of racist hooey. But I wonder how far I’m willing to follow Rancière’s analysis. While its simplicity is appealing, and I felt a bit of an epiphanic shiver while reading it, something about it seems too neat. Would I be willing to follow through on the implications of this idea in my own teaching, take up a position of ignorance, and forgo the practice of explication I frequently engage in with my class? I do not think I would. Partially, that’s for institutional reasons: I don’t think my department chair would be too thrilled by it. Partially, it’s for chickenshit reasons: it would be so different from how I was taught, and from what I was taught teaching is, that I would be afraid to do it. And partially, of course, it’s for reasons of pleasure and ego: who doesn’t love standing up there and showing that they can take apart and reassemble a complex theoretical text, turn it one way or the other, and so on?

But of course, just because you wouldn’t adopt a theory as a lived principle doesn’t mean it isn’t pragmatically useful. Even distinctions whose logic has been dissolved by critique have a way of reconstituting themselves in lived experience. It doesn’t necessarily make us hypocrites if we theorize one way and act another, though it may sometimes. I suppose I’m trying to figure out how to acquire, or understand, or at least imitate, whatever act of judgment would allow me to make the right call.

Perhaps Overly Detailed Statement Regarding the Definitions, Effects, and Institutional Mores of Plagiarism

I’ve been working on a doc for my undergrads in Intro to Literary Studies and Intro to Writing about Lit that will explain why plagiarism is a big deal, why you shouldn’t do it, and why your teachers sometimes lose their marbles when they suspect you of it. There’s a bit at the end that goes beyond the usual tautological reasoning (“Don’t plagiarize because it’s wrong”) and gestures to the ways in which plagiarism affects teacher-student collaboration. Here’s a draft.
— — —

Perhaps Overly Detailed Statement Regarding the Definitions, Effects, and Institutional Mores of Plagiarism

by Mark Sussman

 

In this class, and likely in every class you will take at Hunter, you are expected to submit work that is wholly your own. You are also expected to demonstrate that you have mastered the material at hand, which means you will often be quoting and paraphrasing the work of experts. So, turn in work that is 100% original, but make sure that original work borrows from the work of other people. Hmmmmmm …

This seeming contradiction can make the rules of plagiarism and academic integrity sound confusing, if not downright impossible to follow. It can also obscure the rather complicated reasons plagiarism is treated so seriously, despite the myriad ways in which social media has made sharing, reposting, regramming, retweeting, and other forms of appropriation acceptable and normal.  But I am going to try to explain things as clearly as I can.

 

What is plagiarism?

The simplest definition of plagiarism is appropriating someone else’s writing or ideas without attributing them to the original author. The effect of this is to make it seem as though you are the originator of what are, in reality, someone else’s words or ideas. So, for example, if I write, “Othello shows us that, as T.S. Eliot wrote, ‘[N]othing dies harder than the desire to think well of oneself’” (244), I have attributed the quote and idea to their author and cited the source. Everything is fine. But if I write, “Othello shows us that nothing dies harder than the desire to think well of oneself,” I have committed plagiarism, because I took Eliot’s words and passed them off as my own.

 

What is originality?

When you hear your professors (at least your English professors) say they want you to produce “original” work, they mean “original” in a very specific sense. They mean that you should produce a piece of writing and analysis whose argument and thesis statement are the product of your own research, writing, and thought. All the writing in your essay should support that thesis statement and argument, which are original in the sense that you formulated them yourself after examining and analyzing the evidence at hand (the text, other scholars, etc.). They don’t mean that every word or idea in your essay has to be yours. Learning about what others have thought and said about the texts you study is a crucial part of writing about them in an informed manner. You are expected to read, cite, and quote from outside sources in order to learn what other writers and thinkers have said about them.

But your professors do ask that when you use someone else’s words or ideas, you give credit to the original source by using a standard system of citation (like MLA). At the undergraduate level, they don’t even ask that you argue something that no one has ever argued. They only ask that you come up with the argument on your own — if someone somewhere happens to have had the same thought and you don’t know about it, that is understandable in most cases. You’re all still learning how to do this; no one expects you to have comprehensive knowledge of your subject.

So essentially, all of the rules surrounding citation, attribution, and plagiarism are there to prevent you from doing one thing: taking credit for other people’s work, whether accidentally or purposefully. The reason style guidelines like MLA, APA, and Chicago are so intricate and infuriating, and the reason your professors get all worked up about them, is that they are central to making sure credit is given to people who earned it. Professional scholars dedicate their lives to producing new knowledge about the world, and it matters that they receive credit for their work.

 

Ok, but why is that important?

You may ask what difference this credit makes in the context of a college class. You’re not trying to “steal credit” for writing or ideas in a professional context, like a journalist who passes off someone else’s reporting as his own. By borrowing an elegant formulation or a slick analysis from someone else, you’re only trying to create a better essay, which is, after all, what your professor told you to do. So no harm, no foul.

No. That attitude misconstrues why your professors think citation and giving credit are so important. The reason they furrow their brows when you misplace a comma in your works cited and get unreasonably upset and prosecutorial when you borrow a few sentences from a website is that they are trying to train you to think of citation as a matter of ethics, as a matter of fairness and rightness. Failing to give proper credit in the proper way is, in the context of academic institutions, wrong in the same way that stealing money from your neighbor is wrong. In that sense, not citing a source is a categorically different error than, say, writing “its” when you mean “it’s” or messing up a plot point in Othello. From their perspective, failing to credit your sources looks like a failure of character.

 

This doesn’t sound like we’re talking about writing anymore … 

Like it or not, your English professors are trying to train you not only to be a certain kind of writer and thinker, but to be a certain kind of person, the kind of person who doesn’t steal from their academic neighbor and who looks down on anyone who would. Your professors will not really say this to you because, frankly, the idea that we’re trying to impose our own morals and character on you really weirds most of us out. The reasons for this are complicated, and I’m happy to go into them later. But trust me, it’s true. They want you to experience moral revulsion at the very suggestion of not citing your sources, just like they do. And when you don’t give credit where it’s due, your professor starts to ask themselves whether something is going on. They start to ask themselves if they have a thief on their hands. Not a “rule-breaker,” but a thief.

 

That sounds harsh.

It is. In my experience and that of most of my teacher friends, most plagiarism is accidental. Some plagiarism is intentional, but done out of desperation, fear, and anxiety. A very, very small amount of plagiarism is done in a calculating, sneaky, underhanded way. The problem is, all of those kinds of plagiarism look the same when you find them. When you’re confronted with a paper that contains plagiarism, you don’t know if you’re dealing with a) someone who simply doesn’t know the rules and has accidentally broken them, b) someone who is having real problems in the class, and perhaps in life, that can be addressed in an honest conversation, or c) a total sociopath.

At that point a wall of suspicion imposes itself between teacher and student. The suspected plagiarist’s behavior is dissected, his or her papers are examined with a fine-tooth comb, and a perceptible chill hovers over the teacher’s dealings with the student. Everything the student says and does is colored by the possibility that it might all be part of some elaborate con (English professors tend to be suspicious — it’s actually part of their training). You never really know if you’re dealing with an honest mistake or an attempt to deceive and manipulate.

So plagiarism is about your teachers’ feelings?

Yeah, kinda. There are reasons why plagiarism is a crucial issue for professional scholars, and why scholars and journalists who have been found to plagiarize in published work are essentially kicked out of the profession and shunned. Again, I’m happy to discuss that later. But in the context of the classroom, even the appearance of plagiarism, never mind flagrant, sociopathic theft, can fracture the one-on-one communication that’s necessary for teachers to really improve their students’ writing and work. You will simply learn more if there is a one-on-one component in your courses, and that is almost impossible to have when your teacher is constantly asking themselves if the sentences they are reading are yours at all. So if the appeal to ethics doesn’t do it for you, consider the quality of instruction you would like to get for the ever-increasing tuition you pay.

 

So let’s say you think I’m plagiarizing. What happens?

What happens is I call you into my office and point to what I think are instances of plagiarism. I ask you whether you admit that this is plagiarism or whether you have some explanation for why it looks like plagiarism is present. Then I refer the matter to Hunter College’s Academic Integrity Official, who will initiate a process that could end in a warning, expulsion from Hunter, or anything in between, depending on the severity of the offense. You can either officially admit to the accusation or contest it, in which case there will be a sort of hearing held to determine what will happen. You can read all about this on Hunter’s Academic Integrity website.

 

Ok. Got it. Don’t plagiarize. But I’m worried that I might accidentally plagiarize. How do I not do that?

  1. Keep track of your sources. You will probably accumulate many sources you would like to quote from. As you start incorporating quotations, and especially as you start paraphrasing, it will become surprisingly easy to lose track of what you thought of and wrote and what someone else thought of and wrote. Keep a doc that has only the material you’re getting from elsewhere and the citation information for that material so you can double-check.
  2. Cite as you go. Do not tell yourself you’ll insert in-text citations later because you’re on a roll, and you don’t want to stop writing to check a page number. Take a second to do it as you’re writing or you may forget.
  3. “Borrowing” language from a website without attribution is plagiarism. Taking language from any source (including a website) and changing around a few of the words to make it look slightly different but not citing it is most definitely plagiarism. It’s tempting, but don’t do it. It’s very easy to spot.
  4. Err on the side of caution. If you’re not sure if you should cite something or not, cite it. I’ll let you know if it’s something you don’t need to cite.
  5. If you have questions about how or whether to cite, ask me. I promise I will not be mad. In fact, I will be happy that you are taking these issues so seriously!

 

Butler, Speech, and the Campus

[Note: a slightly expanded version of this post is up at Souciant, titled “Looking for Judith Butler.” I’m keeping the post as-is for posterity’s sake.]

I really enjoyed Molly Fischer’s piece about Judith Butler for New York, but I think it misses something significant about Butler’s ongoing relevance. The piece ends with the suggestion that discourse about gender has moved beyond the performative theories Butler expounded in Gender Trouble. Paragraphs like this one convey the idea that Butler has triumphed, but also that she has been surpassed:

Isaac belongs to a generation for whom Butler is part of the canon. Today, it is possible to go online and read Judith Butler’s theory of gender performativity as explained with cats. There are Facebook pages like “Judith Butler Is My Homegirl.” Quotes from Gender Trouble are reliably reblogged on Tumblr. And yet, Maria Trumpler, director of Yale’s Office of LGBTQ Resources and a professor of women’s, gender, and sexuality studies, says that for the kids she sees at Yale today, 40 years after Butler was an undergraduate there, Gender Trouble is “really old-fashioned.” The last four years in particular have seen an enormous growth of student interest in identities “beyond the binary,” Trumpler says, like agender, bigender, genderqueer.

Fair enough. But Butler still remains wildly relevant on college campuses, particularly for undergraduates. Nathan Heller’s recent piece for the New Yorker and reports about campus protests make it clear that it’s Butler’s work on speech (in Excitable Speech) and assembly (in Notes Toward a Performative Theory of Assembly) that has the most relevance to campus life right now. In fact, I would say that, from the perspective of the present, Butler’s work as a theorist of gender looks like a special case of her broader work as a theorist of speech. It is difficult for me to read accounts of students calling the speech they hear on campus “violence” without thinking of Butler’s work after Gender Trouble.

Here, for example, is a passage from the introduction to Excitable Speech:

Understanding performativity as a renewable action without clear origin or end suggests that speech is finally constrained neither by its specific speaker nor its originating context. Not only defined by social context, such speech is also marked by its capacity to break with context. Thus, performativity has its own social temporality in which it remains enabled precisely by the contexts from which it breaks. This ambivalent structure at the heart of performativity implies that, within political discourse, the very terms of resistance and insurgency are spawned in part by the powers they oppose (which is not to say that the latter are reducible to the former or always already coopted by them in advance).

In other words, Butler is saying that when you “resist” dominant social forces by construing their hate speech (like racial slurs) as violence, you are actually participating in validating a model of language that can work against you as well. Butler uses the example of arguments about pornography, but we could just as easily look at arguments against gay marriage. We may scoff at a straight, married couple who says their religious rights are being infringed upon when two people of the same gender get married. But what they’re saying is that the political act that legitimizes gay marriages changes the terms of the institution of marriage without their consent, and so does injury to them in the same way that a slur or hate speech does injury.

Performativity, though it is often thought of as a tool of insurgent political analysis, has no political allegiances. I think this is the push-pull we see on campuses now, with some campus activists calling for protections from what they see as hate speech and others saying that such protections constitute a restriction on free speech, and thus a form of injury. Butler has spent a long time describing and theorizing this sort of structure, where, as she puts it, “language constitutes the subject in part through foreclosure, a kind of unofficial censorship or primary restriction in speech that constitutes the possibility of agency in speech.” In other words, what we think of as a freedom of speech, with all of the privileges of expression that implies, is only enabled by a tacit agreement not to speak about certain things or in certain ways.

Right now the nature of those certain things and certain ways is becoming more and more uncertain. The limits of speech are being tested on both the left and the right. They are tested on the left by campus activists who demand institutional protection from forms of speech they consider to be violence. They seek the power to punish people for certain kinds of hurtful language. Though Butler’s writings do not endorse those sorts of punitive measures (at least none that I can see; I’m not a Butler expert), it seems clear to me that the dissemination of her ideas has influenced these activists. From the right, those same forms of hurtful speech are becoming part of the political lingua franca. Utterances that would otherwise be called hate speech are drawn into a zone of acceptance that protects them from any plausible claim that they constitute a form of violence. Butler’s ideas, far from approaching comfortable retirement, need to be engaged now more than ever.

