
Lex maniac

Investigating changes in American English vocabulary over the last 40 years


nuclear option

(1990’s | journalese | “extreme (or drastic or desperate) measure,” “last resort,” “irrevocable decision”)

Gorblimey, china plates, I do believe this expression counts as a Briticism, at least in its contemporary sense. As far back as the sixties, the phrase was widely used in the U.S. to talk about energy generation or weapons. “The nuclear option,” on one hand, was what utility executives urged us not to neglect; on the other, it heralded the development of atomic weapons (for those that didn’t already have them), or different ways it might be possible to use them (for those that did). That was true at least until 1980. Not long after that, one started seeing the odd figurative use, but it was much more common in the U.K. than in the U.S. up until 2004 or so, when the phrase assumed the meaning we hear most commonly today: the majority party in the U.S. Senate taking away the filibuster, the last-ditch means for the minority of derailing legislation. Since then, it has crept into other spheres — trade negotiations, computer maintenance, even sports.

Semantically, I find the phrase surprisingly difficult to pin down. Is it an unanswerable blow? A point of no return? Overkill? Destroying the cause of a problem rather than simply solving it? Nuclear war can only be imagined in terms of all-encompassing destruction, persisting for centuries, at least, so the figurative use has a palette of apocalypses to choose from. But the slipperiness brings starkly into view the loss of force the expression has undergone. Inevitably, the “nuclear option” in negotiation or managing your players involves lower stakes than it does in its more literal senses, even though the literal meanings have not gone away. In time the terror will leach out of it. (Not that eliminating the filibuster isn’t terrifying, but you can’t compare it to thousands dead in a flash.) Grammatically, it’s more predictable: “nuclear option” always takes the definite article; you’ll never hear “a nuclear option,” though that sort of thing can always change.

Today’s Senate is pretty debased, but it has not quite gone the whole hog with the nuclear option, not yet, anyway. Oh, wait, it has, at least as the term was originally understood in 2004. Back then, Republicans threatened to eliminate the filibuster for judicial nominees. They didn’t go through with it, and two years later the Democrats took the Senate back. But in 2013, the Democrats did change the rules, exempting only Supreme Court nominees. In 2017, the Republicans finished the job, but so far they have stoutly resisted doing away with the filibuster against legislation, Trump’s uncomprehending dismay notwithstanding. The majority party in the Senate has been pointing fingers and making threats at least since the nineties, and minority power has diminished though not disappeared. Those who believe the minority party should not have the power to stop majority-supported legislation may tout government efficiency as their most powerful rationale. I say the last thing we need is efficient government, particularly at the federal level. Efficient governments are dictatorships. Our system was designed from the beginning to pose obstacles to rushed legislation that we’ll all regret later. The framers weren’t always right by any means, but we would do well to reflect on their wisdom here.


hive mind

(1990’s | science fiction | “zeitgeist,” “will of the people,” “conventional wisdom,” “groupthink”)

It all started with the bees. The British apiarist H.J. Wadey probably did not invent the term, but he used it in the 1940’s to describe the process by which lots and lots of bees, each of which has next to no mental capacity on its own, work together to create an intelligence that cannot be accounted for simply by adding up the microcapacities of each bee in the colony. There was something a bit mystical about it, and that transcendent quality was picked up by other authorities on bees. From there it became the property of science fiction writers, for whom the concept was tailor-made. In their hands, it could retain the sense of a purer intelligence emerging from the collective, or it could be a means of imposing zombie conformity and obedience on the rest of us. Science fiction runs to utopia or dystopia anyway, and the hive mind can be used to exemplify both, even in the same book. Still, the phrase had not become common outside of science-fiction circles; I doubt most Americans were familiar with it when I was young.

There the matter rested until the mid-1990’s, when the expression received the benefit of two cultural megaphones: first Kevin Kelly, a co-founder of Wired magazine, then the film Star Trek: First Contact. Kelly saw the hive mind as the result of amplifying human capability with computers (preferably implanted) to enhance our collective intelligence and create a larger force, human yet superhuman, that would change everything for the better — although individual drones might not fare so well. A year or two later, Star Trek: First Contact came out, featuring the Borg as the villain. The Borg had appeared on Star Trek: The Next Generation (the Patrick Stewart cast, which also populated the film), but this seems to have been the first time the phrase “hive mind” appeared in a Star Trek script. The Wired geeks and the Star Trek geeks between them formed a critical mass, and “hive mind” emerged from the sci-fi shadows and began to be encountered much more often. The onset of social media certainly didn’t slow the spread of the phrase; here again, the concept may be beneficent or noxious.

Kelly was an optimist, positing that the computer-aided hive mind would lead to a much greater capacity to solve human problems, whereas the Borg represents the dark side, gobbling up plucky individualists and producing numbing conformity while enriching its own hive mind with the contributions of other civilizations (sounds like imperialism, or the one percent). My sense is that today the pessimists are winning; “hive mind” has become a favored grenade to toss across the political divide, as stalwarts of the right and left accuse their opponents of stupidly parroting the sentiments put forth by their respective opinion makers. On this view, the hive mind is simply an overlord to which the bad guys pledge dumb fealty. (Of course, both left and right have their share of unreasoning myrmidons, but I wonder if they may be more characteristic of the right wing. “Dittohead” is no longer fashionable, but it’s worth noting that only right-wingers called themselves “dittoheads,” often with pride.) Even if the insulting use predominates right now, the more hopeful meaning may rise again. Take UNU, for example, which promises to help us “think together” by setting up a “swarm intelligence.”

Once you get away from the notion of a literal superbrain, the metaphorical uses of the expression come quickly into view. A single brain can itself be seen as a teeming hive mind, with neurons equivalent to drones, each doing its tiny duty but producing prodigious results by subordinating itself. (A more recent issue of Wired showcases an example of this sort of analogy, which has no counterpart for the queen bee.) More generally, the hive mind may serve as a symbol of our politics, in which millions combine to create and support a unified national government. (If that idealized picture makes you snicker, you’re not alone.) Our national motto, E pluribus unum, means “out of many, one,” and that’s not a bad summary of how a hive mind works. No single individual knows everything or can do it all by herself; the nation must muddle along making the most of whatever contributions it can get from hard-working citizens, who create the polity by banding together, at least partly unconsciously, to assert a collective will.

This post was inspired by the one and only lovely Liz from Queens, who nominated “hive mind” only last week, thereby sparing me the trouble of coming up with a new expression to write about. Thanks, baby!


national conversation

(1980’s | journalese? bureaucratese? | “lively debate,” “broad-based or general discussion,” “public discourse”)

Another expression we owe to the Reagan years, though not to Reagan himself. The first example in LexisNexis shows up at the beginning of 1984 from the pen of George Will (Google Books kicks up a few scattered uses before that), but Secretary of Education William J. Bennett seems to have debuted this phrase definitively; he took office in 1985 and used it frequently, and it started to show up more after that. The phrase was a commonplace by the end of Bill Clinton’s first year in office, once again associated with a particular official, NEH chair Sheldon Hackney. Originally it referred to discussion of a single event or issue — in Bennett’s case, education policy — and politicians and their spokespersons would call for a national conversation about their field of interest or responsibility. “Have a national conversation” replaces “focus our attention on” or “get people thinking about.” Another, more general, meaning soon crept in, and we may use the term, generally preceded by the definite article, to refer to what everyone is talking about, even if not directly related to traditional political questions. The latter meaning is harder to pin down, more mythical, than the original.

It won’t do to give the politicians too much credit. The rise of talk radio, cell phones, and the internet in the early 1990’s seemed to embody the national conversation, and the phrase slipped naturally into a kind of shorthand for these brave new means of mass communication. Talk radio, in particular, was quickly embraced as opening a window on our collective consciousness or oversoul, or something. Personally, I doubt that any of these innovations has really contributed much to improvements in civil discourse, other than making it easier for us to trade trivialities or blow off steam (the internet has made itself useful as a storehouse of information, which is not insignificant). But I admit the possibilities seemed powerful at the time.

“National dialogue” had already come into use before 1980, but it sounds too two-sided to take multiple perspectives into account. (“Dialogue” seems to be the preferred term when some kind of reconciliation is needed, reinforcing the notion that only two parties can be involved.) I suspect Bennett was looking for a word that sounded not only more diverse than “dialogue” but also less confrontational than “debate,” which for centuries served its turn as the word for the process of figuring out what the government should do next, or what its guiding principle in a particular area should be. “Debate” conjures up sweaty candidates at podiums, observing strict rules about how much they can talk and maybe even which matters they may address. Conversations are different. They’re not staged, or bound by formal procedures; they proceed naturally as people try to deal with the problem at hand. Conversation is not just the prerogative of politicians and their advisors, but something we can all do. This jibes nicely with one of our cherished stories about self-governance: just by leaning over the back fence, we can participate in our most urgent policy discussions and guide our leaders to the best, most democratic solution. The facade does become harder to maintain as the population grows, and “national conversation” may promise more widespread involvement than it can deliver.

If a welcoming, inclusive effect was what Bennett was after in 1985, we can judge that he was not particularly successful. Maybe that’s because the loss of rules and standards that accompanied the shift from debate to conversation forestalled any such result. The premise underlying political debate was a commitment to giving all sides a fair hearing, in the hopes that each position would be explained clearly and buttressed by the best available arguments. A good debate represents the various viewpoints well enough that observers may reach sensible, reliable conclusions. Conversation makes no such promise. It’s easy for a conversation to degenerate into a shouting match or vituperation, or simply two people talking past each other; the rules of debate that limit or prevent such failures don’t apply. It’s true that debates are often won by the best debater, rather than by the defender of the most cogent position, but conversations are even worse. The old standards required not only that each side be heard respectfully, but that participants acknowledge opposing sides of the argument. Now even that theoretical baseline has been lost. Conversation carries no obligation to listen to anyone else, and no means to compel it. The result is less efficiency and more clash.


gridlock

(1980’s | enginese | “traffic jam,” “logjam,” “deadlock,” “paralysis”)

I found a handful of doubtful cases in Google Books, but nothing that disproved the reigning explanation of the origin of “gridlock.” The story goes that two traffic engineers, Sam Schwartz and his partner Roy Cottam, invented a word for a nightmarish traffic jam — Manhattan’s street grid rendered completely impassable due to cars blocking every intersection for blocks around. (Maybe it should have been called “gridblock.”) New York has a transit strike every so often, and we had a big one in 1980. No subways and buses means more cars on the same streets means impossible traffic all over Midtown. Schwartz by that time was a city employee, and the word started to turn up regularly in the New York Times. (William Safire was an early partisan, using the word several times in his language column between 1980 and 1982; one lexicographer was watching it closely even in 1981.) It was thoroughly established within a few years. In the early days, it was used most often to talk about movement of motorized vehicles, but the word was used in discussions of politics as early as 1980, and quickly developed secondary senses in the realms of legislation (parties can’t agree on anything) and the judicial system (shortage of judges preventing cases from being resolved quickly). It can still be used to talk about traffic, but that sounds a little prosaic, now that the term is heard far more often in political discourse. Today, “gridlock” takes flight only when used to bash one’s political opponents as obstructionists, do-nothings, and filibusterers.

Schwartz did well by the coinage, anyway: he went on to write the wonderful “Gridlock Sam” traffic advice column for the New York Daily News — one of the few bright spots of the News in the mid-1990’s, as I recall — and he remains a respected commentator on traffic and transportation. That’s a full-time job in New York, and few are better at it.

The three uses mentioned above (traffic, politics, courts) constitute a relatively small number, considering how often the word appears. It has retained a narrow range with little spread into new applications. No one talks about “emotional gridlock,” or “office gridlock.” When it comes to traffic, the word denotes immobility due to overuse of the roads. But in political use — much more common these days — the immobility isn’t generally due to an oversupply of legislative proposals or debates; it’s more likely to arise from throwing sand in the gears. Classic gridlock isn’t willed. It just happens, because there’s nowhere for all the cars to go. But partisan gridlock is often the deliberate result of the efforts of a small group. There is a broadening of definition here, but not of application; the use of the word in politics caught on early and fast and now is omnipresent.

Legislative gridlock is always deplored, and no one ever speaks up for it. But gridlock is good. When the two parties disagree on how to achieve the shared goal of screwing most of the population, gridlock can keep things from getting worse too fast. It’s surprising how often American voters wind up with divided legislatures, or a partisan divide between the legislature and the executive. I think that’s partly because we understand instinctively that government can do a lot of damage in a hurry if the people don’t find ways to apply the brakes. (Witness the shitstorm of changes in North Carolina this year, when both houses of the legislature and the governor are all of the same party for the first time since Reconstruction.) Like pork-barrel spending, gridlock may not be so bad. The ruthlessly efficient government is the most dangerous because it is most likely to disregard the will of the people. Democratic governments need to slow down, hear from a lot of different sides, and throw bribes at voters to stay in power. That is one of the great premises of our political system: making it difficult to pass (or change) laws, because so many competing interests must be placated. Gridlock caused by partisan differences and failure to compromise is an important check on the legislature.


talking point

(journalese | “campaign promise,” “key issue”)

On this expression, I stand confused. Most dictionaries, including all three of my printed unabridged dictionaries, will tell you that this phrase means something like “selling point” or “persuasive zinger.” Most on-line dictionaries agree, although the Business English Dictionary sponsored by the Cambridge Dictionary Online gives a more complete picture of how we use the word nowadays. It’s not that that usage is extinct, but I do think it’s receding, even though most on-line dictionaries don’t say so even now — one expects printed volumes to lag behind, but these on-line editors today are just lazy, if you ask me. My prediction: in the next twenty years, “talking point” will lose the sense of “statement intended to convince,” or “kicker,” supplanted by a definition related but distinct: “item from a list of statements to reiterate.” A statement will be considered a talking point because it appears consistently in the speeches of a politician, or in commercials for similar products, or in public relations campaigns, whether it’s persuasive or not.

There’s an obvious connection to make here, and Wikipedia makes it. Those attempting to persuade others will invariably use the strongest, most convincing arguments in their favor, so when politicians use “talking point” to mean “item from a list,” it’s understood that the slogan has been carefully chosen to bolster the positions of the people who made the list. Granted. If political strategists, or jingle writers for that matter, were infallible, we would have a distinction without a difference. But they’re not, and a talking point can easily backfire, either because it’s misstated or because it’s misguided, causing voters to roll their eyes and make a mental note to vote for the other guy. It’s part of the job for each candidate to take apart the other candidate’s talking points and show why we shouldn’t allow ourselves to be bamboozled.

“Talking point” does have several other meanings. One is most commonly employed in diplomacy, meaning simply “agenda item.” Each side has a list of issues to raise, and if you’re going to have talks, you have talking points. (I don’t think you hear “talks” as much as you did in the seventies, when the word was used endlessly with reference to arms reduction negotiations with the Soviet Union.) The OED On-line offers “topic suitable for or inviting discussion or argument,” that is, anything worth bringing up in the first place. It seems also to mean “center of attention” occasionally. I ran across an instance in an essay on Rudolf Nureyev by distinguished critic Clive Barnes: “Before his escape he had been the talking point of the first Western season the Kirov Ballet of Leningrad had given in Paris. He had the kind of stardom that notoriety might enhance but could never create” (Life magazine, May 12, 1967). It’s using “point” to mean locus rather than proposition, and there’s no possible interpretation other than “cynosure.” Maybe it’s idiosyncratic, but I’ve seen a few other things like it. I would call it an unusual but legitimate variant meaning.

Random House Dictionary, so useful for its fearless dating of new entries to the American language, saddles the second decade of the twentieth century with the appearance of “talking point” (as “selling point”). The OED On-line cites Sinclair Lewis’s Babbitt (1922); the context suggests that the term is an example of obnoxious new business jargon. My guess is that this term did originate in advertising, although now it comes up most often in political journalism, if LexisNexis is anything to go by. It’s also noteworthy that the popular news site Talking Points Memo covers politics and nothing else.

Your “talking points” are what you repeat when you’re “on message.” In today’s politics, the humble talking point plays its part in the demoralization of voters and decay of debate alluded to in my entry on the latter phrase. Politicians are lavishly rewarded for sticking to the script — by their advisors, by the press, and often enough by voters — and excoriated for departing from it. Reducing political campaigns to the brute repetition of a few themes, be it ever so effective as an election strategy, negates our duty as citizens to pay attention and stay informed. Mere sloganeering can never give us the information we need to make intelligent decisions about our government.


unintended consequences

(late 1980’s | “Murphy’s Law,” “fly in the ointment,” “revenge of . . .”)

Is this phrase eligible? It’s a very old idea, and its status as a concept worthy of study goes back at least to the eighteenth century, Adam Smith’s invisible hand being a potent example. Sociologist Robert K. Merton wrote an influential essay titled “The Unanticipated Consequences of Purposive Social Action” in 1936. (Another variant is “unintended effects.”) “Unintended consequences” has become the accepted form, a step that had probably happened by 1980: “In Washington these days, one often hears references to ‘the unintended consequences of reform’” (New York Times, August 24, 1980). So the phrase cannot be said to have originated after 1970, but it has become more common; like many new(ish) terms, it has gone from relatively specialized to relatively demotic. (Before this phrase took root, we were more likely to express the same idea with a verb, like “backfire” or “come back to haunt.”)

On the one hand, the phrase is stultifyingly simple, almost impossible to misunderstand. But there’s a lot going on underneath that we need to attend to. Let’s parse this one out (pardon the expression) and see what we find.

Wikipedia breaks the concept down into three types: benefits, detriments, and perverse outcomes. Only the last requires any explanation: a perverse outcome occurs when a policy designed to ameliorate a specific condition makes it worse instead. (Detriments don’t necessarily have to do with the problem at hand; any unfortunate result might qualify.) In common discourse, I would venture that the second and third usages occur far more often, although it’s certainly true that unintended consequences are not always bad. Likewise, the phrase “law (or doctrine) of unintended consequences” is invoked when something has gone wrong, whether the doctrine is held to say merely that unintended consequences, good or bad, can’t be avoided, or that unintended consequences always bite you in the ass — both formulations may be found.

This phrase seems generally to have to do with politics, legislation, and public policy, much less often with private decisions and actions. I’m not sure why that should be, since surely we can fail to foresee what might follow from our personal dealings as easily as our solutions to more general problems. But that is something I’ve observed and that LexisNexis confirms pretty resoundingly. The restriction matters because it ironically frees up the phrase to take on multiple shadings, as set phrases in politics so often do. “Unintended consequences” means three different things. First, at the most innocent level, the term acknowledges that no action affects only its designated target, that the world is too complicated to permit us to see all the possible results of any action. Second, it is used as a way of saying the road to hell is paved with good intentions. More precisely, “unintended consequences” makes a great club to beat back any kind of social reform. Since every attempt of the government to change things for the better will make something somewhere worse, the government should never try to make conditions generally better. (It should, however, continue to make a small number of very rich people richer.) A single ill effect, even if it’s not fully attributable to the reform in question, outweighs any possible improvement. Finally, it’s a way for politicians to avoid taking responsibility for the effects of their laws, or for their defenders to suggest that the new and bigger problem created by the solution to the previous one could not reasonably have been foreseen. This usage is rarely innocent; often blatantly predictable consequences are dismissed as “unintended,” and it becomes a phrase to hide behind.

That’s why we need to keep in view the crucial difference between “unintended consequence” and “unforeseeable consequence.” Some effects really cannot be predicted, given our incomplete knowledge and the gaps in our understanding of how the world works. But that doesn’t mean we let everyone responsible off the hook every time something goes wrong. If a legislator just bats out laws banning whatever conduct is agitating Peoria this week, without taking time to ask, “what could go wrong if this is enacted?”, that’s a failure of democracy. Nobody’s perfect, but policymakers must consider a variety of possible consequences and take some steps to prevent the least desirable from afflicting us. If not, “unintended consequences” turns into a blank check. No matter what goes wrong, you plead that you didn’t mean it, and implicitly, that no one could have seen it coming. It’s a poor excuse and we can’t afford to accept it lightly.
