risk-averse
(1980’s | academese (economics) | “cautious”)
This expression carries a couple of odd dichotomies considering how straightforward it appears. The most obvious pertains to that which it modifies; either persons or corporate bodies — whatever the Supreme Court says, they’re not the same — may be risk-averse, though presumably the risk-aversion of a corporation is ultimately traceable to individuals, whether executives or independent shareholders. More interesting is the fact that risk-averseness may proceed from two entirely different kinds of experience. A conservative corporate board avoids sudden shifts and grand initiatives because they feel prosperous; there’s no incentive to rock the boat. Yet it is a tenet of pop psychology that those who have lived through times of deprivation are suspicious of all but the safest investments, and, in extreme cases, may refuse even to keep their money in banks. (Both sides have in common assets to protect; if you have nothing to lose, there’s no point in being risk-averse.) But then there’s an absent dichotomy that one might naively expect to find in an expression beloved of bankers: the distinction between sensible risk likely to pay off and a crazy scheme. The risk-averse will stay away from both, desiring only the steadiest and safest.
The expression comes out of the discipline of economics and was most used originally in finance, starting in the sixties and becoming commonplace by the eighties. Soon it came to be used often of politicians and lawyers. Among corporations, insurance companies attract it the most; their risk-aversity comes from a visceral understanding of actuarial tables. Yet any stodgy company merits the term. Slowly but surely over time, it has spread into other kinds of prose, with movie reviewers and even the odd sportswriter resorting to it nowadays. More kinds of writers use it to describe more kinds of people — it’s not just for stockholders any more. The point of the compound seems to be neutrality; it strives to avoid any imputation of prudence or cowardice, and largely does, as far as I can tell.
In a previous post I remarked on the curse of capitalism — if one guy works harder, everyone has to work harder — and risk-aversitude bears the seeds of a different manifestation of it. In competitive markets, each company watches the innovations of others like a hawk. When they succeed, the other competitors follow; when they fail, everyone else drops plans to do something similar. Television works this way, though maybe less so now, when there are so many networks (an obsolete word, I know). Any change — introducing a new character into a popular series, or a new show about a controversial subject — carries with it a chance that your audience will flee in terror. But if it pays off, your competitors take note and resolve to do the same damn thing, backed up by shareholders who noticed that it made big profits for the other guy. Within a season or two, everyone is sick of the no-longer new gambit, and most of the imitators have made no headway. Whereupon they lose advertisers, another risk-averse group famously shy of causing offense, taking the money and running at the first sign of any immoral or objectionable acts that might result in lost market share. (Bill O’Reilly is only the latest in a very long line of such embarrassments.) Sometimes, what looks safe turns out to be dangerous. Risk avoidance, like any other strategy, is subject to misuse born of misunderstanding or bad timing, whether by the humblest investor or the loftiest board of directors.
hard-wired
(1980’s | computerese? | “innate,” “(pre-)programmed,” “fixed,” “unalterable”)
The hard-wired smoke detector was already around in 1980; in that sense the term has not changed meaning since. “Hard-wired” meant connected directly to the building’s electrical system, meaning it was not powered by batteries, meaning that it would not infallibly begin making horrible chirping noises one morning at 3:00 and resist every sleep-fogged effort to silence it. A hard-wired telephone was similar in that it was harder to disconnect than the standard model you plug into a wall jack (already common in my youth, though far from universal). The cord connected to the system inside the wall rather than on the outside. Cable television might be hard-wired in that the cables connected to the source physically entered your house and attached themselves to a television set. Computer scientists had been using the term before that, generally to mean something like “automatic” or “built-in” — the only way to change it is to make a physical alteration to part of the equipment — and it remained firmly ensconced in the technical realm until the eighties. That’s when “hard-wired” became more visible, as computer jargon was becoming very hip. (PCMAG offers a current set of computer-related definitions.) In computer lingo, “hard-wired” came to mean “part of the hardware,” so “soft-wired” had to follow to describe a capability or process provided by software.
My father, erstwhile electrical engineer, pointed out that in his world, “hard-wired” was the opposite of “programmable.” In other words, the hard-wired feature did what it did no matter what; it couldn’t be changed simply by revising the code. Yet you don’t have to be too careless to equate “hard-wired” with “programmed” (see above) in the sense of predetermined. It’s not contradictory if you substitute “re-programmable” for “programmable,” but that requires an unusual level of precision, even for a techie. Every now and then you find odd little synonym-antonym confusions like that.
Still in wide technical use, this expression has reached its zenith in the soft sciences, in which it is commonly used to mean “part of one’s make-up,” with regard to instincts, reflexes, and basic capacities (bipedal walking, language, etc.), and more dubiously to describe less elemental manifestations such as behavior, attitude, or world-view. “Hard-wired” is not a technical term in hard sciences such as genetics or neurology. The usefulness of the expression is open to question: one team of psychologists noted, “The term ‘hard-wired’ has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression . . . remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression” (citation). Scientists may sniff at the term as used in pop psychology, but it does make for easy shorthand and probably won’t go away any time soon.
The reason we take so easily to applying the term “hard-wired” to the brain is that the computer, as developed over the last fifty years, forms the most comprehensive map yet of the workings of our minds. A contributing reason is the very common, casual linking of brain activity with electricity, as in referring to one’s “wiring” — even though one may also refer to one’s “chemistry” to explain mental quirks, probably a superior explanation. Watching a computer “think” helps us understand how our brains work, or maybe it just misleads us, causing us to disregard our own observations in order to define our own mentation with reference to the computer’s processing. There are obvious connections and obvious divergences; surely any device we concoct must reflect the workings of our own minds. But computers aren’t just for playing solitaire, calculating your tax refund, or running a supercollider. They serve a humanistic function by giving us new ways to think about the old ways we think.
bring to the table
(1980’s | businese | “have to offer,” “start out with”)
What one brings to the table by definition benefits the party already there. It is a positive term, rarely used ironically, indicating qualities that will improve an existing situation or resolve a problem. In a job interview, it’s the thing that makes you desirable. Among athletes, it’s what will make the team into a winner. In diplomacy, it’s a bargaining chip that helps move the process along. Generally, it’s what you can do to help. There was a time when it might connote baggage as well as benefit; what you brought to the table was simply what you had, good or bad. But since 1980 or so, it has taken on the favorable connotation exclusively. The phrase arose in business and government; nowadays athletes also use it a lot. To my ear at least, when a phrase becomes popular among athletes, it has stepped irrevocably over the border into cliché country. I’m not exactly sure why, but I think it has to do with the fact that professional sports figures are quick to adopt new expressions from each other and use them frequently thereafter, rarely with any imagination or creativity.
You have to keep your eye on the table, because idioms that rely on that word come from different places. “Bring to the table” calls to mind negotiation: the big table everyone sits around to hammer out an agreement. “Everything on the table” almost certainly comes out of gambling — the moment of showing your hand. “Seat at the table” could come from either, or from the dining room. To get anywhere at any table, a seat is the minimum requirement. Waiters bring things to the table all the time, but that sort of pig-headed literal-mindedness doesn’t get the blog written. In all these expressions, the table by now is purely metaphorical; when an actual table is involved, we understand it to be a play on words.
There’s a certain kind of new expression that develops a settled usage even though it is not particularly distinctive and could occur in everyday conversation without any reference to the specialized meaning. That description is a little vague, so let me offer some examples: “at the end of the day,” “be careful out there,” “do the math,” “don’t even think about it,” “good luck with that,” “I’ll shut up now,” “in a good place,” “play well with others,” “smartest guy in the room,” “what’s your point?” All of these expressions have in common an ordinariness, almost a triviality, that allows us to notice, if we think about it, that they could just as well have no meaning beyond that carried by the word string itself. And yet, when we hear such phrases, we grasp an extra dimension, so that even if the sense of the expression is not much different from the literal sense of the words, we know we are hearing a distinct expression. There must be a process that allows such utterances to transmogrify into idioms, but I don’t understand it. Is there any way to predict that “I’ll shut up now” would take on a universe of connotation while “I’ll go to the store” (so far) has not?
claw back
(1980’s | bureaucratese? legalese? financese? | “recoup,” “recover”)
No longer the sole property of sportswriters, this noun-verb complex has invaded the financial pages and legal journals in force. When I was young, you clawed your way back into a contest through determination and effort, not quitting until the game was on the line and you had a chance to win. It didn’t have to be a single game; it could happen over the course of a season, as in a baseball team clawing its way back into the pennant race. It might be used in the context of an individual sport like tennis or golf, but I think it more often went with team sports. In the business world, you might claw your way to the top, but you don’t claw back your way to the top — though you might claw your way back to the top. There’s something ruthless about clawing when people do it; it requires unreasoning vigor, like a jungle cat, blindly fighting its way forward as long as it can move.
In the late seventies, the U.S. began imposing treble (i.e., threefold) damages on defendants who lost certain kinds of civil suits. The U.K. responded by passing a law of its own that gave a British person or corporation the right to recover the portion of the total damages that was not actually compensatory (in other words, the part piled on once actual damages had been multiplied). In both the British and American press, this was widely referred to as a “clawback provision.” The expression was much more common in the British, Canadian, and Australian press for at least a decade thereafter, and it is indubitably a Briticism.
My impression is that the expression refers mainly to something governments do, as in the Bernie Madoff case, but a corporation can do it, too; take Wells Fargo’s repossession of stock from disgraced executives in the wake of a banking scandal. I suppose that a business partner could claw back money that another partner had misused, but for the most part it seems to be something an institution does. Clawbacks normally occur when assets have been stolen or used illegitimately; when you hear the word, you can be pretty sure that there was some funny business that has been found out, and a governing body, private or public, is doing something about it. (That isn’t always true; for example, when the British government was privatizing public industries in the eighties, they decreed that a certain number of shares had to be available to British investors. In some cases, that meant “clawing back” shares bought by foreigners to make sure enough shares were available.) The government generally needs some kind of judicial ruling, but a corporation needs no more than the approval of the directors.
In truth, the new expression here is “clawback” (n.) since “claw back” (v.) has been a permissible construction for a long time. (As we saw above, “clawback” also serves as an adjective. I hope I am cold in my grave before “clawbackly” becomes standard English.) But its present sense seems to have arisen around the same time, and I wouldn’t want to state with certainty that one preceded the other, though I would guess the verb came first. It has never left legal and political contexts, or spread outward from them. Law and justice must have their own language.
man cave
(2000’s | advertese | “den”)
The evidence strongly suggests that man-caves are the creation of marketers, despite visible traces of the expression before the mid-aughts, which is when it starts turning up in bulk in LexisNexis. The phrasing likely owes a debt to the author of “Men are from Mars, Women are from Venus” (1992), John Gray. While he did not, as far as I can tell, ever use “man cave” himself, he used the two words in close proximity, notably in the apothegms “Never go into a man’s cave or you will be burned by the dragon!” and “Much unnecessary conflict has resulted from a woman following a man into his cave.” In other words, let the old grouch suck his thumb and fiddle with his TV or his train set for a while. He’ll come out and make nice eventually. And if he doesn’t, it’ll be your fault. Gray’s biases aside, he was influential, and today’s more compact phrasing may claim his as an ancestor. Actually, the first use I found in LexisNexis is not due to Gray but to a Canadian columnist writing about house floorplans; she proposed that the basement be renamed “man cave,” because that is where men go to get away from their women. (She had in mind a damp, cobwebbed basement, not a home entertainment center. “Cave” is the French word for basement, so the use of “cave” is more intuitive in Canada than here.) Was author Joanne Lovering an early adopter or ahead of the curve? (Or ahead of the cave!)
But when “man cave” started showing up in quantity, it was purveyed by Maytag, of all corporations, which marketed a product called SkyBox, a vending machine for soda or beer that you could install right in your very own home. Fred Lowery, the director of Maytag’s “strategic initiatives group,” noted that “every guy would like to carve out his own little place in his home. Internally, we call it the man cave. And lots of guys, at some point, would like a vending machine in their man cave” (January 29, 2004). There you have it. Very soon, real estate agents began touting the things, sports promoters jumped on board, and it became a proper fad. No man cave was complete without a big-screen television and a sofa — video game consoles and sports-related items were also popular — and if not your very own vending machine, at least a dorm refrigerator, maybe even a full bar. What you won’t find is a workbench. The man’s retreat in my youth was likely to involve tools and at least the possibility of repair or construction. A few men still favor that, but these days it’s more about swilling beer while endless hours of sports unroll before your glazed eyes. Well, not really; what it’s really about is male bonding or just having a place to get away from your woman. The corresponding “woman cave” has not made much headway, a few sightings in the press notwithstanding, but all the ladies have to do is wait; sooner or later some savvy marketer will attract huge sums convincing women they need their own gender-specific refuges.
“Cave” is an interesting word to use here; to my mind it calls up two different associations. First, of course, the caveman: brutal and self-reliant (actually, cavemen were much less self-reliant than we are). Primitive, crude, and therefore manly, the caveman lords it over his woman and slays giant beasts. Just what we all want to be, right? The second association with “cave” is a dangerous, unpleasant place where no sensible woman would set foot to begin with. They’re dark and treacherous, lairs of wild animals, drifters, or lunatics. Of course, that’s what he wants you to think, ladies. He has a giant-screen TV in there — how dangerous can it be? Just don’t get burned.
Why has “man” become such a common prefix in compound nouns since the dawn of the new millennium? Nobody says “man about town” or “man alive!” any more, but you can’t get away from “man-hug,” “man-bun,” “man-boobs.” “Man cave” predates some of these, though “man-boobs” dates back to 2003, according to Urban Dictionary. Is it a simple matter of dumbing down, the word “male” having become too complicated for us cavemen? Is it a wistful attempt to recover a lost sense of masculinity by reverting to the simpler (and therefore more primitive) term? Is it an attempt to express solidarity? “Man-splaining” and “man-spreading” go the other way, of course, used by women in solidarity, not men.
You’d be surprised at how many meanings this word has — anyway, I was. They’re all related to the notion of a strap (going back to Middle French and Old High German), and that association holds true even in these latter days. A hundred years ago, its primary meanings were: a strong rope used to help secure ship’s rigging, cord used to fire a cannon, strip of leather used to hold a snowshoe (for example) together, and another nautical reference, which evolved directly into today’s use of the word: the cord a sailor hung his knife on so he could carry it around his neck. “Lanyard” has a few other miscellaneous meanings as well: chinstrap (on a hat), summer camp favorite (the lanyard as craft project seems to date back at least to the middle of the twentieth century), what the referee wears a whistle on (back to the sailors again), military decoration (worn on the shoulder), part of a safety harness. When used to help construction workers secure their tools, lanyards aren’t just for necks any more; they can attach to shoulder or wrist as well. By and large, the old meanings are still active, and probably no less common than they ever were. Around them has sprung up a new field that calls on lucrative forces of the Zeitgeist like security, fashion, and commerce, commerce, commerce. Now we have Lanyards USA, Lanyards Tomorrow, and CustomLanyard.net.
If the old meanings are still around and there aren’t any new ones, why an entry? Just my sense that the word has become vastly more popular. It may have meant a lot of things in its storied past, but it always had a specialized air about it. Nowadays, though, you hear it everywhere — everybody from lowly janitors to Super Bowl spectators wears one. And it’s being applied in ways it never was before. Now “lanyard” is what you call the cord or chain you hang your glasses on around your neck; in my youth, plenty of people wore their glasses around their necks, but not on lanyards. It was always true that the lanyard, whatever it denoted, had a strongly utilitarian cast. But not any more; lanyards still serve everyday functions, but they also should match your clothes or sparkle or advertise or say something interesting. An accessory at the very least, potentially more.
The lanyard revolution is more than anything a consequence of our efforts to keep ourselves safe, which has made us ID-happy. Why do you need a lanyard at the Super Bowl, or to get into your office? So you can display your credentials and prove that you belong there. Sure, people carry keys and other household objects around their necks, too, but your standard lanyard nowadays comes with the clear plastic ID-holder, so you never have to dig out your card to show the guard. In the seventies, members of a few professions were using lanyards for that purpose, but now almost any public employee and lots of private ones wear them as a matter of course. It’s a tacit acknowledgment that we have accepted increasing restrictions on our movements in hopes of preventing, or at least limiting, mayhem and bloodshed. Lanyards are an emblem of that loss of freedom, another component of the uniform that the wealthy, ever security-conscious (and for good reason), force on the masses. Adding insult to injury, the bosses and big money want us to regard the badge of servitude as just one more consumer good. If they succeed, we lose again.
latchkey kid
(1980’s | therapese?)
Also latchkey child, though that wording seems almost archaic now. Some sources date the expression to the 19th century, but it’s probably later. Random House assigns an origin between 1940 and 1945, and Dorothy Zietz in “Child Welfare: Principles and Methods” (Wiley, 1959) cites not only “latchkey child” but “eight-hour orphan” and “dayshift orphan” as synonyms. Zietz points to “’emergency’ day care programs which became prominent during World War II [that] are now regarded as part of the community’s basic child welfare services,” which will come as no surprise to anyone who has ever heard of Rosie the Riveter. Nonetheless, in 2017 it is generally assumed that Generation X both invented and perfected the concept of the latchkey kid. Scattered references can be found before 1980, but the phrase really took off afterwards, which explains why Gen X gets the credit. (Full disclosure: I’m a proud member of Generation X (the older end) but was not a latchkey kid.) I can’t find any sign that “latchkey child/kid” came along before World War II, certainly not as early as the nineteenth century. It’s easy to imagine a Victorian illustration of a disconsolate waif with a key on a string or chain (not a lanyard) around her neck, but the term was not needed then because the kids were working the same hours as their parents. We still have plenty of latchkey kids, of course, but the novelty has worn off. Today, Free Range Kids carries on the tradition of advocating unsupervised time for children.
God help us, a lot of those Gen X’ers are parents now, and they indulge in the eternal practice of contrasting their kids’ experience unfavorably with their own. The Generation Next of parents proclaims that all that time with no adults in the house made them resilient and self-reliant, and maybe it did. But then why have so many turned into helicopter parents who starve their own kids of opportunities to learn how to manage without adult intervention? I suspect such generational shifts aren’t all that unusual, because parents have a commendable desire to spare their children the traumas they had to go through. But the wider tendency to bewail these kids today goes back a long time, too long and steady to be wholly unfounded. Every generation of parents sees their own experiences as definitive and notices only that which has deteriorated. The thing is, a lot of the time they’re right; standards do change, sometimes for the worse, and good parents must be especially alert to such slippages.
We associate latchkey kids with working single mothers and always have, though plenty of them have working fathers. From this has arisen a certain stigma the phrase can never seem to shake. Even today, it is used as a class marker, one of many indications of poverty, crime, substandard education, and the rest of it. Numerous studies suggest that latchkey kids don’t generally do worse than average; such studies share the fate of all research that calls easy explanations into question. We just know that the kids are worse off now and/or will do worse as adults; don’t try to tell us different. It is common to read nostalgic accounts of eighties childhoods, but at the time most press coverage — and there was quite a bit — was marked by dubiety. Some researchers pointed to pervasive fear among latchkey kids of emergencies they were unequipped to handle, or of intruders, or just of being all alone in an empty house. Latchkey kids may not want to relate such feelings to their parents, knowing that expressing doubt or anxiety will disappoint or irritate their hard-working elders. Then again, some kids learned to keep house, manage their time, or just watch lots of television. It’s unlikely that most parents want to leave their kids alone day in and day out, but unless the kid shows obvious ill effects, there’s no point feeling guilty over it.
coffee date
(1990’s | teenagese? | “first date,” “brief encounter,” “not even a date, really”)
It’s tempting to see the rise of the phrase “coffee date” as concomitant with the rise of the gourmet coffee craze (which hasn’t abated), and the expression did become a lot more common around the time Starbucks did, from the late eighties to the mid-nineties. On-line dating services made their mark only a few years later and produced many more coffee dates, but the term existed well before that. Google Books fishes up a solid reference in Mademoiselle magazine from 1966 (not that Google Books’s dating is all that reliable). That article explained that the coffee date was the college equivalent of the Coke date. There’s no obvious origin for either phrase that I can find in my limited corpora; maybe it bubbled up from below.
College students being so mature and all, naturally they prefer coffee. But the point of the coffee date is not what you consume; it’s a probationary first meeting, which the parties use to size each other up. So it must be short, inexpensive, casual, easy to escape, and in a neutral, public place. Nothing much can happen, and that’s the point. If you hit it off, maybe a lunch date next. “Lunch date,” “dinner date,” and “movie date” are older terms — or at least they became common earlier — that imply a progression whose first step now is the coffee date.
Coffee dates have become so firmly part of the romantic how-to manual that a reaction has developed. While conventional wisdom still recommends them as sensible first meetings, certain apostates, such as this eHarmony blogger, dismiss them as old hat and unlikely to lead to serious relationships; others question whether they should be called “dates” at all. There are always doubters, but even they can’t deny that the dating landscape has changed, tilting the playing field decisively toward Starbucks.
An expression, and concept, with a verifiable origin. The on-line consensus — unanimous as far as I can tell — says that Rabbi Yaakov Deyo and his wife invented speed dating in 1998 as a way to encourage Jewish singles to meet each other and form relationships. It goes like this: between five and ten women sit at individual tables. The same number of men wait nearby. At a signal, each man sits down at a table and talks with the woman for eight minutes, then moves to the next table and does it again. In slightly over an hour, you meet several candidates, at least one or two of whom might be worth a follow-up. Now several national organizations sponsor speed-dating events, which may or may not have any religious, ethnic, or gender restrictions. The practice is sometimes known as “turbo-dating.”
I was struck by the ritual character of speed dating, which was after all created by a rabbi. The basics of the process don’t vary much regardless of who’s in charge: several conversations in succession, each a fixed period of time; then participants notify the organizer which live one(s) are of further interest, whereupon the organizer puts two people in touch if they appear on each other’s lists. Perhaps the level of rigor does not measure up to the detailed ritualistic instructions of the Torah, but there’s a rule-bound quality all the same. One site notes the roots of speed dating in the traditional Jewish concept of the shidduch (match), basically an arranged marriage made with the help of a middleman or -woman. At any rate, like many concepts invented by Jews, from monotheism to relativity, speed dating has spread quickly and exercised tremendous influence.
It’s another kind of prescribed first date and so is related to the coffee date, but it’s even more circumscribed. Like a coffee date, your chances of success are low but the investment of time and energy is small, and like a coffee date, it can only arguably be called a date at all. Speed dating is distinctive because of the sheer number of people involved; if you buy the theory that most of the time we decide in a matter of seconds whether we’re attracted to someone or not, the approach makes sense. Just get a bunch of generally like-minded, well-disposed people in the same room and let nature take its course. The irony is that while speed dating looks like it was designed to deal with a glut of possibly eligible partners, it was actually invented to keep members of a relatively small, insular group from finding mates elsewhere. (Of course, in a large city like Los Angeles, where Rabbi Deyo first tried out speed dating, there are thousands of unattached Jewish adults, still an impossible number to navigate on one’s own.)
When a reader asked advice columnist Carolyn Hax her opinion of speed dating, she replied, “I liked it a whole lot better when it was called a ‘cocktail party.’” The point is well taken; speed dating is a highly regulated version of what was once known as “mingling.” You went to a party with people you didn’t know, and you went around and talked to them, allowing you to determine who might be a possible romantic interest. No timekeepers or chaperones required, and if you wanted someone else to have your number, you gave it to them. I’ve never tried speed dating, but I was never much good at mingling, so something tells me I wouldn’t be much of a speed dater, either. Both of my long-term relationships began with dates that lasted several hours, so maybe that’s just how I roll.
all in
(2010’s | militarese? | “giving one’s all,” “bound and determined”)
“All in all.” “All-in-one.” “All in the wrist.” “All in your head.” “All in the same boat.” “All in good time.” Or you could just settle for “all in,” shorn of superfluous objects and uttered with quiet conviction. It means we won’t turn back; we won’t give in. But that’s not what it meant in my childhood. Back then “all in” meant “worn out,” “exhausted.” That definition was on its way out then, and the usage we see today represents a revival, doubtless an unnecessary one. In poker, it meant “having put all one’s chips in the pot” (which makes more sense). “All in” was a bit anomalous among the many vigorous expressions for states of lassitude. Most of them are straight predicate adjectives: “beat,” “pooped,” “spent,” “wrecked.” It reminds me a little of “done in,” but literally that means “murdered,” something much stronger. The old usage (citations date back to the nineteenth century in Lighter) is mostly gone, but I believe the term is still current in poker. (Ian Crouch gives a good account of the evolution of “all in” in the New Yorker.) In the modern sense, popularized by David Petraeus’s biography (2012), it also seems related to poker somehow, but in a more positive way — a confidence in the supremacy of your hand that causes you to bet your entire stack of chips without hesitation. But “all in” doesn’t connote arrogance or unseemly displays of power so much as steely resolve or unswerving attention to the task at hand. “All in” is what you are at the beginning of the day; it used to be what you are at the end of the day.
Theoretically it ought to be possible to be “all in” squared — bent on reaching the goal AND too tired to go on. But the effort required to maintain such commitment precludes helplessness born of weariness. Being all in implies that you have enough energy to figure out and make the next move, or enough force of will to overcome the newest obstacle. The other verb that precedes the expression is “go,” which reminds us of how closely it resembles “go all out,” a phrase much beloved of sports announcers in my youth. I don’t listen to play-by-play as much as I used to, but I have the impression we don’t hear “go all out” much any more.
“All” in itself implies a group, so “all in” should suggest effort toward a common goal, as in “we’re all in this together.” It may, but it doesn’t have to. It is possible to go all in on your own private project, but it might sound a little odd. When politicians and military people use it, there’s at least a hint of pulling together. That assumption of camaraderie is made explicit in what may prove to be yet another new meaning for the expression. Penn State University’s “All In” initiative provides an example, the motto being “A Commitment to Diversity and Inclusion.” Here the term is used very self-consciously to express the ideal of a tolerant, easy-going community. Donald Trump’s ascendance has given this sort of communitarianism a boost, and so I suspect we may see the expression used this way more and more. Keep your eyes peeled; “all in” may shed its skin yet again.