Thursday, October 19, 2017

Silent Republicans Have Their Reasons. They Don't Have an Excuse.

Whatever his impact may be on the country or the world, Donald Trump’s presidency imperils the future of his party, and there isn’t a serious-minded Republican in Washington who would tell you otherwise, privately.

In the short term, Trump’s determination to upend the health care market, his vague tax plan that’s already unpopular, an approval rating that can’t crack 40 percent, his exhausting and inexhaustible penchant for conflict — all of it threatens to make a massacre of the midterm elections, if you go by any historical marker.

In the longer term, it’s plausible to think that Trump’s public ambivalence toward white supremacists, along with his contempt for immigrants and internationalism, could end up rebranding Republicans, for generations, as the party of the past.

Trump doesn’t care what happens to Republicans after he’s gone. The party was always like an Uber to him — a way to get from point A to point B without having to find some other route or expend any cash.

Which leads to the question I hear all the time these days. Why aren’t more Republicans separating themselves from Trump? And why aren’t they doing more with the power they have to get in his way?

Sure, you have a senator like Bob Corker, a party pillar and noted straight shooter, who publicly worried that an unrestrained Trump might bumble his way into World War III. That should have been sobering.

But barely a week later, here’s Mitch McConnell, the majority leader whom Trump has repeatedly demeaned, standing in the Rose Garden, smiling thinly and making hollow sounds about unity, allowing himself to be used for another weird Trump selfie.

It’s actually not hard to understand why McConnell and his fellow lawmakers don’t stand up and declare independence from this rancid mess of a presidency.

It’s just increasingly hard to justify.

I don’t read a ton of opinion pieces online, unless they happen to concern the Yankees, but there was one last weekend that caught my attention. It was written by Steve Israel, who until this year was a senior Democrat in Congress, serving Long Island.

Responding to Corker’s sudden eruption of candor, Israel explained that retiring politicians like Corker, who has announced this will be his last term in the Senate, have the luxury of dispensing with political calculation.

“Many of us who’ve left elective life feel a sense of liberation, as if our tongues are no longer strapped to the left or right side of our mouths,” Israel wrote, with admirable flair.

“It’s wonderful to speak your mind without worrying about the next campaign, or parsing every word knowing that some opponent could twist an errant phrase against you out of context.”

We get it. It isn’t news that politicians have to be, you know, political. Or at least politicians not named Trump.

And these days, as I’ve noted many times, the real fear for most elected officials in Washington isn’t that they may say something to offend persuadable voters, whose existence no one really believes in anymore, like Bigfoot or Bill O’Reilly.

No, the fear now, if you’re sitting on either end of the Capitol, is that some no-name activist will decide to primary you, because you’ve somehow run afoul of extremists with followings on Twitter and Facebook, and you’ll have to spend all your time and money holding onto a job that you might very well lose, since it takes only one fringe group or millionaire and a few thousand angry voters to tip the balance in your average congressional primary.

The fact that Israel is the one writing about this dilemma should tell you that this isn’t simply a Republican phenomenon. Yes, Republicans are more tightly wedged between conscience and job security right now, because the president is constantly putting both in jeopardy.

But Democrats, too, often find themselves pinned between reason and reflexive ideology, mouthing mantras of economic populism that aren’t all that different from what Trump believes, and that most of them know to be painfully simplistic. Serving in Congress now, on either side of the aisle, often means hearing from a tiny slice of loud activists first, and everyone else where you can fit them in.

Our primary system wasn’t designed for an age when social media could supplant institutional loyalties, and at the moment it’s skewing the entire political process. So Israel offers a pretty fair explanation for why his former colleagues remain so maddeningly reticent.

by Matt Bai, Yahoo News | Read more:

How Social Media Endangers Knowledge

Wikipedia, one of the last remaining pillars of the open and decentralized web, is in existential crisis.

This has nothing to do with money. A couple of years ago, the site launched a panicky fundraising campaign, but ironically, thanks to Donald Trump, Wikipedia has never been as wealthy or well-organized. American liberals, worried that Trump’s rise threatened the country’s foundational Enlightenment ideals, kicked in a significant flow of funds that has stabilized the nonprofit’s balance sheet.

That happy news masks a more concerning problem—a flattening growth rate in the number of contributors to the website. It is another troubling sign of a general trend around the world: The very idea of knowledge itself is in danger. (...)

As Neil Postman noted in his 1985 book Amusing Ourselves to Death, the rise of television introduced not just a new medium but a new discourse: a gradual shift from a typographic culture to a photographic one, which in turn meant a shift from rationality to emotions, exposition to entertainment. In an image-centered and pleasure-driven world, Postman noted, there is no place for rational thinking, because you simply cannot think with images. It is text that enables us to “uncover lies, confusions and overgeneralizations, to detect abuses of logic and common sense. It also means to weigh ideas, to compare and contrast assertions, to connect one generalization to another.”

The dominance of television was not contained to our living rooms. It overturned all of those habits of mind, fundamentally changing our experience of the world, affecting the conduct of politics, religion, business, and culture. It reduced many aspects of modern life to entertainment, sensationalism, and commerce. “Americans don’t talk to each other, we entertain each other,” Postman wrote. “They don’t exchange ideas, they exchange images. They do not argue with propositions; they argue with good looks, celebrities and commercials.”

At first, the Internet seemed to push against this trend. When it emerged toward the end of the 1980s as a purely text-based medium, it was seen as a tool to pursue knowledge, not pleasure. Reason and thought were most valued in this garden—all derived from the project of the Enlightenment. Universities around the world were among the first to connect to this new medium, which hosted discussion groups, informative personal or group blogs, electronic magazines, and academic mailing lists and forums. The web itself was an intellectual project, not one of commerce or control, created at a scientific research center in Switzerland.

Wikipedia was a fruit of this garden. So was Google search and its text-based advertising model. And so were blogs, which valued text, hypertext (links), knowledge, and literature. They effectively democratized the ability to contribute to the global corpus of knowledge. For more than a decade, the web created an alternative space that threatened television’s grip on society.

Social networks, though, have since colonized the web for television’s values. From Facebook to Instagram, the medium refocuses our attention on videos and images, rewarding emotional appeals—‘like’ buttons—over rational ones. Instead of a quest for knowledge, it engages us in an endless zest for instant approval from an audience, for which we are constantly but unconsciously performing. (It’s telling that, while Google began life as a PhD thesis, Facebook started as a tool to judge classmates’ appearances.) It reduces our curiosity by showing us exactly what we already want and think, based on our profiles and preferences. Enlightenment’s motto of ‘Dare to know’ has become ‘Dare not to care to know.’

It is a development that further proves the words of French philosopher Guy Debord, who wrote that, if pre-capitalism was about ‘being’, and capitalism about ‘having’, in late-capitalism what matters is only ‘appearing’—appearing rich, happy, thoughtful, cool and cosmopolitan. It’s hard to open Instagram without being struck by the accuracy of his diagnosis.

by Hossein Derakhshan, Wired | Read more:
Image: Alan Schein

Wednesday, October 18, 2017

Philip-Lorca diCorcia

The Deep Unfairness of America’s All-Volunteer Force

As far as we know, the phrase “all-recruited force” was coined by Karl Marlantes, author of Matterhorn: A Novel of the Vietnam War, a book that provides vivid insight into the U.S. Marines who fought in that conflict. Mr. Marlantes used the expression to describe what’s happened to today’s allegedly “volunteer” force, to say in effect that it is no such thing. Instead it is composed in large part of people recruited so powerfully and out of such receptive circumstances that it requires a new way of being described. We agree with Mr. Marlantes. So do others.

In The Economist back in 2015, an article about the U.S. All-Volunteer Force (AVF) posed the question: “Who will fight the next war?” and went on to describe how the AVF is becoming more and more difficult to field as well as growing ever more distant from the people from whom it comes and for whom it fights. The piece painted a disturbing scene. That the scene was painted by a British magazine of such solid reputation in the field of economics is ironic in a sense but not inexplicable. After all, it is the fiscal aspect of the AVF that is most immediate and pressing. Recruiting and retaining the force has become far too costly and is ultimately unsustainable.

When the Gates Commission set up the rationale for the AVF in 1970, it did so at the behest of a president, Richard Nixon, who had come to see the conscript military as a political dagger aimed at his own heart. One could argue that the decision to abolish conscription was a foregone conclusion; the Commission simply provided a rationale for doing it and for volunteerism to replace it.

But whatever we might think of the Commission’s work and Nixon’s motivation, what has happened in the last 16 years—interminable war—was never on the Commission’s radar screen. Like most crises, as Colin Powell used to lament when he was chairman of the Joint Chiefs of Staff, this one was unexpected, not planned for, and invites denial as a first reaction.

That said, after 16 years of war it is plain to all but the most recalcitrant that the U.S. cannot afford the AVF—ethically, morally, or fiscally.

Fiscally, the AVF is going to break the bank. The land forces in particular are still having difficulties fielding adequate numbers—even with lowered standards, substituting women for men (from 1.6 percent of the AVF in 1973 to more than 16 percent today), recruitment and reenlistment bonuses totaling tens of millions of dollars, advertising campaigns costing billions, massive recruitment of non-citizens, use of psychotropic drugs to recycle unfit soldiers and Marines to combat zones, and overall pay and allowances that include free world-class health care and excellent retirement plans that are, for the first time in the military’s history, comparable to or even exceeding civilian rates and offerings.

A glaring case in point is the recent recruitment by the Army of 62,000 men and women, its target for fiscal year 2016. To arrive at that objective, the Army needed 9,000 recruiting staff (equivalent to three combat brigades) working full-time. If one does the math, that equates to each of these recruiters gaining one-point-something recruits every two months—an utterly astounding statistic. Additionally, the Army had to resort to taking a small percentage of recruits in Mental Category IV—the lowest category and one that, post-Vietnam, the Army made a silent promise never to resort to again.
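The recruiter arithmetic is easy to check. A minimal sketch (the 62,000-recruit target and 9,000-recruiter figures are from the article; the per-recruiter rates are simply derived from them):

```python
# Figures reported in the article for fiscal year 2016.
recruits = 62_000    # Army recruitment target
recruiters = 9_000   # full-time recruiting staff

# Each recruiter's share of the annual target.
per_recruiter_per_year = recruits / recruiters   # ~6.89 recruits per year

# A year contains six two-month periods.
per_two_months = per_recruiter_per_year / 6      # ~1.15 recruits

print(round(per_recruiter_per_year, 2))
print(round(per_two_months, 2))
```

The result, roughly 1.15 recruits per recruiter every two months, matches the article’s “one-point-something” characterization.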

Moreover, the recruiting and retention process and rich pay and allowances are consuming half of the Army’s entire annual budget, precluding any affordable increase in its end strength. This end strength constraint creates the need for more and more private contractors on the nation’s battlefields to compensate. The employment of private contractors is politically seductive and strategically dangerous. To the enemies we fight, they are the enemy; to most reasonable people, they are mercenaries. Mercenaries are motivated by profit, not patriotism—despite their CEOs’ protestations to the contrary—and place America on the slippery slope toward compromising the right of sovereign nations to the monopoly of violence for state purposes. In short, Congress and the Pentagon make the Army bigger than the American people believe it is, and the American people allow themselves to be convinced; it is a shared delusion that comforts both parties.

A more serious challenge for the democracy that is America, however, is the ethical one. Today, more than 300 million Americans lay claim to rights, liberties, and security that not a single one of them is obligated to protect and defend. Apparently, only 1 percent of the population feels that obligation. That 1 percent is bleeding and dying for the other 99 percent.

Further, that 1 percent does not come primarily or even secondarily from the families of the Ivy League, of Wall Street, of corporate leadership, from the Congress, or from affluent America; it comes from less well-to-do areas: West Virginia, Maine, Pennsylvania, Oklahoma, Arkansas, Mississippi, Alabama, and elsewhere. For example, the Army now gets more soldiers from the state of Alabama, population 4.8 million, than it gets from New York, Chicago, and Los Angeles combined, aggregate metropolitan population more than 25 million. Similarly, 40 percent of the Army comes from seven states of the Old South. As one of us has documented in his book, Skin in the Game: Poor Kids and Patriots, this is an ethically poisonous situation. And as the article in The Economist concludes, it’s dangerous as well.

The last 16 years have also generated, as wars tend to do, hundreds of thousands of veterans. The costs of taking care of these men and women are astronomical today and will only rise over the next decades, which is one reason our veterans are already being inadequately cared for. Without the political will to shift funds, there simply is not enough money to provide the necessary care. And given the awesome debt America now shoulders—approaching 20 trillion dollars and certain to increase—it is difficult to see this situation changing for the better.

In fact, when one calculates today’s U.S. national security budget—not simply the well-advertised Pentagon budget—the total expenditure of taxpayer dollars approaches $1.2 trillion annually, or more than twice what most Americans believe they are paying for national security. This total figure includes the costs of nuclear weapons (Energy Department), homeland security (Homeland Security Department), veteran care (Veterans Administration), intelligence needs (CIA and Defense Department), international relations (State Department), and the military and its operations (the Pentagon and its slush fund, the Overseas Contingency Operations account). The Pentagon budget alone is larger than that of the next 14 nations in the world combined. Only recently (September 2016), the Pentagon leadership confessed that as much as 50 percent of its slush fund (OCO) is not used for war operations—the fund’s statutory purpose—but for other expenses, including “military readiness.” We suspect this includes recruiting and associated costs.

There is still another dimension of the AVF that goes basically unmentioned and unreported. The AVF has compelled the nation to transition its reserve component forces from what they have been since colonial times—a strategic reserve—into being an operational reserve. That’s military-speak for our having used the reserve components to make up for deeply felt shortages in the active force. Nowhere is this more dramatically reflected than in the rate of deployment-to-overseas duty of the average reservist, now about once every 3.8 years.

by Dennis Laich and Lawrence Wilkerson, The American Conservative | Read more:
Image: U.S. Army photo by Spc. Advin Illa-Medina

The Only Job a Robot Couldn't Do

The gig economy is depressing. Crowdtap is worse.

About 8 percent of U.S. adults earn money by doing contract work through so-called “gig economy” employers like Uber, Favor, or Amazon Mechanical Turk, according to the Pew Research Center; if the trend continues, one-third of Americans will support themselves this way by 2027.

The gig economy is growing rapidly, but it’s also changing how we think about what it means to work. Uber and other online platforms are making the case for a future in which work happens in little on-demand bursts — you need a ride, and someone appears to give you that ride. Instead of a salary and benefits like health insurance, the worker gets paid only for the time they’re actually driving you around.

I’m a researcher who studies how people work and I have a hard time endorsing this vision of the future. When I see Favor delivery drivers waiting to pick up a to-go order, I imagine a future in which half of us stand in line while the other half sit on couches. And then I imagine a future in which all these mundane tasks are automated: the cars drive themselves, the burritos fly in our windows on drones. And I wonder how companies are going to make money when there are no jobs and we can’t afford to buy a burrito or pay for a ride home from the bar.

To understand the answer to that question, you have to scrape the very bottom of the barrel of gig work. You have to go so far down that you don’t even call it work anymore. You have to go to Crowdtap.

Crowdtap pitches itself as a place where anyone can “team up” with their “favorite brands” and “get rewarded for your opinions” by completing “missions,” such as filling out surveys and posting product promotions to social media. These actions earn points that can be traded in for gift cards.

Along the way, you might also get a few free samples or coupons. There are also occasional contests where members can win additional gift cards or larger prizes. A lot of what Crowdtappers do is post brand promotions to social media. As I’m writing this, for example, I’m looking at someone who has tweeted, in the last ten minutes, about Febreze (“Would totally use Febreze to fight persistent odors in my basement”), squeezable apple sauce (“Great deal, just in time for the new school year!”), Splenda (three times), cheese snacks, LensCrafters (twice), Suave (twice), and McDonald’s.

A cynical view of Crowdtap is that it’s just another form of social media marketing — people being paid to tell their friends to buy things. But then you realize that these people don’t have large social media followings (in fact, Crowdtap users I interviewed said they create throwaway social media accounts to use exclusively for Crowdtap). And sharing content on social media isn’t even required — the default option is to share, but you get your points either way. Crowdtap passes members’ responses on to brands, but otherwise nobody is listening to what they say. No one is responding. There’s very little about this that might be called social. Imagine someone wandering alone in a giant desert, shouting “I love Big Macs!” into the sky. That’s Crowdtap.

So here’s an even more cynical view: Crowdtap isn’t social media marketing. Instead, it’s a form of work that rewards people for selling products to themselves. And if we imagine a future in which more people work in the gig economy, in which income inequality continues to increase, and in which brands need new ways to stimulate consumption, it might give some hints about what’s coming after Uber.

In the summer of 2016, I interviewed twelve people who use Crowdtap and similar online platforms. I compensated the interviewees, as is standard in academia, in order to get a more diverse sample and also because it seemed like the ethical thing to do given the fact that they could have spent that time Crowdtapping. I offered $5 Amazon gift cards and explained that the interview was for labor research. In exchange, I asked them what their lives are like and why they do what they do.

The people I talked to generally don’t have standard jobs. Seven referred to themselves as stay-at-home moms. Two are retired. One described herself as disabled, and two told me they have disabled children. Two said that they rarely leave the house, and several others described similarly isolating daily routines. Almost all of them said, unprompted, that they use the gift cards they earn to buy Christmas presents for family members. But they talk about the free products they receive, too, which Crowdtap sends them in order to complete tasks — how helpful it is to get shampoo, painkillers, tampons, or dog food, for the small price of answering surveys about their mustard purchase habits or commenting on an Instagram photo of a redesigned Splenda bottle.

One of the dark sides of Crowdtap and similar platforms is how little participants end up making. Crowdtap doesn’t advertise itself as a kind of work (although its Membership Agreement makes clear that participants are classified as independent contractors), and members typically don’t see Crowdtap as an employer, either. But people I talked to said they spend an average of 15 hours a week on these platforms, and, when I asked how much they end up making based on the value of the free products, coupons, and gift cards, the answers ranged from 25 cents an hour to about $11, with an average of $2.45.

But beyond the far-below-minimum-wage payments, the more insidious side of this kind of work is that it masquerades as something it’s not. Crowdtap says it’s about letting people share their opinions with brands and with other consumers, but the flow of information actually goes the other way. Crowdtap isn’t about letting people speak to brands; it’s about letting brands speak to people.

I didn’t fully understand this until I decided to spend an afternoon using Crowdtap. During that time, here’s what I did:
  • Picked a list of my favorite brands.
  • Answered questions such as “What are the benefits of aloe in a facial skin care product?”
  • Visited pages that provided information about various cheese products.
  • Filled in prompts about these cheese products such as “Think my fam will love ______.”
  • Tweeted these responses, being sure to include hashtags such as #HorizonCrowd and links to product pages.
  • Wrote a short response about why I was “hoping to try out a delicious variety” of cheese snacks. I was reminded to “mention both what kind of occasions you envision your family snacking … and which aspects of the snack pack appeal to you most.”
Answering a survey question usually earns two points. Posting something to social media usually earns 20. Uploading a photo earns 30. To get your first two $5 Amazon gift cards, you need 500 points; after that, it takes 1,000 points for each gift card. As I wrote short tweets and answered survey question after question, I felt myself pulled in two directions — between getting through the rote tasks as quickly as possible and putting enough effort into my responses that I wouldn’t get kicked off the platform. As one woman told me, “A lot of times it gets really repetitive. You're looking at the same links over and over again so you just fly through it.” But flying through can feel a little unsettling when you’re not sure if anybody is checking your work and you know you’re an independent contractor who can be dismissed at any time.
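The points economy above can be tallied directly. A small sketch using the rates reported in the piece (the all-one-task-type mixes are a hypothetical illustration, not how any real member necessarily works):

```python
import math

# Point values reported in the article.
POINTS = {"survey": 2, "social_post": 20, "photo": 30}

FIRST_CARD = 500     # points for each of the first two $5 gift cards
LATER_CARD = 1_000   # points per $5 gift card after that

# Tasks needed for one card if a member stuck to a single task type.
for task, pts in POINTS.items():
    first = math.ceil(FIRST_CARD / pts)
    later = math.ceil(LATER_CARD / pts)
    print(f"{task}: {first} tasks for an early card, {later} thereafter")
```

Surveys alone would take 250 completions for an early $5 card and 500 for every card after that, which helps explain the far-below-minimum-wage hourly figures the interviewees reported.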

Regardless of whether we call it work or “connecting with brands,” these repetitive tasks are a pretty intense form of consumer education. And when you repeat these activities hour after hour, spending enough time to actually earn a $5 gift card, you probably pick up some habits that brands are happy about. Such as:
  • Good consumers have favorite brands.
  • Good consumers are knowledgeable about the products produced by those brands. Specifically, they can list features that make those products better than others.
  • Good consumers think about buying and using products in the future, and they tell people about their plans.
  • Good consumers include brand hashtags when they talk about products online.
There’s a further level of education built into these activities as well, in that you often have to already have bought certain items before you can earn your points. For example, a lot of the tasks are intended to be completed on a smartphone with a camera. A mother of two told me her family had recently purchased a smartphone with their tax refund so she could participate in these tasks. Another, an unemployed single mother, told me that she couldn’t complete all the tasks offered because she just couldn’t afford to: “They want you to take a picture of you eating lobster. I don't eat lobster and I can't afford lobster, which is why I'm getting Amazon gift cards from Crowdtap.”

Good consumers keep their personal electronics up to date. And they eat lobster (or at least aspire to).

by Daniel Carter, The Outline | Read more:
Image: Rune Fisker

Is Democracy in Europe Doomed?

On the morning of April 23, 2017, as the polls opened in the ninth arrondissement of Paris, an old man with a cane positioned himself in front of a bright yellow mailbox and began to scrape. After a few minutes, he sauntered away toward the markets of the rue des Martyrs, leaving a torn and scratched relic of the modified hammer-and-sickle logo of the hard-left candidate Jean-Luc Mélenchon’s party, La France Insoumise (“Rebellious” or, literally, “Unsubmissive France”).

The old man, evidently no fan of Mélenchon’s anticapitalist, anti-NATO, pro-Russian rhetoric, had reason to worry. In neighborhoods like this, the epicenter of Paris hipsterdom, Mélenchon polled well. Everyone from student protesters to academics and the well-to-do scions of one of the city’s wealthiest families told me they were voting for the ex-communist firebrand. (...)

Le Pen and Mélenchon together drew nearly 50 percent of the youth vote in the first round, splitting the 18-34 age bracket evenly. Unlike in Britain’s Brexit referendum, the young did not support the status quo; they voted for extremists who want to leave the EU.

Those who believe millennials are immune to authoritarian ideas are mistaken. Using data from the World Values Survey, the political scientists Roberto Foa and Yascha Mounk have painted a worrying picture. As the French election demonstrated, belief in core tenets of liberal democracy is in decline, especially among those born after 1980. Their findings challenge the idea that after achieving a certain level of prosperity and political liberty, countries that have become democratic do not turn back.

In America, 72 percent of respondents born before World War II deemed it absolutely essential to live in a democracy; only 30 percent of millennials agreed. The figures were similar in Holland. The number of Americans favoring a strong leader unrestrained by elections or parliaments has increased from 24 to 32 percent since 1995. More alarmingly, the number of Americans who believe that military rule would be good or very good has risen from 6 to 17 percent over the same period. The young and wealthy were most hostile to democratic norms, with fully 35 percent of young people with a high income regarding army rule as a good thing. Mainstream political science, confident in decades of received wisdom about democratic “consolidation” and stability, seemed to be ignoring a disturbing shift in public opinion.

There could come a day when, even in wealthy Western nations, liberal democracy ceases to be the only game in town. And when that day comes, those who once embraced democracy could begin to entertain other options. Even Ronald Inglehart, the celebrated eighty-three-year-old political scientist who developed his theory of democratic consolidation more than four decades ago, has conceded that falling incomes, rising inequality, and the abject dysfunction of many governments—especially America’s—have led to declining support for democracy. If such trends continue, he wrote in response to Foa and Mounk, “then the long-run outlook for democracy is indeed bleak.” Part of voters’ disillusionment stems from the political establishment’s failure to confront very real tensions and failures of integration, opening the door for a web-savvy army of right-wing propagandists who put forth arguments that are both offensive and easily digestible.

Others have been more nuanced. Christopher Caldwell’s provocative 2009 book, Reflections on the Revolution in Europe, stood out from the chorus of shrill, alarmist writers who warned that mass migration posed a fundamental threat to European culture and stability. His was a serious and carefully argued book. The central question he posed was, “Can Europe be the same with different people in it?” He held that the erosion of old Christian values and a strong sense of national pride in much of Western Europe weakened the cultural identity of countries to the point that they were no match for the all-encompassing identity offered by Islam.

This account seemed prescient when it was published, but it was premature. The rhetoric of anti-immigration parties and right-wing propagandists has propelled the rise of a powerful countervailing form of extremism: white identity politics. In France, this movement was not strong enough to put Marine Le Pen in power, but it did garner over one-third of valid votes cast in France’s presidential runoff. And like fundamentalist Muslims, white nationalists idealize a pure, imagined past. Both extremist visions feed off one another, and they have the power to tear Europe apart.

The nagging question today is which Europe will ultimately win. In the wake of Macron’s victory in the French election, it is tempting to think that the plague of populist nationalism has been banished. That would be naive.

Within minutes of Macron’s win on May 7, 2017, the triumphalism began across the world. Macron defeats radicalism, proclaimed Spain’s El País. France stems tide of populist revolution, Britain’s Independent cheered. White nationalism gets thumped, declared David Leonhardt in The New York Times the next morning. The euphoria that greeted Macron’s victory is understandable but dangerous. Le Pen’s FN won over 10.5 million votes, double the number her father received in 2002, drawing in supporters from both the far left and center right. She ran a serious and competent campaign, unlike other far-right figures. As with Holland, where Geert Wilders’s weaker-than-expected showing in the March 2017 election was interpreted as a signal that populism’s march had been halted, there is no cause for celebration, as the strong showing of Austria’s right-wing populist Freedom Party in Sunday’s election proved.

Wilders performed poorly because the few times he did campaign, he was surrounded by a phalanx of armed guards in small villages filled with supporters. Le Pen, by contrast, stumped all across the country and braved crowds throwing eggs at her in staunchly anti-FN Brittany. She even tried to upstage Macron in his hometown, Amiens, where he waded into a hostile crowd of striking Whirlpool workers and, rather than pandering, told them he wouldn’t make any “airy promises” to avert the closure of their factory. When Le Pen heard he was going to visit, she descended on the site with her entourage first, seeking to bolster her credentials with workers whom she knew would not be receptive to Macron’s free-market message. It was a bold move akin to Trump’s visit to an Indiana air conditioner factory a few weeks after the election, where he sought to show that he was already saving American jobs.

Even in Paris, where Le Pen’s posters were routinely defaced with the word “SATAN,” there was no unanimity about how to fight her. Unlike in 2002, the front républicain that had battered Le Pen the elder did not materialize this time. Macron’s victory, with 66 percent of the vote, was a convincing one, but it was nowhere near Jacques Chirac’s 82 percent score—a testament to what Marine Le Pen has achieved. After the FN’s loss, Le Pen gave a concession speech that sounded more like a campaign rally for the upcoming legislative elections. If the FN finally abandons its name and the baggage that comes with it, new leaders, like Le Pen’s young and telegenic niece, Marion Maréchal-Le Pen, may be able to de-demonize the party in a way that Marine could not.

Too many people on the European left scoff at nationalism, mistaking their own distaste for evidence that the phenomenon no longer exists or is somehow illegitimate. If 2016 and 2017 have proven anything, it is that this sort of visceral nationalism, or loyalty to one’s in-group, still exists and is not going away. Those who dismiss this sort of national sentiment as backward and immature do so at their own peril.

What the globalists of the transnational elite miss is that not everyone has the luxury of leaving. Those who don’t have the education and skills to travel abroad often resent those who do. To compensate, they identify strongly with the place they come from and support politicians who promise to protect them from both genuine and imaginary threats. They do not have the luxury of voting with their feet, but their protest is felt at the polls.

To dismiss the populist impulse as something completely alien is to miss the point and to preemptively lose the political debate. Even without actual control of the government, populist movements have proved they can exert influence and shape debates.

The first step in any coherent political project to counter right-wing populists is to reject the fear that fuels their popularity and resist the temptation to adopt their policies. Very few leaders have done this. In Holland and Denmark, the center right and the social-democratic left have largely caved and adopted planks from the populists’ platform. The left has lost much of its old base by appearing to care only about free trade, technological progress, and limitless diversity. This scares many people who used to vote for the Democratic Party, British Labour, or European Social Democrats.

Nativist politicians like Trump or Holland’s Geert Wilders are not particularly concerned with bread-and-butter issues, and their economic policies aren’t terribly helpful to workers and the poor. But because there is often no class-based counterargument coming from the left, it is easy for right-wing populists to seize that political terrain; it is an open space. Once the old economic battle lines disappear, realignment becomes very easy. The challenge for today’s left is to acknowledge these voters’ fears and offer policies that help address their grievances without making the sort of moral concessions that lead toward reactionary illiberal policies.

by Sasha Polakow-Suransky, NYRB | Read more:
Image: Jean-Paul Pelissier/Reuters

Tuesday, October 17, 2017

Joni Mitchell: Fear of a Female Genius

In one of the golden, waning years of the 1960s, Chuck Mitchell told his young wife to read Saul Bellow’s novel Henderson the Rain King. It was not a gesture of marital kindness so much as a power move: Chuck was older and more educated than Joan, and to her ears, his book recommendations always came with a tone of condescension. (“I’m illiterate,” she bemoaned to a friend around that time. “My husband’s given me a complex that I haven’t read anything.”) Chuck and Joan were both folk singers who played as a duo—together if not exactly equal. He was traditional where she was itchily forward-thinking (“Lately he’s taken to saying I’m crazy and blind,” she’d later sing in one of her own songs, “He lives in another time”). She had, on her guitar, an ability to find strange new tunings that Chuck called “mystical.” His penchant for making his wife feel decidedly un-genius-like was most likely born out of a terror—one that grew stronger with each day—that she actually was one.

Still, one day around 1966, she brought a copy of Henderson with her on a plane. It just so happened that the narrator of the book was also on a plane. “We are the first generation to see clouds from both sides,” he wrote, and Joni read. “What a privilege! First people dreamed upward. Now they dream both upward and downward.” That passage snagged something inside her. She closed the book. She scribbled some lyrics, and when the plane landed she picked up a guitar and twirled the tuning knobs until she found the properly improper chords to accompany her words. When she first played the song for Chuck, he scoffed. What could a 23-year-old girl know about “both sides” of life? More than anything, he was insulted that she’d put the book down less than halfway through and hadn’t bothered to finish it. He took this as evidence of her inferior intelligence, her “rube” upbringing, her flighty attention span. And yet, what else was there to get out of Henderson the Rain King? What more could a human being possibly get out of a book than Joni Mitchell putting it down to write “Both Sides Now”?

Some people think that when a woman takes her husband’s last name it is necessarily an act of submission or even self-erasure. Joni Mitchell retaining Chuck’s last name for decades after their divorce has always struck me as a defiant, deliciously cruel act of revenge. In the 50 years since, she spread her wings and took that surname to heights and places it never would have reached had it been ball-and-chained to a husband: the hills of Laurel Canyon, The Dick Cavett Show, a window overlooking a newly paved Hawaiian parking lot, the Grammys, Miles Davis’s apartment, Charles Mingus’s deathbed, Matala, MTV, the Rolling Thunder Revue, and the top of a recent NPR list of greatest albums ever made by women. Over a singular career that has spanned many different cultural eras, she explored—in public, to an almost unprecedented degree—exactly what it meant to be female and free, in full acknowledgement of all its injustice and joy.

Not long after “Both Sides, Now” was written, the folk pioneer Joan Baez caught a Chuck and Joni set at the Gaslight Cafe in New York. “I remember thinking, ‘You gotta drop this guy,’” Baez recalled. Soon after, Joni did. Leaving Chuck Mitchell was her first hejira, a variation of an Arabic word she’d later stumble upon in a dictionary that, too, would snag something in her—it means a “flight or journey to a more desirable or congenial place,” or “escape with honor.” There would be many more. Decades later, in a 2015 interview with New York, though, Mitchell reflected on the decision to leave her first marriage. She quoted an old saying: “‘If you make a good marriage, God bless you. If you make a bad marriage, become a philosopher.’ So I became a philosopher.”

It did not take long. In the opening moments of her first album, 1968’s Song to a Seagull, she bid goodbye not only to Chuck, but to the roadmap of a traditional life. This is the chorus of a song called “I Had a King.”

I can’t go back there anymore
You know my keys won’t fit the door
You know my thoughts don’t fit the man
They never can
They never can

There is right now a spirited conversation about women and canonization happening in the music world, and there is right now a new biography of Joni Mitchell on the shelves. If you pay more than passing attention to these topics, you will know that neither of these occurrences is particularly rare, but they are as good reasons as any to take stock of Mitchell’s singular, ever-changing legacy, in the always-fickle light of right now. (...)

“Before canons are handed down, someone has to make them,” Wesley Morris recently wrote in The New York Times Magazine. “The atmospherics around that consecration tend to default to masculinity because the mechanisms that do the consecrating are overwhelmingly male.” Inspired by the NPR list, Morris decided to listen only to music made by women for several months, and to write about his experience. He started with all 150 albums on the NPR list and eventually added 72 more. The result was a sharp, thoughtful essay, but, as critic Judy Berman pointed out on Twitter, it may have mapped a territory that only seemed uncharted to men. “Gorgeous piece,” she wrote, “but jarring that one of our best male critics had to hear 150 albums to get something all women know… I would never think to write this essay, because it just seems obvious to me, but maybe men need to have the conversation amongst themselves.”

Morris’s essay, though, was astute in identifying the cultural forces and biases that combine to create the idea of legacy. It’s true that we’re living through an exceptional time for women in pop music, with mainstream artists like Beyoncé, Rihanna, Taylor Swift, and Adele all pushing boundaries and/or dominating the charts, but, Morris wondered, “What happens in 20 years?” He used the (somewhat selective) example of Donna Summer, who once seemed winningly ubiquitous in the pop world: “Now she’s the epitome of a bygone era instead of the musician who paved a boulevard for lots of women who top charts.” Men, of course, are perceived to grow older more “gracefully” in our sexist, ageist culture. It follows that the masculine forces of canonization and legacy-making are stacked against female artists as they age, and that perhaps the most important time to assert female artists’ importance isn’t so much in the moment of their domination but in that crucial “20 years later.”

Which brings us to Reckless Daughter: A Portrait of Joni Mitchell, an extensive new Joni Mitchell biography by the Syracuse professor and New York Times contributor David Yaffe. It is by no means the first book about Mitchell—actually, you could topple a small bookshelf with its predecessors: Barney Hoskyns’s extensive collection Joni: The Anthology; Joni Mitchell: In Her Own Words (a candid 2014 collection of interviews with the Canadian broadcaster Malka Marom); and Sheila Weller’s Girls Like Us: Carole King, Joni Mitchell, Carly Simon—and the Journey of a Generation (a three-woman biography) to name just a few. But Yaffe does have a few new brushstrokes to add to the canvas, thanks mostly to a series of interviews he’s conducted with Mitchell over the past decade. He flew to her California home in 2007 to interview her for the Times; after the piece ran, she called Yaffe, “bitched [him] out,” and painstakingly enumerated every detail she thought he’d gotten wrong. They didn’t speak for years. Then a mutual friend reconnected them, and over marathon hours and seemingly billions of cigarettes (Mitchell’s longest love affair has, quite possibly, been with American Spirits), the loquacious artist held court while her biographer was given a second chance to tell her story.

Reckless Daughter is an engrossing, well-told, but ultimately conventional biography. It reanimates Mitchell’s incredible history, but it also left me wondering about her current influence and relevance outside the pages of prestigious newspapers and hardcover books. While I was reading the book, a few people mentioned to me that they weren’t sure if Mitchell’s influence was carrying over to millennials. I’ll admit that there’s definitely something internet-proof about her: an unruliness that makes it difficult to distill the adoration down to a gif or a well-chosen photo, the way it is with, say, boomer-turned-Tumblr-icons like Stevie Nicks or Joan Didion. And yet, Mitchell has, in the past, prided herself on being out of step with the times when she did not believe the times were worthy of her footwork. When people told her she was “out of sync” with the ’80s, she felt relieved. To be “in sync with the ’80s,” Yaffe quotes her saying, would have been “degenerating both morally and artistically.”

I was in my mid-20s when I started to realize—with absolute exhilaration and a little fear—that my life was not going to play out on the same traditional feminine timeline as my mother’s and grandmothers’. Then, late last year, I felt a certain cosmic vertigo when I passed the age that my own mother had been when she gave birth to me. Unlike her at 29, I was without a partner, a mortgage, or a concrete five-year plan. Friends were getting married in barns and having children on purpose and putting down payments on houses in the suburbs. I had, a few years prior, moved to New York to write and make new friends and go to the movies alone when I felt like it and live in a rented apartment. Throughout my adulthood, I had made certain choices that had at times looked reckless to the people around me—abruptly leaving unsatisfying jobs or rejecting perfectly decent men—though I knew, intuitively, that they were the correct choices for me at the time. I am happy and secure and without any major regrets, but I have sometimes had to crane my neck around for other long-term models of how to be a woman who lives, as it were, off-road. This is all a long-winded way of saying that, like so many people before me, in my 20s I went through a Joni Mitchell phase.

Those many people before me, of course, are not just women. Mitchell gestures towards the elsewhere at all kinds of angles, which is intrinsic to her mass popularity. No matter how you look at her, she provides an alternative to something. One example of many: Two years ago, Dan Bejar, the eccentrically talented songwriter of Destroyer and the New Pornographers, was asked by the music site The Quietus to pick and discuss his 12 favorite albums for their “Baker’s Dozen” feature. His first six choices were, in order, Court and Spark, Hejira, The Hissing of Summer Lawns, Don Juan’s Reckless Daughter, Mingus, and Turbulent Indigo. (Blue, he actually considered too canonical to mention: “It’s so etched in stone that I wouldn’t know how to draw from it.”) The interviewer took the bait and asked him why so much Joni Mitchell. Bejar, then 42, said of her freewheeling, jazz-embracing late period in particular, “Listening to [her] I realized that this is a path I could follow, which I always search for, because at this point in my career, in terms of pop music years, I think I’m supposed to die. So when you find a different path that you can follow, it’s more exciting than the idea that you should just die.” (...)

By the mid-’70s, Mitchell had developed a disdain for much of the pop music world; in the ’80s, it curdled into outright disgust. There’s a hilariously biting scene in Yaffe’s book chronicling the backstage drama at a 1990 charity concert celebrating the fall of the Berlin Wall. The rock stars of the day were constantly falling short of her expectations. Cyndi Lauper was acting “childish,” Bryan Adams was rude to his girlfriend in front of Mitchell, Sinead O’Connor (“a passionate little singer”) looked down at her feet rather than making eye contact. “The childish competitiveness, the lack of professionalism—I don’t have a peer group,” she told Yaffe, recalling this era. “All of them, these spoiled children. It’s not what I would have expected in an artistic community.”

And so—to the frustration of some of her fans—as the years went on she sought out her artistic equals in the jazz world. One of her first collaborators to truly challenge her was the electric bass iconoclast Jaco Pastorius; they started working together on Hejira. “Nearly every bass player that I tried did the same thing. They would put up a dark picket fence through my music,” she recalls in Woman of Heart and Mind. “Finally, one guy said to me, ‘Joni, you’ve gotta play with jazz musicians.’” Eventually, in 1978, she was summoned for her most daunting collaboration yet, working with the legendary Charles Mingus on his final album, while he was dying of ALS. Though plenty of jazz purists scoffed at Mitchell’s involvement, she earned the admiration of her brilliant, cantankerous collaborator. (He called her, affectionately, “motherfucker.”) As her music grew less commercial, it sometimes felt—for better and worse—that she was simply sending out dog whistles to other musicians as accomplished as herself. The very first time she met Mingus, he said to her, “The strings on ‘Paprika Plains’—they’re out of tune.” Far from offended, she was delighted—the strings were out of tune, and “she wished someone else had noticed.” Only a fellow genius would have noticed, and introduced himself like that.

by Lindsay Zoladz, The Ringer | Read more:
Image: Getty

Pablo Picasso, Le peintre et son modèle IV, 15 November 1964

An Interview with MacArthur ‘Genius’ Viet Thanh Nguyen

Viet Thanh Nguyen had just gotten back from a summer in Paris when he received an unexpected phone call from a Chicago number. He didn’t recognize the caller, so he let it ring. Out of curiosity, he texted back, “Who is this?”

The number replied, “It’s the MacArthur Foundation.”

“Oh,” Nguyen thought. “I should call these people back right away.”

Nguyen managed to stand for the first few seconds of the call, but soon had to sit down. He’d just won $625,000, no strings attached, as an unrestricted investment in his creative potential.

Eighteen months earlier, Nguyen had received another life-altering phone call when he won the 2016 Pulitzer Prize for Fiction for his debut novel, The Sympathizer. Since the book’s publication in April 2015, Nguyen’s been no stranger to worldwide recognition: He’s also received a Guggenheim fellowship, the Dayton Literary Peace Prize, the First Novel Prize from the Center for Fiction, the Carnegie Medal for Excellence in Fiction, and countless others.

According to the MacArthur Selection Committee, “Nguyen’s body of work not only offers insight into the experiences of refugees past and present, but also poses profound questions about how we might more accurately and conscientiously portray victims and adversaries of other wars.” After writing in obscurity for more than a decade to honor his and others’ war stories — and all refugee stories, Nguyen insists, are war stories — he will now have even more resources to help tilt the world in a more peaceful direction.

I spoke with Nguyen the day after the MacArthur Foundation announced him, along with 23 other extraordinary recipients, as a 2017 MacArthur Fellow. (...)

How have you been feeling about defining the purpose of this monetary windfall — a definition the prize leaves entirely up to you?

This has been a crazy 18 months for me, ever since I won the Pulitzer. The Pulitzer had already started transforming my life economically. This award is just… it’s more than I actually need. (...)

You’ve spoken with Josephine Livingstone at The New Republic about how, up until you’d secured tenure in 2003, you’d lived your life in a “very pragmatic, very strategic way,” which “required a lot of repression.” Does the MacArthur put that phase of your life into a different context? Was it a necessary period of incubation for the scope of your work?

It was definitely necessary, just given my understanding of my own character. I’m like my parents — they are, at the same time, both risk capable and risk averse. They mostly want everything to be stable, to be safe, but when times require it, they will do tremendous things. They were refugees twice. They built their own fortunes twice. So they know when to take risks. But after those risks are successfully accomplished, they just want everything to be safe. I’m like that.

I wanted to make sure that my life was secure by being a professor, then I could maybe be more explicit about the risks that I was taking. I was already writing fiction before that, but no one cared, no one knew. After 10 years more people knew, but no one cared! So that was a very long period of risk-taking after tenure. That was like 12 years before The Sympathizer got published.

I think that with the MacArthur now, at this point, it’s a confirmation of the fact that the risk that I was taking paid off. It also just really adds to the pressure! No one knew who I was when I was writing The Sympathizer, which was very liberating. It’s one thing to take risks when no one knows who you are. It’s a different matter to take risks when expectations are heightened all around.

You wrote The Sympathizer as a comedic spy novel to draw readers into engaging with the history of the Vietnam War. Why was it so crucial, in order to create that engagement, to disguise theory and philosophy as plot and action?

I have spent a lot of time as a scholar thinking about the question of where art and politics intersect. It seemed to me that you would find this intersection happening very regularly in countries outside of the United States. It’s within the United States that there is great reluctance, generally, to engage in this. It’s a legacy of the fact that this is a very anti-communist country, with very stereotypical perceptions of what the intersection of art and politics can look like.

Coming at this as a writer who did not go through an MFA program, where I think it is discouraged to talk about politics explicitly, I felt that I could draw from what I knew as a scholar to try to do this novel. I had to create a character and a plot where it would be viable dramatically for him to think about these theoretical and political issues. The novel is implicitly a rejoinder to many of the works dealing with the Vietnam War — and, in general, to American literature — that shy away from that.

I did not want to write a realistic account of the Vietnam War because so many of those have already been written. I didn’t have anything new to add to that. I felt like we had to get beyond realism to talk about this war and its consequences.

Humor is also a very important strategy for when we are dealing with really horrific situations. It just helps to alleviate the mood, to make us more capable of bearing the burden of what we are witnessing. Soldiers who go through war find humor, terrible humor, in the most terrible things.

by Catherine Cusick and Viet Thanh Nguyen, Longreads | Read more:
Image: via
[ed. I put off reading The Sympathizer for a long time even though it won the Pulitzer Prize (pretty much my gold standard for fiction recommendations) because... Vietnam. Who wants to read another book about that? But it was superb, and not anything like I expected. Here's an excerpt. I'm also taken by Mr. Nguyen's description of himself and his parents as "risk capable and risk averse"; a good way to be, I think, and a good description of my own approach to life (though never articulated in such a concise way).]

A Washing Machine That Tells the Future

The Smoot-Hawley Tariff Act of 1930 is perhaps the single most frequently mocked act of Congress in history. It sparked a trade war in the early days of the Great Depression, and has become shorthand for self-destructive protectionism. So it’s surprising that, while the law’s tariffs have been largely eliminated, some of its absurd provisions still hold. The other week, the American appliance-maker Whirlpool successfully employed a 1974 amendment to the act to persuade the United States government to impose as yet unidentified protections against its top Korean competitors, LG and Samsung. Whirlpool’s official argument was that these firms have been bolstering their market share by offering fancy appliances at low prices. In other words, Whirlpool is getting beat and wants the government to help it win.

This decision is more than a throwback. It shows that Whirlpool and its supporters in government have failed to understand the shift occurring in the business world as a result of the so-called Internet of Things—appliances that send and receive data. It’s easy to miss the magnitude of the change, since many of these Things seem like mere gimmicks. Have you ever wanted to change the water temperature in the middle of a wash cycle when you’re not at home, or get second-by-second reports on precisely how much energy your dryer is consuming? Probably not, but now you can. And it’s not just washing machines. There are at least two “smart” toasters and any number of Wi-Fi-connected coffeemakers, refrigerators, ovens, dishwashers, and garbage cans, not to mention light bulbs, sex toys, toilets, pet feeders, and a children’s thermos.

But this is just the early, land-rush phase of the Internet of Things, comparable to the first Internet land rush, in the late nineties. That era gave us notorious failures—cue obligatory mention of—but it also gave us Amazon, the company that, more than any other, suggests how things will play out. For most of its existence, Amazon has made little or no profit. In the early days, it was often ridiculed for this, but the company’s managers and investors quickly realized that its most valuable asset was not individual sales but data—its knowledge about its loyal, habit-driven customer base. Amazon doesn’t evaluate customers by the last purchase they made; instead, customers have a lifetime value, a prediction of how much money each one will spend in the years to come. Amazon can calculate this with increasing accuracy. Already, it likely knows which books you read, which movies you watch, what data you store, and what food you eat. And since the introduction of Alexa, the voice-operated device, Amazon has been learning when some customers wake up, go to work, listen to the news, play with their kids, and go to sleep.

This is the radical implication of the Internet of Things—a fundamental shift in the relationship between customers and companies. In the old days, you might buy a washing machine or a refrigerator once a decade or so. Appliance-makers are built to profit from that one, rare purchase, focussing their marketing, customer research, and internal financial analysis on brief, sporadic, high-stakes interactions. The fact that you bought a particular company’s stove five years ago has no value today. But, when an appliance is sending a constant stream of data back to its maker, that company has continuous relationships with the owners of its products, and can find all sorts of ways to make money from those relationships. If a company knows, years after you bought its stove, exactly how often you cook, what you cook, when you shop, and what you watch (on a stove-top screen) while you cook, it can continuously monetize your relationship: selling you recipe subscriptions, maybe, or getting a cut of your food orders. Appliances now order their own supplies when they are about to run out. My printer orders its own ink; I assume my next fridge will order milk when I’m running low.

by Adam Davidson, New Yorker |  Read more:
Image: New Yorker, R. Kikuo Johnson

Monday, October 16, 2017

The Decline of the Midwest's Public Universities Threatens to Wreck Its Most Vibrant Economies

Four floors above a dull cinder-block lobby in a nondescript building at the Ohio State University, the doors of a slow-moving elevator open on an unexpectedly futuristic 10,000-square-foot laboratory bristling with technology. It’s a reveal reminiscent of a James Bond movie. In fact, the researchers who run this year-old, $750,000 lab at OSU’s Spine Research Institute resort often to Hollywood comparisons.

Thin beams of blue light shoot from 36 of the same kind of infrared motion cameras used to create lifelike characters for films like Avatar. In this case, the researchers are studying the movements of a volunteer fitted with sensors that track his skeleton and muscles as he bends and lifts. Among other things, they say, their work could lead to the kind of robotic exoskeletons imagined in the movie Aliens.

The cutting-edge research here combines the expertise of the university’s medical and engineering faculties to study something decidedly commonplace: back pain, which affects as many as eight out of every 10 Americans, accounts for more than 100 million annual lost workdays in the United States alone, and has accelerated the opioid addiction crisis.

“The growth of the technology around us has become so familiar that we don’t question where it comes from,” says Bruce McPheron, an entomologist and the university’s executive vice president and provost, looking on. “And where it happens consistently is at a university.”

But university research is in trouble, and so is an economy more dependent on it than many people understand. Federal funding for basic research—more than half of it conducted on university campuses like this one—has effectively declined since 2008, failing to keep pace with inflation. This is before taking into account Trump administration proposals to slash the National Science Foundation (NSF) and National Institutes of Health (NIH) budgets by billions of dollars more.

Trump’s cuts would affect all research universities, but not equally. The problem is more pronounced at public universities than private ones, and especially at public institutions in the Midwest, which have historically conducted some of the nation’s most important research. These schools are desperately needed to diversify economies that rely disproportionately on manufacturing and agriculture and lack the wealthy private institutions that fuel the knowledge industries found in Silicon Valley or along Boston’s 128/I-95 corridor. Yet many flagship Midwestern research universities are being weakened by deep state budget cuts. Threats to pensions (in Illinois) and tenure (in Wisconsin) portend an exodus of faculty and their all-important research funding, and have already resulted in a frenzy of poaching by better-funded and higher-paying private institutions, industry, and international competitors.

While private institutions are better shielded from funding cuts by huge endowments, Midwestern public universities have much thinner buffers. The endowments of the universities of Iowa, Wisconsin, and Illinois and Ohio State, which together enroll nearly 190,000 students, add up to about $11 billion—less than a third of Harvard’s $37.6 billion. Together, Harvard, MIT, and Stanford, which enroll about 50,000 students combined, have more than $73 billion in the bank to help during lean times. They also have robust revenues from high tuitions, wealthy alumni donors, strong credit, and other support to fall back on. Compare that to the public university system in Illinois, which has cut its higher-education budget so deeply that Moody’s downgraded seven universities, including five to junk-bond status.

This ominous reality could widen regional inequality, as brainpower, talent, and jobs leave the Midwest and the Rust Belt—where existing economic decline may have contributed to the decisive shift of voters toward Donald Trump—for places with well-endowed private and better-funded public universities. Already, some Midwestern universities have had to spend millions from their battered budgets to hang on to research faculty being lured away by wealthier schools. A handful of faculty have already left, taking with them most if not all of their outside funding.

“We’re in the early stages of the stratification of public research universities,” said Dan Reed, the vice president for research and economic development at the University of Iowa. “The good ones will remain competitive. The rest may decline.” Those include the major public universities established since the 1860s, when a federal grant set aside land for them in every state. “We spent 150-plus years building a public higher-education system that was the envy of the world,” said Reed, who got his graduate degrees at Purdue, in Indiana. “And we could in a decade do so much damage that it could take us 30 years to recover.”

That land grant was called the Morrill Act. Abraham Lincoln signed it into law during the depths of the Civil War, in 1862, resulting in the establishment or major expansion of, among others, Purdue, the University of Illinois at Urbana-Champaign, the University of Minnesota, the University of Missouri, and Ohio State. Along with many other major public universities in the Midwest, each would go on to have an outsized impact.

It was at Illinois that the first modern internet browser was developed, along with other advances in computer science and technology including early versions of instant messaging, multiplayer games, and touch screens. Today, researchers there are working on a new treatment for brain cancer, a way to boost photosynthesis to increase crop yields, and a solution to the growing problem of antibiotic resistance.

Scientists at the University of Minnesota created the precursor to the World Wide Web, performed the first open-heart surgery, and developed Gore-Tex waterproof fabric. The University of Wisconsin is where human embryonic stem cells first were isolated, and it has since become a center of stem-cell research. Researchers there are trying to develop new drugs to fight the Ebola and West Nile viruses. The University of Iowa’s Virtual Soldier Research Program uses human modeling and simulation to design new military equipment, and its National Advanced Driving Simulator is heavily involved in driverless-vehicle research.

Universities perform more than half of all basic research in America, and public research universities in particular account for two-thirds of the $63.7 billion allocated annually by the federal government for research. That spending, in turn, produces more than 2,600 patents and 400 companies a year, according to the Association of University Technology Managers.

The impact on local economies is hard to miss. In places like Columbus, Ohio, and Columbia, Missouri, the big research universities are among the most important institutions in town. The checkerboard patchwork of farms on the approach to Port Columbus International Airport gives way to office buildings housing high-tech companies spun off by Ohio State and the affluent suburbs where their employees live. The real-estate company CBRE ranks the city as the country’s top small market for attracting tech talent.

In a study by the Ohio State economist Bruce Weinberg of eight Big Ten universities—including Indiana, Michigan, Minnesota, Purdue, and Ohio State—more than one in five graduate students who worked on sponsored research stayed in the state where they attended school, 13 percent of them within 50 miles of the campus. That may not sound like a lot—and, indeed, the exodus of highly educated people is a serious problem—but it’s significant when you consider that the jobs for these students exist in a national labor market. People with engineering Ph.D.s from Minnesota could take their talents anywhere. If even 20 percent stick around, that’s a big win for states that can’t expect an influx of educated elites from other parts of the country. These graduates provide an educated workforce that employers need, create jobs themselves by starting their own businesses, and pay taxes.

These universities have served as bulwarks against a decades-long trend of economic activity fleeing smaller cities and the center of the country for the coasts. Since the 1980s, deregulation and corporate consolidation have led to a drastic hollowing out of the local industries that once sustained heartland cities. But a university can’t just be picked up and moved from Madison to New York in the way a bank, an insurance company, or even a factory can be.

“What difference does having a major research university in a place like Wisconsin make?” said Rebecca Blank, the chancellor of the University of Wisconsin. “It’s the future of the state.” If Blank is right, then current trends put that future in doubt for much of the Midwest. Many of these same universities have suffered some of the nation’s deepest cuts to public higher education. Illinois reduced per-student spending by an inflation-adjusted 54 percent between 2008 and last year, according to the Center on Budget and Policy Priorities. The figure was 22 percent in Iowa and Missouri, 21 percent in Michigan, 15 percent in Minnesota and Ohio, and 6 percent in Indiana. While higher-education funding increased last year in 38 states, Scott Walker’s budget for 2015 through 2017 cut another $250 million from the University of Wisconsin system. The University of Iowa recently had its state appropriation cut by 6 percent, including an unexpected $9 million in the middle of the fiscal year. (...)

These financial woes would only be made worse by Trump’s proposed budget, which would cut funding by between 11 percent and 18 percent for the federal agencies that provide the bulk of government support for university research. Congress has so far resisted this call, instead adding $2 billion to the NIH and $8.7 million to the NSF in the five-month budget extension approved in April. But budget cuts remain a threat. So does a Trump budget proposal to eliminate so-called indirect cost payments—billions of dollars’ worth of federal reimbursements for overhead such as lab space and support staff to conduct the research. (The House Republicans’ 2018 budget plan rejects that idea, at least for medical research.)

Private universities with big endowments and wealthy donors may be able to weather the storm. (So, too, may the handful of public universities, like the University of Michigan and the University of Virginia, that receive far more private than public funding.) But most public research institutions won’t.

by Jon Marcus, The Atlantic |  Read more:
Image: David Mercer

Ana de Armas - Blade Runner 2049


[ed. I finally got to see Blade Runner 2049 last night and left halfway through. Too boring for me (watching Ryan Gosling act as an emotionless replicant for over an hour? No thanks). But I could have watched Ana de Armas forever (Joi, a holographic computer program and the main character's girlfriend). What a beauty, and what a beautiful performance. Reminds me of someone else who very nearly stole the show in a big-screen blockbuster.]

On the Political Uses of Evil

It’s strange to think of evil getting an unduly bad rap, and yet it has. “We should be extremely suspicious of the post-attack rhetoric of evil, whose modern incarnation dates to the evening of September 11, 2001,” brilliant left-wing writer Meagan Day noted, astutely, on Twitter after commenters and politicians rushed to describe the Las Vegas slaughter as such. “Evil is morally urgent but structurally vague,” Day explained, and “Can be used to abdicate responsibility (weapons regulation) & grant special powers (wage war).”

Day is right to observe that the political use of evil, especially in the post-9/11 world, seems intentionally vague and directionless. If something or someone is simply evil, that line of reasoning goes, nothing can be done about them but total defeat; likewise, a good actor can hardly be faulted for whatever measures they take in the process: we’re dealing with evil, after all. President George W. Bush dubbed Iraq part of an “axis of evil” in order to pursue conflict in the region, and President Barack Obama linked the necessity of his drone policy with “the evil that lies in the hearts of some human beings.” In each case, the attribution of evil to nameless persons or governments describes little but justifies plenty.

But the trouble with contemporary political uses of evil isn’t the concept itself, but rather the intentional vagueness thrust upon it by an era without any well-defined theory of the good. Defined well and deployed clearly, evil is an illuminating concept with useful political punch, but it can’t function, either personally or politically, without a correspondingly useful theory of the good.

It’s easy to see why evil is often considered amorphous to the point of uselessness as a concept; it’s both applied so promiscuously and dishonestly as to drain it of meaning, and it genuinely has many expressions. But these difficulties (perhaps paradoxically) point to a fairly useful definition of evil — one proposed by the ancient Christian philosopher Augustine of Hippo, and advanced throughout the medieval period: evil isn’t any existing thing, but rather negation itself. Or, as contemporary author Terry Eagleton put it in his book on the subject: it is “supremely pointless,” the utter perversion of meaning, creation, existence. It is meaningless, destructive, chaotic nothingness.

It’s less abstract than it sounds — and it can actually help us shape up our understanding of the good. When people pointlessly destroy themselves and one another, that’s evil. When people inflict harm, rob one another of peace and create disorder, that’s evil. Suppose you grant me that: Now how, you’d rightly ask, do we make political use of that kind of definition?

We do so by recognizing that evil is a shadow that falls where good is negated. If we do evil when we turn away from our obligations to one another via acts of destruction, that implies we do good when we lean into those obligations. Leaving one another alone isn’t enough: It might get you out of the ‘evil’ zone, but it doesn’t get you into the ‘good’ zone. If evil is abandonment, indifference can’t be good, even if it’s somewhere closer to neutral.

Rather, defining acts as evil helps us imagine what the good really is. If mass killings are evil, which they are, then we don’t merely have an obligation to let each other be, but to actively support one another in living life — that is, to help others acquire and use the necessities of life, and to participate in society with them. In other words, the opposite of evil mass shootings isn’t the kind of atomized, lonely society we’re currently engaged in cultivating (at our own risk), but a society full of engagement, where we work to build programs, policies and cultural norms that promote life and quality of life.

by Elizabeth Bruenig, Medium |  Read more:
Image: A 15th century depiction of Satan, uncredited

A Flawed Character

In recent years literary studies of the Bible have explored all kinds of topics -- save God, the chief protagonist of the narrative. That not insignificant subject has now received its due, a tour de force called "God: A Biography," by Jack Miles.

If some people may find a biography of God an irreverent enterprise, Mr. Miles is not one of them. He says that over centuries the Bible has been the fundamental document for both Jews and Christians. Its stories and characters have permeated the whole of Western culture. To track, then, the stories to their central character is in no way disrespectful. But Mr. Miles does engage in occasional provocation. At the outset he remarks that "God is no saint, strange to say." As the reader will find out, that is true enough, and the fact is not so strange.

Mr. Miles treats the Bible as a literary work. To produce a biography of a literary character is a complicated undertaking, and so in a sometimes amusing introductory chapter he guides the reader through the contrast in approaches taken by scholars and critics. With a light touch he describes his own approach as naive, seeing God as a real person, much the way a theatergoer thinks of Hamlet or a reader perceives Don Quixote. But he also knows there is a difference. "No character . . . on stage, page or screen," he says, "has ever had the reception that God has had." (...)

Who is the literary character called God? Simply put, a male with multiple personalities, which emerge gradually. At the beginning God creates the world in order to make a self-image, an indication that He does not fully understand who He is but discovers Himself through interaction with humanity. Immediately the focus narrows to the man and the woman in the garden. When they disobey their creator, He responds vindictively and so reveals His own inner conflict. Called God in Genesis 1, he is lofty, powerful and bountiful; called Lord God in Genesis 2 and 3, he is intimate and volatile. Ambivalent about His image, the creator becomes the destroyer: the flood descends. A radical fault runs through the character of God.

Still other personalities surface as the cosmic God becomes the personal deity of Abraham and the friend of the family for Jacob and Joseph. In the Exodus story he shows himself to be a warrior and soon thereafter a lawgiver and liege. This mixture of identities represents a fusion of selected traits gathered from other deities in the ancient world (and teased out of the biblical texts by a generation of historical scholars). A grand speech by Moses in Deuteronomy synthesizes these conflicting personalities to produce a relatively stable identity for God by the conclusion of the Torah.

Within this identity elements of divine self-discovery continue to develop. The first section of the Nebi'im, from Joshua through Kings, turns the liberator of Exodus into the conqueror of Canaan, the friend of the family into the "father" of Solomon, and the lawgiver of Israel into the arbiter of international relations.

But the ending of Kings threatens to terminate God's life. It reports the destruction of the people with whom He has been working out the divine image. If His biography is to continue beyond their demise, God must change, and the prophetic books following Kings record the transformation. In them the conflicted character God carries on a life-or-death struggle to reassemble the unstable elements of His personality. In the first 39 chapters of Isaiah He tries the role of executioner, but He also holds up a vision of a peaceable kingdom. Then, in the next 27 chapters of Isaiah, He forgoes destruction and insists that mystery, not power, is the source of his holiness. (...)

God begins to withdraw in the last division of the Tanakh, Mr. Miles says. For the most part, testimony about Him replaces speech by Him. Psalms perceives Him primarily as counselor. Proverbs treats Him like a picture frame; He is marginal to the content of the book. But in Job His destructive impulse comes fully into consciousness. The climax happens through the man Job, who, as the perfect image of the Creator, exposes the conflicted character of God. The outcome brings about repentance -- not of Job, for he has done no wrong, but of God, who restores good fortune to Job.

After the Book of Job, God never speaks again, though others repeat His speeches and report His miraculous deeds. Two sets of four books each shape these parts of the biography. In the Song of Solomon, God does not appear in the garden of Love. Ruth treats Him as a bystander who does not interact with the human characters. Lamentations waits sadly for this recluse who never comes. And Ecclesiastes declares Him a puzzle of no compelling importance. In literary terms Mr. Miles sees these books, taken together, as a denouement: they let time pass. (...)

At the end, Mr. Miles ponders why the life of the Lord God begins in activity and speech only to close in passivity and silence. Does God's desire for self-knowledge, shown in the creation of humanity in his image, carry the potential for tragedy? Surely the confrontation staged in Job brings God near that reality. But God is rescued. The Song of Solomon changes the subject, thereby sparing the life of God, and subsequent books give Him a different life.

by Phyllis Trible, NY Times | Read more:
Image: God, A Biography/Amazon

Sunday, October 15, 2017

[ed. Dry aku (skipjack tuna), poi, and Primo. All you need for a good party.]

"Let’s just say this: if I were at a high society dinner party, and the Hors d’Oeuvres making their rounds included the likes of Russian Caviar Shots, Kuromaguro Otoro Tartar and Kobe Beef Wontons, and the server told me, “Sir, we’ve got some amazing Dry Aku from Azama’s, fresh Poi and Primo on ice in back. You want some?” Forget that other stuff, I’d say “lead to your master, master!”"

[ed. I wouldn't go that far.]

via: Dry Aku and Poi