Wednesday, September 30, 2015

A History of Everything, Including You


First there was god, or gods, or nothing. Then synthesis, space, the expansion, explosions, implosions, particles, objects, combustion, and fusion. Out of the chaos came order, stars were born and shone and died. Planets rolled across their galaxies on invisible ellipses and the elements combined and became.

Life evolved or was created. Cells trembled, and divided, and gasped and found dry land. Soon they grew legs, and fins, and hands, and antennae, and mouths, and ears, and wings, and eyes. Eyes that opened wide to take all of it in, the creeping, growing, soaring, swimming, crawling, stampeding universe.

Eyes opened and closed and opened again, we called it blinking. Above us shone a star that we called the sun. And we called the ground the earth. So we named everything including ourselves. We were man and woman and when we got lonely we figured out a way to make more of us. We called it sex, and most people enjoyed it. We fell in love. We talked about god and banged stones together, made sparks and called them fire, we got warmer and the food got better.

We got married, we had some children, they cried, and crawled, and grew. One dissected flowers, sometimes eating the petals. Another liked to chase squirrels. We fought wars over money, and honor, and women. We starved ourselves, we hired prostitutes, we purified our water. We compromised, decorated, and became esoteric. One of us stopped breathing and turned blue. Then others. First we covered them with leaves and then we buried them in the ground. We remembered them. We forgot them. We aged.

Our buildings kept getting taller. We hired lawyers and formed councils and left paper trails, we negotiated, we admitted, we got sick, and searched for cures. We invented lipstick, vaccines, Pilates, solar panels, interventions, table manners, firearms, window treatments, therapy, birth control, tailgating, status symbols, palimony, sportsmanship, focus groups, Zoloft, sunscreen, landscaping, Cessnas, fortune cookies, chemotherapy, convenience foods, and computers. We angered militants, and our mothers.

You were born. You learned to walk, and went to school, and played sports, and lost your virginity, and got into a decent college, and majored in psychology, and went to rock shows, and became political, and got drunk, and changed your major to marketing, and wore turtleneck sweaters, and read novels, and volunteered, and went to movies, and developed a taste for blue cheese dressing.

I met you through friends, and didn’t like you at first. The feeling was mutual, but we got used to each other. We had sex for the first time behind an art gallery, standing up and slightly drunk. You held my face in your hands and said that I was beautiful. And you were too. Tall with a streetlight behind you. We went back to your place and listened to the White Album. We ordered in. We fought and made up and got good jobs and got married and bought an apartment and worked out and ate more and talked less. I got depressed. You ignored me. I was sick of you. You drank too much and got careless with money. I slept with my boss. We went into counseling and got a dog. I bought a book of sex positions and we tried the least degrading one, the wheelbarrow. You took flight lessons and subscribed to Rolling Stone. I learned Spanish and started gardening.

We had some children who more or less disappointed us but it might have been our fault. You were too indulgent and I was too critical. We loved them anyway. One of them died before we did, stabbed on the subway. We grieved. We moved. We adopted a cat. The world seemed uncertain, we lived beyond our means. I got judgmental and belligerent, you got confused and easily tired. You ignored me, I was sick of you. We forgave. We remembered. We made cocktails. We got tender. There was that time on the porch when you said, can you believe it?

This was near the end and your hands were trembling. I think you were talking about everything, including us. Did you want me to say it? So it would not be lost? It was too much for me to think about. I could not go back to the beginning. I said, not really. And we watched the sun go down. A dog kept barking in the distance, and you were tired but you smiled and you said, hear that? It’s rough, rough. And we laughed. You were like that.

Now, your question is my project and our house is full of clues. I’m reading old letters and turning over rocks. I bury my face in your sweaters. I study a photograph taken at the beach, the sun in our eyes, and the water behind us. It’s a victory to remember the forgotten picnic basket and your striped beach blanket. It’s a victory to remember how the jellyfish stung you and you ran screaming from the water. It’s a victory to remember treating the wound with meat tenderizer, and you saying, I made it better. I will tell you this, standing on our hill this morning I looked at the land we chose for ourselves, I saw a few green patches, and our sweet little shed, that same dog was barking, a storm was moving in. I did not think of heaven, but I saw that the clouds were beautiful and I watched them cover the sun.

by Jenny Hollowell, YMFY |  Read more:
Image: via:

[ed. My archives told me this was a popular post recently, so here it is again for those who haven't read it. Lovely essay.]

Tuesday, September 29, 2015


Kobayashi Kiyochika (1847 - 1915)
via:

Project Chariot: Nuke Alaska

[ed. I'm sure many are celebrating Royal Dutch Shell's decision to abandon drilling in the Chukchi Sea (myself included having studied the area for oil and gas development more than 35 years ago). But on a rough scale of bad ideas, oil drilling can't quite compare to nuclear excavation.]

Losing Liberty in an Age of Access

A few months before 9/11, when I first moved to downtown Los Angeles, the city’s high rises teemed with lawyers and bankers. The lights stayed on late — a beacon of industriousness. But as I quickly discovered, they rolled up the sidewalks by sundown. No matter how productive and wealthy its workers, downtown was a ghost town. LA’s urban core was no place to raise a family or own a home. With its patchwork of one-way streets and expensive lots, it was hardly even a place to own a car. The boom of the late 1980s and early 1990s that had erected LA’s skyline had not fueled residential growth. Angelenos who wanted to chase the dream of property ownership were effectively chased out of downtown.

But things change. Last month, I moved back to “DTLA,” as it’s now affectionately known. Today, once-forlorn corners boast shiny new bars, restaurants, and high-end stores. The streets are full of foot traffic, fueled by new generations of artisans, artists, and knowledge workers. They work from cafés or rented apartments, attend parties on hotel rooftops, and Uber religiously through town. Yes, there are plenty of dogs. But there are babies and children too. In a little over a decade, downtown’s generational turnover has replaced a faltering economy with a dynamic one.

What happened? Partly, it’s a tale of the magnetic power possessed by entrepreneurs and developers, who often alone enjoy enough social capital to draw friends and associates into risky areas that aren’t yet trendy. Even more, it is a story that is playing out across the country. In an age when ownership meant everything, downtown Los Angeles languished. Today, current tastes and modern technology have made access, not ownership, culturally all-important, and LA’s “historic core” is the hottest neighborhood around. Likewise, from flashy metros like San Francisco to beleaguered cities like Pittsburgh, rising generations are driving economic growth by paying to access experiences instead of buying to own.

Nationwide, the line between downsizing hipsters and upwardly mobile yuppies is blurring — an indication of potent social and economic change. America’s hipsters and yuppies seem to be making property ownership uncool. But they’re just the fashionable, visible tip of a much bigger iceberg.

Rather than a fad, the access economy has emerged organically from the customs and habits of “the cheapest generation” — as it has been dubbed in The Atlantic, the leading magazine tracking upper-middle-class cultural trends. Writers Derek Thompson and Jordan Weissmann recount that, in 2010, Americans aged 21 to 34 “bought just 27 percent of all new vehicles sold in America, down from the peak of 38 percent in 1985.” From 1998 to 2008, the share of teenagers with a driver’s license dropped by more than a fourth. And it isn’t just cars and driving: Thompson and Weissmann cite a 2012 paper written by a Federal Reserve economist showing that the proportion of new young homeowners during the period from 2009 to 2011 was less than half that of a decade earlier. It’s not quite a stampede from ownership, but it’s close.

In part, these changes can be chalked up to the post-Great Recession economy, which has left Millennials facing bleak job prospects while carrying heavy loads of student debt. But those economic conditions have been reinforced by other incentives to create a new way of thinking among Millennials. They are more interested than previous generations in paying to use cars and houses instead of buying them outright. Buying means responsibility and risk. Renting means never being stuck with what you don’t want or can’t afford. It remains to be seen how durable these judgments will be, but they are sharpened by technology and tastes, which affect not just the purchase of big-ticket items like cars and houses but also life’s daily decisions. Ride-sharing apps like Uber and Lyft and car-sharing services like Zipcar are biting into car sales. Vacation-home apps like Airbnb have become virtual rent-sharing apps. There’s something powerfully convenient about the logic of choosing to access stuff instead of owning it. Its applications are limited only by the imagination.

That is why we are witnessing more than just a minor shift in the way Americans do business. It is a transformation. Commerce is being remade in the image of a new age. Once associated with ubiquitous private property, capitalism is becoming a game of renting access to goods and services, not purchasing them for possession. (...)

The Hinge of Technology

We are now on different cultural ground than Belloc, Reich, Friedman, and even Pipes had imagined. And unfortunately for today’s conservatives and libertarians, almost all of whom are still persuaded that freedom rests upon ownership, that idea is directly challenged by the new logic of possession and use woven into the origins of digital commerce.

On the one hand, we have become accustomed, when installing software — computer programs, smartphone apps, video games, etc. — to clicking our blind assent to so-called “end-user license agreements,” which function roughly like government largesse in their lopsidedness: if you want the goods, you agree to the terms, narrowed and capricious as they may be or may one day become. Recently, what has been good for the software goose has become good for the hardware gander, with many of our devices, like our iPhones, being “owned” only in a sense dramatically attenuated by the terms of the contracts we sign when we pay for them. Not only have tech companies expanded the logic of licensing to the four corners of their market, but that full-bore advance has marched apace with a growing public belief that these terms are reasonable and commonsensical.

On the other hand, our shifting sensibilities have also helped hasten the offloading of ownership by popularizing services where once only goods would do. “Service” was once characteristically an arrangement that kept owned goods in working condition over years, perhaps decades; then, after an era of “planned obsolescence,” wherein products grew cheaper and more disposable, the current era of services arose. Today, technology has awakened us to the experiential advantages of short-term rentals over vacation homes, and of Uber (“everyone’s private driver”) over flashy cars in the driveway. Despite the collapse of newspapers, subscriptions are booming — to everything from newsletters, podcasts, and on-demand video to short-term goods like shaving kits and steaks. The AMC theater chain recently announced it will begin experimenting with a flat monthly rate for an unlimited number of movies, in effect bringing the Netflix subscription model from the small screen to the big. Evanescence has become a cultural feature, not a bug. Snapchat, the app whose users’ pictures and videos disappear after viewing, brings a level of immediacy and impact to the social Internet akin to attending live sports or music events. Not coincidentally, sports and music figure significantly in users’ “snaps.”

Importantly, however, at a time when Facebook’s Mark Zuckerberg has deliberately eliminated clothing from his list of cognitive cares by adopting a bland uniform of hoodies and casual wear, elites are using their massive advantage in purchasing power in a manner unlike the industrial barons of old. Although the ethic of conspicuous consumption and status wealth is still on display on Wall Street, the future appears to belong to a new generation of the independently monied and independently minded, for whom ownership functions primarily as a means to the privileges of experiential choice.

The upshot of these marked changes in the culture of commerce creates problems for partisans of liberty, problems pointed in two directions. In one respect, contemporary culture is too Lockean, defending special property rights at the expense of a robust, general conception of them. In other respects, it is not Lockean enough. Despite the vogue for experience, too much of the propertied elite embraces a system of political patronage that further concentrates property at freedom’s expense. The rise of the sharing economy has shifted massive sums toward innovators whose financial success has enabled the rise of what Noam Scheiber, in an influential New Republic essay on Obama consigliere Valerie Jarrett, pointedly termed “boardroom liberalism”: “it is a view from on high,” he wrote — “one that presumes a dominant role for large institutions like corporations and a wisdom on the part of elites. It believes that the world works best when these elites use their power magnanimously, not when they’re forced to share it. The picture of the boardroom liberal is a corporate CEO handing a refrigerator-sized check to the head of a charity at a celebrity golf tournament. All the better if they’re surrounded by minority children and struggling moms.” Indeed, Silicon Valley has shown itself to be comfortable with influential pro-corporate operators of both parties. Meanwhile, more broadly, the affinity for ownership that arises from a proverbial “hard day’s work” is in decline among rising generations — not so much because they are lazy, but because, increasingly, the satisfaction they derive from work is in the access to experience it unlocks. Plus, even many younger Americans who sense the hollowness and corruption of materialistic patronage prefer to focus self-interestedly on pursuing their alternate path, not fighting against the subsidized concentration of property. In this way, the relationship between ownership and freedom is eroded at both ends.

New Economy, New Politics

Rather than looking for answers among intellectual historians, perhaps the right should now look to the futurists. Indeed, some of today’s best futurists help provide a key insight: the transformation in how we do business involves a wholesale rejection of the social structure of the market.

To be sure, this kind of futurism is very much in the air. Capitalists and free-marketeers concerned to keep the wheels of productivity humming have clued in, advocating for a consumerism of experience. American religion, so often animated by the hope of reconciling material and spiritual plenty, has a stake in the pitch as well. Academic studies “proving” that experiences conduce more to happiness than property trickle down into the public mind by way of reports like James Hamblin’s recent Atlantic article summarizing the science: “Experiential purchases like trips, concerts, movies, et cetera, tend to trump material purchases because the utility of buying anything really starts accruing before you buy it.” That’s because, one hypothesis runs, “you can imagine all sorts of possibilities for what an experience is going to be.” The alternative? “With a material possession, you kind of know what you’re going to get.” Under the banner of possibility, the idea of ownership is reconfigured as an obstacle to opportunity.

Conservatives have gotten in on the act, without undue ideological strain. In a New York Times column entitled “Abundance Without Attachment,” American Enterprise Institute president Arthur Brooks advises that America surmount the “Christmas Conundrum” of gift-grubbing by pursuing abundance but avoiding attachment. “First, collect experiences, not things,” Brooks writes with Emersonian heft. Americans are apt to lower their spirits in the “dogged pursuit of practicality and usefulness at all costs.” As Aristotle knew, and Brooks counsels, experience affords knowledge of that which is “admirable, difficult, and divine, but useless.” The economy of experience, intimates Brooks, at last achieves the American conservative’s dream: lighting the denizens of democracy with an aristocratic passion.

Gone is the ascetic, renunciatory conservatism of midcentury theorists like Christopher Lasch, or Philip Rieff, for whom “experience is a swindle; the experienced know that much.” Rieff, a nearly anti-political sociologist, associated the culture of experience with analogue, not digital, technologies, such as psychotherapy. Indeed, Rieff wrote, “the secret of all secrets” and “interpretation of all interpretations” taught by Freud was “not to attach oneself exclusively or too passionately to any one particular meaning or object.” Or, not so covertly, to any particular institution or person — a direct attack on traditional conservatism if ever there was one.

And so, as the cultural right has struggled to choose between attitudes toward attachment, the economic and political landscape has shifted decisively underfoot. At the turn of this century, one of our more idiosyncratic futurists, Jeremy Rifkin, had already raised the point, tying cultural and technological change together to account for our spirited turn against ownership. He argued that markets, which once drew people to mingle face to face at specific sites, have been replaced by networks, which disperse us as widely as our transactions. For Rifkin, and some others among the futurists, the eclipse of the market is the hallmark of a new economic — and political — age.

by James Poulos, The New Atlantis |  Read more:
Image: Flickr Ted Eytan (CC)

Side Boob and Insensibility

The family of Phaeton had long been settled in London’s Canary Wharf. The topiary of their hedge funds was in splendid order, and, for many generations, they had lived in so respectable a manner, as to engage the general good opinion of their surrounding acquaintance. Though an indulgent and obliging father, Sir Thomas Phaeton was well aware that his daughters, Elinor and Marianne, were two of the silliest girls in England. While their contemporaries were scuttling up trees to protest attacks on the environment, or making their insouciant way into comfy corners of corrupt corporations, Sir Thomas’s offspring were posing for selfies, trying to get an audition for Big Brother or The Apprentice, tweeting, twerking, tweezing, tattooing, drinking vodka-laced frappuccinos, and watching Danish TV series about murdered women. They could neither boil an egg nor butter up a boss. Their education was minimal, their aspirations absurd, their spats legendary.

It had occurred to Sir Thomas on occasion that their expensive schooling, at Bedales and Benenden, had been insufficient to instruct them in the intricacies of adult life. To redress these deficits, he sometimes made an aggravating effort to persuade his daughters to read a book. He himself favored the novels of the long eighteenth century. But they refused even self-help books: they needed no “help.” They preferred more immediate sources of merriment and deviltry, and spent their days (and nights) with models and rock stars. The Phaeton girls could have been models themselves, had they displayed more passivity, more poise, and more pouts; and they would have excelled at the guitar, had they ever learnt.

Following a tempestuous decade of marriage, it was noted that Lady Phaeton now lived elsewhere. But it was a surprise to many when she was discovered subsisting amongst the glitterati to be found, in decreasing numbers and increasing decrepitude, in Biarritz (which, to her daughters, seemed horrifically uncool). She left in her place a widowed sister, though Aunt Norris had little more interest than their mother in tending to either Sir Thomas or the girls, who, in their turn, ignored their aunt whenever possible. This left Sir Thomas in the position of sole protector of the two flibbertigibbets, who nonetheless could charm him, when they applied themselves to the task. Why would anyone wish to harm these beatific beings, Sir Thomas wondered jovially, as they spooned lobster pâté onto more and more crackers for him, in the hopes of a handout.

The girls enlarged their set of acquaintances to include stand-up comedians with god complexes, VIPs at the loucher end of the spectrum, humble sycophants, newspaper magnates, Conservative politicians, and aristocratic wannabes. But the sisters had their enemies too. Paparazzi stirred into action whenever they left their three-story penthouse (adjoining the equally well-proportioned London residence which their father shared, resignedly, with Aunt Norris). The object of the paparazzi’s assiduity was to get a photo of those two zany Phaeton chicks looking zany.

Sir Thomas was forced to await, on tenterhooks, the inevitable slaughter, by media, of his darlings; but when it finally came, it was an embarrassment, not just to him, or to Marianne and Elinor, but to the country at large.

The Phaeton girls had successfully evaded censure for two, three, perhaps four years of high living. Despite trashing every nightclub in the British Isles and beyond, slurring their speech on talk shows, and shoplifting heritage carrots from Harrods’ Food Hall, the worst of the crimes of which Marianne and Elinor had yet been accused were cellulite sins, muffin-top miseries, Chihuahua cruelties, and occasionally going about color-uncoordinated. The ups and downs of their love lives had been finely milled for scandal, but none could be found: their boyfriends were all rotters, to a man—but so were everyone else’s. (In a society in which just about everything is ill judged, it can be hard to find the right way to go wrong.) Ominously, though, as Sir Thomas would later recall to his chagrin, there had once been a curious accusation of “cleavage overload” hurled at his daughters, which might have served as a warning of the imminent debacle. Both girls had laughed it off, however, ridiculing the notion that anyone could ever get tired of breasts.

But finally, there transpired the biggest sartorial transgression currently known to humankind. England, a nation already famed for sexual confusion, was suddenly saturated with disturbing photographic evidence of sleaze. The center of the controversy was Marianne, as Sir Thomas might have guessed it would be—Marianne, who had always had the least fashion sense of the two (though neither daughter could ever have been said to dress sensibly). Her crime? The exposure of a “side boob.”

by Alexander McCall Smith, The Baffler | Read more:
Image: Imgur

It’s Sleazy, It’s Totally Illegal, and Yet It Could Become the Future of Retirement

Over 100 years ago in America — before Social Security, before IRAs, corporate pensions and 401(k)s — there was a ludicrously popular (and somewhat sleazy) retirement scheme called the tontine.

At their peak, around the turn of the century, tontines represented nearly two-thirds of the American insurance market, holding about 7.5 percent of national wealth. It’s estimated that by 1905, there were 9 million tontine policies active in a nation of only 18 million households. Tontines became so popular that historians credit them with single-handedly underwriting the ascendance of the American insurance industry.

The downfall of the tontine was equally dramatic. Not long after 1900, a spectacular set of scandals wiped the tontine from the nation’s consciousness. To this day, tontines remain outlawed, and their name is synonymous with greed and corruption. Their memory lives on mostly in fiction, where they invariably propel some murderous plot. (There’s even a "Simpsons" episode in this genre.)

Tontines, you see, operate on a morbid principle: You buy into a tontine alongside many other investors. The entire group is paid at regular intervals. The key twist: As your fellow investors die, their share of the payout gets redistributed to the remaining survivors.

In a tontine, the longer you live, the larger your profits — but you are profiting precisely off other people’s deaths. Even in their heyday, tontines were regarded as somewhat repugnant for this reason.
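
[ed. The arithmetic behind that "key twist" is easy to sketch. Here is a minimal, purely illustrative simulation (not from the article; the pool size, dividend rate, and flat mortality probability are all invented assumptions): a fixed dividend gets split among survivors, so each survivor's check grows as the group shrinks.]

```python
import random

def simulate_tontine(members=100, buy_in=1000.0, rate=0.05,
                     years=40, death_prob=0.04, seed=1):
    """Toy tontine: the pool's yearly dividend is split among survivors;
    a dead member's share is redistributed, never paid out to heirs."""
    random.seed(seed)
    pool = members * buy_in            # principal collected up front
    alive = members
    history = []                       # (year, survivors, payout per survivor)
    for year in range(1, years + 1):
        # illustrative mortality: each survivor dies this year with
        # a flat probability (real tontines used actuarial tables)
        deaths = sum(random.random() < death_prob for _ in range(alive))
        alive -= deaths
        if alive == 0:
            break
        history.append((year, alive, pool * rate / alive))
    return history

for year, alive, check in simulate_tontine()[::10]:
    print(f"year {year:2d}: {alive:3d} survivors, ${check:8,.2f} each")
```

[Under these toy assumptions the yearly check grows several-fold over the life of the pool, which is exactly the morbid incentive the article describes: every death raises every survivor's payout.]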

Now, a growing chorus of economists and lawyers is wondering if the world wasn’t too hasty in turning its back on tontines. These financial arrangements, they say, have aspects that make a lot of sense despite their history of disrepute.

Some academics even argue that with a few new upgrades, a modern tontine would be particularly suited to soothing the frustrations of 21st-century retirement. It could help people properly finance their final years of life, a time that is often wracked with terribly irrational choices. Tontines could even be a cheaper, less risky way for companies to resurrect the pension.

“This might be the iPhone of retirement products,” says Moshe Milevsky, an associate professor of finance at York University in Toronto who has become one of the tontine’s most outspoken boosters.

by Jeff Guo, WP |  Read more:
Image: bigstockphoto

Sunday, September 27, 2015

Stop Googling. Let’s Talk.

College students tell me they know how to look someone in the eye and type on their phones at the same time, their split attention undetected. They say it’s a skill they mastered in middle school when they wanted to text in class without getting caught. Now they use it when they want to be both with their friends and, as some put it, “elsewhere.”

These days, we feel less of a need to hide the fact that we are dividing our attention. In a 2015 study by the Pew Research Center, 89 percent of cellphone owners said they had used their phones during the last social gathering they attended. But they weren’t happy about it; 82 percent of adults felt that the way they used their phones in social settings hurt the conversation.

I’ve been studying the psychology of online connectivity for more than 30 years. For the past five, I’ve had a special focus: What has happened to face-to-face conversation in a world where so many people say they would rather text than talk? I’ve looked at families, friendships and romance. I’ve studied schools, universities and workplaces. When college students explain to me how dividing their attention plays out in the dining hall, some refer to a “rule of three.” In a conversation among five or six people at dinner, you have to check that three people are paying attention — heads up — before you give yourself permission to look down at your phone. So conversation proceeds, but with different people having their heads up at different times. The effect is what you would expect: Conversation is kept relatively light, on topics where people feel they can drop in and out.

Young people spoke to me enthusiastically about the good things that flow from a life lived by the rule of three, which you can follow not only during meals but all the time. First of all, there is the magic of the always available elsewhere. You can put your attention wherever you want it to be. You can always be heard. You never have to be bored. When you sense that a lull in the conversation is coming, you can shift your attention from the people in the room to the world you can find on your phone. But the students also described a sense of loss.

One 15-year-old I interviewed at a summer camp talked about her reaction when she went out to dinner with her father and he took out his phone to add “facts” to their conversation. “Daddy,” she said, “stop Googling. I want to talk to you.” A 15-year-old boy told me that someday he wanted to raise a family, not the way his parents are raising him (with phones out during meals and in the park and during his school sports events) but the way his parents think they are raising him — with no phones at meals and plentiful family conversation. One college junior tried to capture what is wrong about life in his generation. “Our texts are fine,” he said. “It’s what texting does to our conversations when we are together that’s the problem.”

It’s a powerful insight. Studies of conversation both in the laboratory and in natural settings show that when two people are talking, the mere presence of a phone on a table between them or in the periphery of their vision changes both what they talk about and the degree of connection they feel. People keep the conversation on topics where they won’t mind being interrupted. They don’t feel as invested in each other. Even a silent phone disconnects us.

In 2010, a team at the University of Michigan led by the psychologist Sara Konrath put together the findings of 72 studies that were conducted over a 30-year period. They found a 40 percent decline in empathy among college students, with most of the decline taking place after 2000.

Across generations, technology is implicated in this assault on empathy. We’ve gotten used to being connected all the time, but we have found ways around conversation — at least from conversation that is open-ended and spontaneous, in which we play with ideas and allow ourselves to be fully present and vulnerable. But it is in this type of conversation — where we learn to make eye contact, to become aware of another person’s posture and tone, to comfort one another and respectfully challenge one another — that empathy and intimacy flourish. In these conversations, we learn who we are.

Of course, we can find empathic conversations today, but the trend line is clear. It’s not only that we turn away from talking face to face to chat online. It’s that we don’t allow these conversations to happen in the first place because we keep our phones in the landscape.

In our hearts, we know this, and now research is catching up with our intuitions. We face a significant choice. It is not about giving up our phones but about using them with greater intention. Conversation is there for us to reclaim. For the failing connections of our digital world, it is the talking cure.

by Sherry Turkle, NY Times |  Read more:
Image: Yann Kebbi

Samsung-Oculus Consumer Virtual Reality Headset to Cost $99

[ed. My prediction for Christmas gift of the year (along with drones, drones and more drones). Also, a plea: could we please stop with the Christmas marketing and decorations in September?] 

The Invisible Labor of Fashion Blogging

Earlier this month, the biannual circus that is New York Fashion Week saw non-stop coverage on social media via Instagram, Twitter, and Snapchat, with the scene repeating itself in London, Milan, and Paris through early October. Though coverage of designers, models, A-listers, and celebrities was in no short supply in mainstream and industry publications, there was another formidable yet familiar force on the scene: fashion bloggers.

It’s been nearly a decade since these independent voices “took over the tents,” as Women’s Wear Daily proclaimed of the fashion blogosphere’s first wave in the mid-aughts. While industry veterans initially saw the inclusion of bloggers as an invasion, the latter’s presence at runway shows and designer fetes no longer draws the ire it once did. Instead, both designers and fashion editors recognize the power of bloggers. The resurgence of Birkenstocks, the frenzy over fringe, and the ubiquity of off-the-shoulder styles are among the recent trends that have been brought into the mainstream by a collective of online tastemakers, whose scrappy origins grow more and more distant every year. With annual incomes of top-ranking bloggers climbing into the seven-figure range, it’s not surprising that they’re frequently hailed as savvy entrepreneurs.

In the popular imagination, blogging has become a viable career path with legions of aspirants. As many other creative workers struggle to find stable and fulfilling careers, bloggers and others with digital clout seem to have shaped their careers with ease. The impeccably curated online presences of these young women—fashion blogging is heavily skewed female—seem to offer hope and a sense of control in an economy marked by persistent instability and precarious employment conditions.

But this idealized profession is less glamorous than it first appears. In a new study to be published this fall in the journal Social Media + Society, we examine the gap between the rhetoric and reality of fashion blogging. (Our analysis of 760 Instagram images by 38 top-ranked female professionals is part of a larger, multi-year project on the subject.) Pro-bloggers, we learned, must continually reconcile a series of competing demands: They have to appear authentic but also remain on brand, stay creative while tracking metrics, and satisfy both their readers and the retail brands that bankroll them. Many work up to 100 hours a week, and the flood of new bloggers means companies increasingly expect to not have to pay for partnerships. Meanwhile, the nature of the job requires obscuring the hard work and discipline that goes into crafting the perfect persona online.

by Brooke Erin Duffy and Emily Hund, The Atlantic |  Read more:
Image: John Taggart / Reuters

Saturday, September 26, 2015

A Facelift for Shakespeare

The Oregon Shakespeare Festival will announce next week that it has commissioned translations of all 39 of the Bard’s plays into modern English, with the idea of having them ready to perform in three years. Yes, translations—because Shakespeare’s English is so far removed from the English of 2015 that it often interferes with our own comprehension.

Most educated people are uncomfortable admitting that Shakespeare’s language often feels more medicinal than enlightening. We have been told since childhood that Shakespeare’s words are “elevated” and that our job is to reach up to them, or that his language is “poetic,” or that it takes British actors to get his meaning across.

But none of these rationalizations holds up. Much of Shakespeare goes over our heads because, even though we recognize the words, their meaning often has changed significantly over the past four centuries.

In “Hamlet,” when Polonius famously advises Laertes to “neither a borrower nor a lender be,” much of what he says before that point reaches our modern ears in a fragmentary state at best. In the lines, “These few precepts in thy memory / Look thou character,” look means “make sure that,” and character is a verb, meaning “to write.” Polonius is telling Laertes, in short, “Note these things well.”

He goes on to say: “Take each man’s censure, but reserve thy judgment,” which seems to mean that you should let other people criticize you but refrain from judging them—strange advice. But by “take censure” Shakespeare meant “evaluate,” so that Polonius is really saying “assess” other men but don’t jump to conclusions about them.

We can piece these meanings together, of course, by reading the play and consulting stacks of footnotes. But Shakespeare didn’t intend for us to do that. He wrote plays for performance. We’re supposed to be able to hear and understand what’s spoken on the stage, in real time.

That’s hard when we run up against a passage like this one from “King Lear,” when Edmund is dismissing those who look down on him for his low origins:

Why “bastard”? Wherefore “base”?
When my dimensions are as well compact,
My mind as generous, and my shape as true
As honest madam’s issue?


Isn’t it odd for someone to present being “well compact” as a selling point? But for Shakespeare, compact meant “constructed.” And why would Edmund defend himself against the charge of illegitimacy by noting his generosity? Because in Shakespeare’s day, generous could mean “noble.” Nor did madam then have the shady connotation that it does today.

Understanding generous to mean “noble” is not a matter of appreciating elevated language: We cannot reach up to a meaning that is no longer available to us. Nor is there anything poetic in knowing that character was once a verb meaning “to write”: In 2015, that usage is simply opaque, and being British doesn’t help matters.

The idea of translating Shakespeare into modern English has elicited predictable resistance in the past. To prove that the centuries were not so formidable a divide, the actor and author Ben Crystal has documented that only about 10% of the words that Shakespeare uses are incomprehensible in modern English. But that argument is easy to turn on its head. When every 10th word makes no sense—it’s no accident that the word decimate started as meaning “to reduce by a 10th” and later came to mean “to destroy”—a playgoer’s experience is vastly diluted.

It is true that translated Shakespeare is no longer Shakespeare in the strictest sense. But are we satisfied with Shakespeare’s being genuinely meaningful only to an elite few unless edited to death or carefully excerpted, with most of the rest of us genuflecting in the name of “culture” and keeping our confusion to ourselves? Should we have to pore laboriously over Shakespeare on the page before seeing his work performed?

by John H. McWhorter, WSJ |  Read more:
Image: Pep Montserrat

Adidas Tubular X Knit
via:
[ed. Nice.]

via: here and here

Friday, September 25, 2015

In Memoriam: Yogi Berra

As a boy of 8 and 9 and 10, growing up in the Bronx, I was a big New York Yankees fan. When you grow up in the Bronx, that’s really all there is to brag about. A zoo and the Yankees.

Nearly every game aired on channel 11 WPIX, and I watched as many as I could, which was nearly all of them.

The Yankees are by far the most successful team in the history of American sports. Not even close. They’re probably the most successful team in the world. For this reason, rooting for the Yankees has often been equated with rooting for a large, wealthy corporation like IBM or GM. I’ve always thought it’s a very poor analogy.

Rooting for the Yankees is actually like rooting for the United States. Each in their own way, the Yankees and the United States are the 300 lb. gorilla, that most powerful of entities winning far more than anyone else. Their wealth creates many advantages. Supporters expect them to win, and they usually do. Opponents absolutely revel in their defeats.

All that success means you will be adored by some non-natives who are tired of losing and want to bask in your glory, even if it must be from afar. But mostly you are hated. Anywhere you go in America, some people love the Yankees and many more hate them. Just like the United States is either loved or hated everywhere else in the world.

Who hates IBM?

And just as U.S. history, so stuffed with victory, is chock full of famous figures, so too is Yankee lore replete with famous men in pinstripes.

There are 53 former Yankee players, managers, and executives in the Baseball Hall of Fame, just over 1/6 of the Hall’s total membership.

Can I name them all? Of course not. That’s like naming all the presidents. I have a Ph.D. in history and I still get bogged down once I reach the 1840s (who comes after Van Buren?), and can’t resume a steady line until I re-emerge with Buchanan in 1856; you know, the guy before Lincoln.

For the average person, there are the biggies: Washington, Lincoln, a couple of Roosevelts and so forth.

For Yankee fans naming their club’s Hall of Famers is actually tougher than naming presidents. There have only been 43 presidents. So most fans know a bunch but not all of them, and then everyone knows the biggies, the Washingtons and Lincolns of baseball.

You don’t have to be a Yankees fan. Hell, you don’t even have to know anything about baseball. You’ve all heard of these guys because they transcend baseball. They’re part of American culture.

Babe Ruth, Lou Gehrig, Joe DiMaggio, Mickey Mantle, and Yogi Berra. Those five.

Ruth is probably the single greatest baseball player of all time and still the most famous American athlete who ever lived; we’ll see how famous Michael Jordan is nearly 70 years after his death. Gehrig’s got a disease named after him and hardly anyone knows its actual name (amyotrophic lateral sclerosis). DiMaggio became a memorable lyric in a seminal Simon and Garfunkel song thirty years after he was the topic of his own hit song. The Mick’s boyish good looks and runaway success made him a poster boy of mid-century American baby boomer aspirations. And Yogi had a cartoon bear named after him.

Yogi also said all that stuff. Things you’ve heard that you may or may not have realized he said. Or stuff you thought he said that he may not have said.

Best known is “It ain’t over til it’s over,” which is among the most famous of American axioms, and which he actually said, while managing the New York Mets in 1973. But there are a lot of others.
  • When you come to a fork in the road, take it (giving directions to his home).
  • You can observe a lot by just watching.
  • No one goes there nowadays, it’s too crowded (speaking of the Copa Cabana nightclub).
  • Baseball is ninety percent mental and the other half is physical.
  • A nickel ain’t worth a dime anymore.
  • Always go to other people’s funerals, otherwise they won’t come to yours.
  • We made too many wrong mistakes.
Or, as only Yogi could put it, speaking to the phenomenon of misattribution: I really didn’t say everything I said.

But he really did say that.

by The Public Professor |  Read more:
Image: uncredited

Justin Angelos
via:

Thursday, September 24, 2015

Sex: The Kotaku Review

If you’re already a fan of Sex—and there are plenty of you out there—you probably don’t need this review. But if you find yourself on the fence about whether to try this much-heralded, much-argued-over activity, pull up a chair! We’ve got a lot to discuss.

Like many other extended franchise juggernauts, Sex has been around in some form or another for a long time. Originally released as an open-source application and carefully iterated upon over the years, it’s been through its fair share of reimaginings, reboots, and back-to-basics redesigns. Today’s Sex is the most technically advanced version yet, but as we all know, it takes more than eye-popping visuals and high-tech peripherals to make for a truly meaningful experience.

Sex is best understood as a freeform co-op experience where partners work together to achieve one or more user-defined goals. It’s most often played in groups of two, but sometimes more (or less). Broadly speaking, each match-up follows a similar structure–all players are helping one another to achieve a similar goal, and if they work well together, every player can “win.” Take a closer look, though, and you’ll see how creative Sex teams can be, combining inventive techniques with high-level mechanical mastery to achieve unusual but no less satisfying victories.

Aficionados will be pleased to hear that Sex’s visual presentation is as great as ever–even though it doesn’t seem to have progressed much as of late. Then again, why mess with something that’s already working so well? Today’s Sex features advanced graphical techniques like soft body physics and subsurface scattering; these were incredible when they were first introduced, and they stand the test of time. But with technological innovations coming faster than ever and innovative new VR technology on the horizon, it’ll be important for Sex to step up its technology in the coming years to keep pace.

As true gamers know, it’s gameplay that matters most. The mechanics undergirding Sex are deceptively simple–even if you’ve never played, you probably already understand the fundamentals. There’s some stroking, and sliding, and slapping, and smacking, and, well, you know. All of that. The beauty of Sex is that those basic actions can be combined in all sorts of interesting ways. Sex embraces what game designers call the property of “emergence,” i.e. the designed opportunity for varied combinations of simple components to create a complex end result.

Despite those strong fundamentals, Sex is not without its share of technical issues. Sex can, and often does, fall prey to many of the same kinds of bugs and glitches we’ve seen in other multiplayer games: synchronization errors, dropped connections, poor response times, and the like. Some people seem to wait around forever in the matchmaking lobby, never getting to the actual game.

by Matthew S. Burns, Kotaku | Read more:
Image: Shutterstock

The Now Generation

Wednesday, September 23, 2015


[ed. It is.]
via:

Disconfirming Books

Yesterday The New York Times had a fascinating piece about how ebook sales, contra Aggregation Theory, are actually declining even as publishers and book stores are thriving on the back of print:
Five years ago, the book world was seized by collective panic over the uncertain future of print. As readers migrated to new digital devices, ebook sales soared, up 1,260 percent between 2008 and 2010, alarming booksellers that watched consumers use their stores to find titles they would later buy online. Print sales dwindled, bookstores struggled to stay open, and publishers and authors feared that cheaper ebooks would cannibalize their business… 
But the digital apocalypse never arrived, or at least not on schedule. While analysts once predicted that ebooks would overtake print by 2015, digital sales have instead slowed sharply. Now, there are signs that some ebook adopters are returning to print, or becoming hybrid readers, who juggle devices and paper. Ebook sales fell by 10 percent in the first five months of this year, according to the Association of American Publishers, which collects data from nearly 1,200 publishers. Digital books accounted last year for around 20 percent of the market, roughly the same as they did a few years ago. 
Ebooks’ declining popularity may signal that publishing, while not immune to technological upheaval, will weather the tidal wave of digital technology better than other forms of media, like music and television.
First off, I’m not necessarily surprised that publishers haven’t all gone bankrupt en masse. Much like the music labels, publishers have always provided more than distribution, including funding (using a venture capital-like process where one hit pays for a bunch of losers), promotion (discovery is the biggest challenge in a world of abundance, and breaking through is expensive), and expertise (someone needs to do the editing, layout, cover design, etc.). And, as long as there is any print business at all, distribution still matters to a degree given the economics of writing a book: very high fixed costs with minimal marginal costs, which dictates as wide a reach as possible.

Still, none of this explains why ebooks have been stopped in their tracks, and that’s where this discussion gets interesting: not only is it worth thinking about the ebook answer specifically, but also are there broader takeaways that explain what the theory got wrong, and how it can be made better?

EBOOK LESSONS TO BE LEARNED

I think there are three things to be learned from the plateauing in ebook sales:

Price: The first thing to consider about ebooks — and the New York Times’ article touches on this — is that they’re not any cheaper than printed books; indeed, in many cases they are more expensive. The Wall Street Journal wrote earlier this month:
When the world’s largest publishers struck e-book distribution deals with Amazon.com Inc. over the past several months, they seemed to get what they wanted: the right to set the prices of their titles and avoid the steep discounts the online retail giant often applies. But in the early going, that strategy doesn’t appear to be paying off. Three big publishers that signed new pacts with Amazon—Lagardere SCA’s Hachette Book Group, News Corp’s HarperCollins Publishers and CBS Corp.’s Simon & Schuster—reported declining e-book revenue in their latest reporting periods.
Pricing is certainly an art — go too low and you leave money on the table, go too high and you lose too many customers — and there is obviously a case to be made (and Amazon has made it) that in the case of books there is significant elasticity (i.e. price has a significant impact on purchase decisions). Then again, while e-book sales have fallen, they’ve stayed the same percentage of overall book sales — about 20% — which potentially means that the price change didn’t really have an effect at all (more on this in a bit).
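
To see why elasticity matters here, consider a toy constant-elasticity demand model (a sketch with invented numbers, not Amazon’s or the publishers’ data; the -1.6 elasticity is an assumption chosen to illustrate the elastic case):

```python
def revenue(price, base_price=9.99, base_qty=100_000, elasticity=-1.6):
    """Constant-elasticity demand curve: quantity sold scales with
    (price / base_price) ** elasticity. An elasticity below -1 means
    demand is 'elastic': cutting the price increases total revenue."""
    qty = base_qty * (price / base_price) ** elasticity
    return price * qty

for p in (14.99, 9.99, 4.99):
    print(f"price ${p:>5}: revenue ≈ ${revenue(p):12,.0f}")
```

Under these toy assumptions revenue rises as the price falls, which is the case Amazon has made for $9.99 ebooks; with an elasticity closer to zero the ordering flips, which is effectively the bet publishers made when they took back pricing and raised it.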

What is more interesting about the pricing issue, though, is that the publishers have removed what is traditionally one of digital’s advantages: that it is cheaper. That means the chief advantage of ebooks is that they are more convenient to acquire and store, and that’s about it. And, by extension, that raises the question about just how much lower prices play a role in the success of other aggregators.

User Experience: Note what is lacking when it comes to ebook’s advantages: the user experience. True, some people certainly prefer an e-reader (or their phone or tablet), but a physical book has its advantages as well: relative indestructibility, and little regret if it is destroyed or lost; tangibility, both in regards to feel and in the ability to notate; the ability to share or borrow; and, of course, the fact a book is an escape from the screens we look at nearly constantly. At the very best the user experience comparison (excluding the convenience factor) is a push; I’d argue it tilts towards physical books.

This is in marked contrast to many of the other industries mentioned above. When it comes to media, choosing a show on demand or an individual song is vastly preferable to a programming guide or a CD. Similarly, Uber is better than a taxi in nearly every way, particularly when it comes to payments; Airbnb offers far more selection and rooms that simply aren’t possible through hotel chains; Amazon has superior selection and superior prices, with delivery to your doorstep to boot. It’s arguable the user experience is undervalued in my Aggregation Theory analysis.

Modularization: Notice, though, that there is something in common to all of my user experience examples: what matters is not only that the aggregators are digital, but also that they broke up the incumbent offering to its atomic unit. Netflix offered shows, not channels; first iTunes then Spotify offered songs, not albums; Uber offered the ability to chart individual cars on-demand; Airbnb offered rooms, not hotels; Amazon offers every product, not just the ones that will fit in a bricks-and-mortar retail store.

Ebooks, on the other hand, well, they’re pretty much the same thing as physical books, except they need an expensive device to read them on, while books have their own built-in screen that is both disposable and of a superior resolution (no back-lighting though).

by Ben Thompson, Stratechery |  Read more:
Image: Stratechery

When Dinner Proves Divisive: One Strategy, Many Dishes

[ed. See also: Best Weeknight Recipes]

Back when I cooked only to please myself and one or two other consenting adults, choosing recipes was a breeze. Nothing was off limits. Dishes with olives, stinky cheeses, bitter greens and mushrooms — sometimes all of the above — were on regular rotation. Then I began cooking for kids (picky, omnivorous and otherwise). With them came their nut-allergic friends, vegan guitar teachers and chile-fearing in-laws. Forced to adapt my NC-17 cooking style to a G-rated audience, I paged through cookbooks in search of “crowd pleasers” that proved elusive.

Eventually, I realized that the quest for a perfect recipe that pleases everyone at the table, including oneself, was fruitless.

But in the process, a workaround solution emerged: recipes that could be configured to produce many different dishes at one meal. Like Transformers or fantasy football teams, these meals are both modular and complete, constructed from parts that can be added or subtracted from at whim.

Suddenly, my weeknight repertoire increased exponentially. It’s easier on the cook when the week assumes a familiar pattern — pasta one night, a main-course salad another night, beans on a third — but to prevent boredom, the dishes themselves needn’t be exactly the same. (Unless, of course, the culinary conservative in your household demands otherwise.)

Just like taco night or baked-potato night, the meal starts with a base element: pasta, beans, fluffy greens. After that, it’s about piling on, or politely passing along, the garnishes.

The definition of a garnish may need some stretching: This is not a shy sprinkling of parsley or a scattering of sesame seeds. The garnish that makes a meal must be full-throated and filling. Half of a ripe avocado is a garnish. Likewise, a soft-yolk egg (boiled, poached or fried). Bacon lardons, shredded chicken and diced steak. Crushed chiles and leftover roasted vegetables. With enough garnishes, even the plainest of plain foods — pasta with butter and cheese — can balloon into a lively meal.

by Julia Moskin, NY Times |  Read more:
Image: Melina Hammer

Tuesday, September 22, 2015


RLoN Wang, Frame of mind
via:

Death to the Internet! Long Live the Internet!

Net neutrality, cultural decay, the corporate web, classism, & the decline of western civilization — all in one convenient essay!

Even now it’s a struggle to clearly remember that ecstatic time of positive internet esprit de corps before money and narcissism utterly dominated the culture. Those ancient ‘90s to early oughts before endlessly aggressive advertising, encyclopedic terms of service, incessant tracking, the constant need to register everywhere, subversive clickbait, the legions of trolls, threats of doxxing, careers ended by a single tweet, and all those untiring spam bots which attempt to plague every digital inch of it.

Difficult to explain to anyone under twenty-five who did not directly witness the foundational times. Or anyone over twenty-five who did not participate. Or to anyone right now who uses only Facebook and Amazon. That lost age has become the Old West of the internet: a brief memory before once verdant lands were dominated and overrun by exploitative business interests and ignorant bumbling settlers. You can’t go back, and there’s no museum for an experience. That early culture was ineffable and fleeting. Not unlike, say: the concept of lifetime job security, which no longer even seems plausible.

Now, of course, plenty of happy and creative people still use the internet (at least, to like, buy an appliance or a book or something) but they don’t make up most of internet culture; that majority of online participation which sets the social standards, creates the original content, and is now broadly, inescapably corralled by social media. Those who spend more than 20 hours a week actively participating online (like me) are forced into the corporate tide, or relegated to the sluggish unknown hinterlands. (...)

Need we wonder why the book “Nineteen Eighty-Four” remains so relevant? Even thirty years after Steve Jobs commemorated the futuristic date by ironically pretending to destroy the entrenched corporate power structure. The same man who turned out to be one of the most proprietary-minded technologists ever to influence popular computing culture. The person who cemented the sale of style over utility, which continues to unendingly trick people. Selling the trappings of refined taste instead of core pragmatism. Like how the classic campaign to “Think Different” fetishized intellectual and artistic rebellion in order to ironically sell a mass-market consumer product. And it worked amazingly. People have been strongly influenced to desire a unique personal experience and an individualized version of success instead of a shared communal growth. So in this fragmented and increasingly de-localized culture, everyone becomes the protagonist of their own little narcissistic adventure instead of a powerful collective assisting each other for the greater good. And because not everyone can be that one-in-a-billion genius, much existential disappointment has been ingrained once it was set as the highest goal.

This is advantageous to business interests because unsatisfied people are more susceptible to the sale of solutions to combat unhappiness. And this emotional and cultural development also makes it easier to dehumanize others, to be jealous of their successes, and feel left out when not receiving high accolades. Creating the much-lamented vicious cycle of kindergarten graduation ceremonies and participation trophies which has wrought the most egotistical generation ever recorded. It also has an oligarchic benefit of justifying power held in the small circles of the moneyed class, because success, even if born into, is often assumed to be deserved.

So it’s no coincidence that wealthy special interests have gained massive control over democracy by incentivizing and preaching the supremacy of individual gains over communal interests. Unlike a more simplistic fascism, this grants minority power to the upper class by motivating the populace to work hard towards individual goals and individual distractions without requiring the classic top-down crushing social conformity which is more obvious and easier to fight. Instead, the insidious dreams of grand individual success, in spite of all contrary indications, keep everyone’s broader rewards lowered. It’s like a lottery for human desires: many pay in and get essentially nothing while a tiny few win it all so as to demonstrate it is supposedly possible. Justified elite power is the cultural root of corruption, as Thomas Jefferson ironically understood, and must be fought with repeated revolution.

We all recognize a nebulous natural cynicism these days found not only in the post-apocalyptic and zombie fictions so symbolically appealing to our collective unconscious, but also in the simple facts of a historically deadlocked legislature, a rampantly scare-mongering media, the rise once again of an excessively wealthy upper class, and the corruption of debt-based higher learning. That last being perhaps the most intellectually disheartening, as the ivory tower repeatedly demonstrates its moral bankruptcy by a reliance on horrific levels of tuition, exploitative and wasteful sports programs, shoddy oversight of publishing, a general lack of moral center, and a scattered vision for the future (pigeonholed rather correctly by conservatives as often out of touch). Much could perhaps be excused by the inevitable corruption of institutionalization, but where is the forethought of previous generations? Why must we rely on impulsive social media and a polarized profit-oriented mass media for our appraisals of the future?

If Obama’s unpredictable election proved anything, it’s that positive ideological movements are so frightening to the moneyed establishment that they’ll foster complete obstruction to thwart even the simple belief that hope and change are actually possible. Generating cynicism aids complacency, because it’s difficult for a person dealing with all their own daily struggles to constantly study the complex system and renew the idealism required to force political change, especially during periods of nominally acceptable economic stability. Revolutions are motivated by hunger and heavy oppression, generally years after the slow and determined rise of a stratified class system (a pattern which has plagued us since the dawn of civilization).

For thirty years now, capitalism’s trickle-down variant has been systematically attempting to recreate an intransigent system of wealth and privilege. Conservative propaganda has assured us that if the rich succeed, everyone benefits. But how long must this ludicrous delusion be perpetuated? Is not the entire history of civil humanity a testament to the popular misery of allowing an upper-class minority to rule? This should be especially poignant in a country which was designed to break hereditary dominance and unrepresentative power. Yet here we are again, watching civilization repeat its famous pattern, locking the populace into hard work and distraction without sharing in the full rewards. America chugs along with its bread and circuses, like a late-season Happy Days episode, where the original magic is gone but the characters continue acting out a hollow version of the thing we used to love and cherish. So go sitcoms… so goes the world wide web… so goes civilization…

The rise of an entirely corporate internet is just one more idealistic casualty of allowing the amoral dollar to inform every aspect of our lives. Market efficiencies, so touted by the right, can generate competition between otherwise possible monopolies, but function best only in fields of limited and uncoordinated resources. They are not suited to everything, and especially not to something as nearly immaterial and gigantic as cyberspace, where supply and demand do not function normally; a place where capitalism has often struggled to find what it can sell. Where demand has to be generated artificially with subtle and disguised viral marketing to trick and deceive us. The newest things you didn’t know you needed but all the cool kids have. Since wealth expands to dominate all emerging cultural forms, it works to control even the nearly limitless virtual environments formed of patterned energy and communal human consciousness.

In the same manner that liberty gets subsumed for security, creativity often dies upon the altar of sales. Advertising’s goal is convincing and deceiving, not compassion. It is the art of propaganda and should constantly be doubted. Excessive needs, worries, and calamities are fostered so that new cures and products can be sold. Just as rulers create fear to limit freedom, so corporations must generate the need for increased consumption.

Cultivating social anxiety can make warrantless wiretapping, indefinite detention, terrorist watchlists, illegal foreign prisons, preemptive perpetual war, pushbutton murder by drone, and being bathed in x-rays at every airport seem incrementally acceptable. If you pile on the impediments slowly, and each seems necessary at the time, they morph into those inevitable and accepted hassles of modern life. Such as how general anxiety generates the sale of status items, snake-oil cures, distracting entertainments, and self-help regimes — it’s the creep of supposed necessity. Just like websites becoming overrun with advertisements, clickbait, registration, tracking, profiling, and endless general noise. In return for which we get increasingly bland and controlled services. With all these small losses, the cultural whole is diminished.

by Nicholas Kerkhoff, Medium | Read more:
Image: uncredited

The Dimming of the Light


With its revolutionary heat and rational cool, French thought once dazzled the world. Where did it all go wrong?

There are many things we have come to regard as quintessentially French: Coco Chanel’s little black dress, the love of fine wines and gastronomy, the paintings of Auguste Renoir, the smell of burnt rubber in the Paris Métro. Equally distinctive is the French mode and style of thinking, which the Irish political philosopher Edmund Burke described in 1790 as ‘the conquering empire of light and reason’. He meant this as a criticism of the French Revolution, but this expression would undoubtedly have been worn as a badge of honour by most French thinkers from the Enlightenment onwards.

Indeed, the notion that rationality is the defining quality of humankind was first celebrated by the 17th-century thinker René Descartes, the father of modern French philosophy. His sceptical method of reasoning led him to conclude that the only certainty was the existence of his own mind: hence his ‘cogito ergo sum’ (‘I think, therefore I am’). This French rationalism was also expressed in a fondness for abstract notions and a preference for deductive reasoning, which starts with a general claim or thesis and eventually works its way towards a specific conclusion – thus the consistent French penchant for grand theories. As the essayist Emile Montégut put it in 1858: ‘There is no people among whom abstract ideas have played such a great role, and whose history is rife with such formidable philosophical tendencies.’

The French way of thinking is a matter of substance, but also style. This is most notably reflected in the emphasis on rhetorical elegance and analytical lucidity, often claimed to stem from the very properties of the French language: ‘What is not clear,’ affirmed the writer Antoine de Rivarol in 1784, somewhat ambitiously, ‘is not French.’ Typically French, too, is a questioning and adversarial tendency, also arising from Descartes’ sceptical method. The historian Jules Michelet summed up this French trait in the mid-19th century in the following way: ‘We gossip, we quarrel, we expend our energy in words; we use strong language, and fly into great rages over the smallest of subjects.’ A British Army manual issued before the Normandy landings in 1944 sounded this warning about the cultural habits of the natives: ‘By and large, Frenchmen enjoy intellectual argument more than we do. You will often think that two Frenchmen are having a violent quarrel when they are simply arguing about some abstract point.’

Yet even this disputatiousness comes in a very tidy form: the habit of dividing issues into two. It is not fortuitous that the division of political space between Left and Right is a French invention, nor that the distinction between presence and absence lies at the heart of Jacques Derrida’s philosophy of deconstruction. French public debate has been framed around enduring oppositions such as good and evil, opening and closure, unity and diversity, civilisation and barbarity, progress and decadence, and secularism and religion.

Underlying this passion for ideas is a belief in the singularity of France’s mission. This is a feature of all exceptionalist nations, but it is rendered here in a particular trope: that France has a duty to think not just for herself, but for the whole world. In the lofty words of the author Jean d’Ormesson, writing in the magazine Le Point in 2011: ‘There is at the heart of Frenchness something which transcends it. France is not only a matter of contradiction and diversity. She also constantly looks over her shoulder, towards others, and towards the world which surrounds her. More than any nation, France is haunted by a yearning towards universality.’

This specification of a distinct French way of thinking is not rooted in a claim about Gallic ‘national character’. These ideas are not a genetic inheritance, but rather the product of specific social and political factors. The Enlightenment, for example, was a cultural phenomenon which spread rationalist ideas across Europe and the Americas. But in France, from the mid-18th century, this intellectual movement produced a particular type of philosophical radicalism, which was articulated by a remarkable group of thinkers, the philosophes. Thanks to the influence of the likes of Voltaire, Diderot and Rousseau, the French version of rationalism took on a particularly anti-clerical, egalitarian and transformative quality. These subversive precepts also circulated through another French cultural innovation, the salon: this private cultural gathering flourished in high society, contributing to the dissemination of philosophical and artistic ideas among French elites, and the empowerment of women.

This intellectual effervescence challenged the established order of the ancien régime during the second half of the 18th century. It also gave a particularly radical edge to the French Revolution, compared, notably, with its American counterpart. Thus, 1789 was not only a landmark in French thought, but the culmination of the Enlightenment’s philosophical radicalism: it gave rise to a new republican political culture, and enduringly associated the very idea of Frenchness with novelty and resistance to oppression. It also crystallised an entirely original way of thinking about the public sphere, centred around general principles such as the ‘Declaration of the Rights of Man’, the civic conception of the nation (resting on shared values as opposed to blood ties), the ideals of liberty, equality and fraternity, and the notions of the general interest and popular sovereignty.

One might object that, despite this common and lasting revolutionary heritage, the French have remained too diverse and individualistic to be characterised in terms of a general mind-set. Yet there are two decisive reasons why it is possible – and indeed necessary – to speak of a collective French way of thinking. Firstly, since the Enlightenment, France has granted a privileged role to thinkers, recognising them as moral and spiritual guides to society – a phenomenon reflected in the very notion of the ‘intellectual’, which is a late-19th-century (French) invention. Public intellectuals exist elsewhere, of course, but in France they enjoy an unparalleled degree of visibility and social legitimacy.

Secondly, to an extent that is also unique in modern Western culture, France’s major cultural bodies – from the State to the great institutions of secondary and higher education, the major academies, the principal publishing houses, and the leading press organs – are all concentrated in Paris. This cultural centralisation extends to the school curriculum (all high-school students have to study philosophy up to the baccalauréat), and this explains how and why French ways of thought have exhibited such a striking degree of stylistic consistency.

by Sudhir Hazareesingh, Aeon | Read more:
Image: Jean-Paul Sartre and Simone de Beauvoir having lunch at the "La Coupole" Brasserie, December 1973. Photo by Guy Le Querrec/Magnum