Tuesday, June 27, 2017


Rachel Campbell
via:

Reading Thoreau at 200

One of the smaller ironies in my life has been teaching Henry David Thoreau at an Ivy League school for half a century. Asking young people to read Thoreau can make me feel like Victor Frankenstein, waiting for a bolt of lightning: look, it’s moving, it’s alive, it’s alive! Most students are indifferent—they memorize, regurgitate, and move serenely on, untouched. Those bound for Wall Street often yawn or snicker at his call to simplify, to refuse, to resist. Perhaps a third of them react with irritation, shading into hatred. How dare he question the point of property, the meaning of wealth? The smallest contingent, and the most gratifying, are those who wake to his message.

Late adolescence is a fine time to meet a work that jolts. These days, Ayn Rand’s stock is stratospheric, J. D. Salinger’s, once untouchable, in decline. WASPs of any gender continue to weep at A River Runs Through It, and first-generation collegians still thrill to Gatsby, even when I remind them that Jay is shot dead in his gaudy swimming pool. In truth, films move them far more; they talk about The Matrix the way my friends once discussed Hemingway or Kerouac. But Walden can still start a fight. The only other book that possesses this galvanizing quality is Moby-Dick.

Down the decades, more than a few students have told me that in bad times they return to Thoreau, hoping for comfort, or at least advice. After the electoral map bled red last fall, I went to him for counsel too, but found mostly controversy. In this bicentennial year of Thoreau’s birth, Walden, or Life in the Woods (1854), is still our most famous antebellum book, and in American history he is the figure who most speaks for nature. The cultural meme of the lone seeker in the woods has become Thoreau’s chief public legacy: regrettable for him, dangerous for us. (...)

Our times have never needed the shock of Thoreau more. We face a government eager to kill all measures of natural protection in the name of corporate profit. Elected officials openly bray that environmentalism “is the greatest threat to freedom.” On federal, state, and local levels, civil liberties and free speech are under severe attack. Thoreau is too; the barriers to reading him as a voice of resistance—or reading him at all—are multiplying swiftly.

First, he is becoming an unperson. From the 1920s to the early 2000s, Walden was required reading in hundreds of thousands of U.S. high school and college survey courses. Today, Thoreau is taught far less widely. The intricate prose of Walden is a tough read in the age of tweets, so much so that several “plain English” translations are now marketed. “Civil Disobedience” was a major target of McCarthyite suppression in the 1950s, and may be again.

Second, as F. Scott Fitzgerald said, in the end authors write for professors, and the scholarly fate of Thoreau is clouded. Until the postwar era, Thoreau studies were largely left to enthusiasts. Academic criticism now argues for many versions of Thoreau (manic-depressive, gay, straight, misogynist, Marxist, Catholic, Buddhist, faerie-fixated). But other aspects still await full study: the family man, the man of spirituality, the man of science—and the man who wrote the Journal.

Those who study his peers, such as Emerson, Melville, or Dickinson, routinely examine each author’s entire output. Thoreau scholars have yet to deal fully or consistently with the Journal, which runs to more than two million words (much of it still unpublished) and fills 47 manuscript volumes, or 7,000 pages. It is the great untold secret of American letters, and also the distorting lens of Thoreau studies.

I spent years reading manuscript pages of the Journal, watching Thoreau’s insights take form, day upon day, as unmediated prose experiments. Unlike Emerson’s volumes, arrayed in topical order, Thoreau’s Journal follows time. Some notations arise from his surveying jobs, hiking through fields and pausing to note discoveries: a blooming plant, a foraging bird, the look of tree-shadows on water. His eye and mind are relentless. Although the entries are in present tense and seem written currente calamo, offhandedly, with the pen running on, in fact he worked from field notes, usually the next day, turning ground-truth into literature. He finds a riverbank hollow of frost crystals, and replicates exactly how they look, at a distance and then closer, imagining how they formed. His interest is in the objects, but also in how a subject perceives them—the phenomenology of observation and learning. He finds a mushroom, Phallus impudicus, in the form of a penis: “Pray, what was Nature thinking of when she made this? She almost puts herself on a level with those who draw in privies.” His father’s pig escapes and leads its pursuers all over town, helpless before the animal’s cunning. He watches snowflakes land on his coat sleeve: “And they all sing, melting as they sing, of the mysteries of the number six; six, six, six.” None of these entries reached print; they celebrate instead the gift of writing.

Third, Thoreau’s literary genes have split and recombined in our culture, with disturbing results. Organic hipster? Off-the-grid prepper? His popular image has become both blurred and politicized. If Thoreau as American eco-hero peaked around the first Earth Day (1970), today he is derided by conservatives who detest his anti-business sentiments and by postmodern thinkers for whom nature is a suspect green blur. (I still recall one faculty meeting at which a tenured English professor dismissed DNA as all right, “if you believe in that sort of thing.”)

Thoreau has always had detractors, even among his friends. Emerson’s delicate, vicious smear job at his funeral, a masterly takedown in eulogy form that enraged family and friends, set the pattern for enemies like James Russell Lowell (though happily not Lowell’s goddaughter, Virginia Woolf). Our own period sensibilities can flinch when confronted with Thoreaus we did not expect—the efficient capitalist, improving graphite mixes for the family pencil works; the schoolmaster who caned nine pupils at random, then quit in a fury; the early Victorian who may have chosen chastity because his brother John never lived a full life. (Henry’s most explicit statement on the subject of sex, even in the Journal: “I fell in love with a shrub oak.”)

Yet lately I have noted a new wave of loathing. When witnesses to his life still abounded, the prime criticism of Thoreau was Not Genteel. Now, the tag is Massive Hypocrite. Reader comments on Goodreads and Amazon alone are a deluge of angry, misspelled assertions that Thoreau was a rich-boy slacker, a humorless, arrogant, lying elitist. In the trolling of Thoreau by the digital hive mind, the most durable myth is Cookies-and-Laundry: that Thoreau, claiming independence at Walden, brought his washing home to his mother, and enjoyed her cooking besides. Claims by Concord neighbors that he was a pie-stealing layabout appear as early as the 1880s; Emerson’s youngest son felt compelled to rebut them, calling his childhood friend wise, gentle, and lovable.

The most recent eruption is “Pond Scum,” a 2015 New Yorker piece of fractal wrongness by Kathryn Schulz, who paints Thoreau as cold, parochial, egotistical, incurious, misanthropic, illogical, naïve, and cruel—and misses the real story of Walden, his journey from alienation to insight. I have spent a lifetime with Thoreau. I neither love nor hate him, but I know him well. I tracked down his papers, lived in Concord, walked his trails, repeated his journeys, and read, twice, the full Journal. I knew we were in the realm of alternative facts when Schulz dismissed Thoreau as “a well-off Harvard-educated man without dependents.” For that misreading alone, Schulz stands as the Kellyanne Conway of Thoreau commentary. He was the first in his family to attend college, a minority admit (owing to regional bias against French names), working-class to the bone, and after John’s death, the one son, obliged to support his family’s two businesses, boarding house and pencil factory—inhaling graphite dust from the latter fatally weakened his lungs. He was graduated from Harvard, yes, but into a wrenching depression, the Panic of 1837, and during Walden stays, he washed his dishes, floors, and laundry with cold pond water.

Did he go home often? Of course, because his father needed help at the shop. Did he do laundry in town? We do not know, but as the only surviving son of aging boardinghouse-keepers, Thoreau was no stranger to the backbreaking, soul-killing round of 19th-century commercial domestic labor. He knew no other life until he made another one, at Walden.

Pushback on “Pond Scum” was swift and gratifying, and gifted critics such as Donovan Hohn, Jedediah Purdy, and Rebecca Solnit, who have written so well on Thoreau, reassure me that as his third century opens, intelligent readers will continue to find him. But the path to Walden is, increasingly, neglected and overgrown. I constantly meet undergraduates who have never hiked alone, held an after-school job, or lived off schedule. They don’t know the source of milk or the direction of north. They really don’t like to unplug. In seminars, they look up from Walden in cautious wonder: “Can you even say this?” Thoreau worries them; he smells of resistance and of virtue. He is powerfully, compulsively original. He will not settle.

by William Howarth, American Scholar |  Read more:
Image: Pablo Sanchez/ Flickr; Photo-illustration by David Herbick

Shaka

“Hang loose,” “Right on,” “Thank you,” “Things are great,” “Take it easy” – in Hawaii, the shaka sign expresses all those friendly messages and more. As kamaaina know, to make the shaka, you curl your three middle fingers while extending your thumb and baby finger. For emphasis, quickly turn your hand back and forth with your knuckles facing outward.

As the story goes, that ubiquitous gesture traces its origins back to the early 1900s when Hamana Kalili worked at Kahuku Sugar Mill. His job as a presser was to feed cane through the rollers to squeeze out its juice. One day, Kalili’s right hand got caught in the rollers, and his middle, index and ring fingers were crushed.

After the accident, the plantation owners gave Kalili a new job as the security officer for the train that used to run between Sunset Beach and Kaaawa. Part of his job was to prevent kids from jumping on the train and taking joyrides as it slowly approached and departed Kahuku Station.

If Kalili saw kolohe (mischievous) kids trying to get on the train, he would yell and wave his hands to stop them. Of course, that looked a bit strange since he had only two fingers on his right hand. The kids adopted that gesture; it became their signal to indicate Kalili was not around or not looking, and the coast was clear for them to jump on the train.

According to a March 31, 2002 Honolulu Star-Bulletin story, Kalili was the choir director at his ward (congregation) of the Church of Jesus Christ of Latter-day Saints (Mormon) in Laie. Even though his back was to the congregation, worshippers recognized him when he raised his hands to direct the choir because of his missing fingers.

Kalili also served as “king” of the church fundraiser – complete with a hukilau, luau and show – that was held annually for years until the 1970s. Photos show him greeting attendees with his distinctive wave.

The term “shaka” is not a Hawaiian word. It’s attributed to David “Lippy” Espinda, a used car pitchman who ended his TV commercials in the 1960s with the gesture and an enthusiastic “Shaka, brah!” In 1976, the shaka sign was a key element of Frank Fasi’s third campaign for mayor of Honolulu. He won that race and used the shaka icon for three more successful mayoral bids, serving six terms in all.

In Hawaii, everyone from keiki to kupuna uses the shaka to express friendship, gratitude, goodwill, encouragement and unity. A little wave of the hand spreads a lot of aloha.

by Cheryl Chee Tsutsumi, Hawaiian Airlines | Read more:
Image: uncredited

Against Murderism

I.

A set of questions, hopefully confusing:

Alice is a white stay-at-home mother who is moving to a new neighborhood. One of the neighborhoods in her city is mostly Middle Eastern immigrants; Alice has trouble understanding their accents, and when they socialize they talk about things like which kinds of hijab are in fashion right now. The other neighborhood is mostly white, and a lot of them are New Reformed Eastern Evangelical Episcopalian like Alice, and everyone on the block is obsessed with putting up really twee overdone Christmas decorations just like she is. She decides to move to the white neighborhood, which she thinks is a better cultural fit. Is Alice racist?

Bob is the mayor of Exampleburg, whose bus system has been losing a lot of money lately and will have to scale back its routes. He decides that the bus system should cut its least-used route. This turns out to be a bus route in a mostly-black neighborhood, which has only one-tenth the ridership of the other routes but costs just as much. Other bus routes, most of which go through equally poor mostly-white neighborhoods, are not affected. Is Bob racist?

Carol is a gay libertarian who is a two-issue voter: free markets and gay rights. She notices that immigrants from certain countries seem to be more socialist and more anti-gay than the average American native. She worries that they will become citizens and vote for socialist anti-gay policies. In order to prevent this, she supports a ban on immigration from Africa, Latin America, and the Middle East. Is Carol racist?

Dan is a progressive member of the ACLU and NAACP who has voted straight Democrat the last five elections. He is studying psychology, and encounters The Bell Curve and its theory that some of the difference in cognitive skills between races is genetic. After looking up various arguments, counterarguments, and the position of experts in the field, he decides that this is probably true. He avoids talking about this because he expects other people would misinterpret it and use it as a justification for racism; he thinks this would be completely unjustified since a difference of a few IQ points has no effect on anyone’s basic humanity. He remains active in the ACLU, the NAACP, and various anti-racist efforts in his community. Is Dan racist?

Eric is a restaurateur who is motivated entirely by profit. He moves to a very racist majority-white area where white people refuse to dine with black people. Since he wants to attract as many customers as possible, he sets up a NO BLACKS ALLOWED sign in front of his restaurant. Is Eric racist?

Fiona is an honest-to-goodness white separatist. She believes that racial groups are the natural unit of community, and that they would all be happiest set apart from each other. She doesn’t believe that any race is better than any other, just that they would all be happier if they were separate and able to do their own thing. She supports a partition plan that gives whites the US Midwest, Latinos the Southwest, and blacks the Southeast, leaving the Northeast and Northwest as multiracial enclaves for people who like that kind of thing. She would not use genocide to eliminate other races in these areas, but hopes that once the partition is set up races would migrate of their own accord. She works together with black separatist groups, believing that they share a common vision, and she hopes their countries will remain allies once they are separate. Is Fiona racist?

II.

As usual, the answer is that “racism” is a confusing word that serves as a mishmash of unlike concepts. Here are some of the definitions people use for racism:

1. Definition By Motives: An irrational feeling of hatred toward some race that causes someone to want to hurt or discriminate against them.

2. Definition By Belief: A belief that some race has negative qualities or is inferior, especially if this is innate/genetic.

3. Definition By Consequences: Anything whose consequence is harm to minorities or promotion of white supremacy, regardless of whether or not this is intentional.

Some thoughts:

Definition By Consequences Doesn’t Match Real-World Usage

I know that Definition By Consequences is the really sophisticated one, the one that scholars in the area are most likely to unite around. But I also think it’s uniquely bad at capturing the way anyone uses the word “racism” in real life. Let me give four examples.

First, by this definition, racism can never cause anything. People like to ask questions like “Did racism contribute to electing Donald Trump?” Under this definition, the question makes no sense. It’s barely even grammatical. “Did things whose consequence is harm to minorities whether or not such harm is intentional contribute to the election of Donald Trump?” Huh? If racism is just a description of what consequences something has, then it can’t be used as a causal explanation.

Second, by this definition, many racist things would be good. Suppose some tyrant wants to kill the ten million richest white people, then redistribute their things to black people. This would certainly challenge white supremacy and help minorities. So by this definition, resisting this tyrant would be racist. But obviously this tyrant is evil and resisting him is the right thing to do. So under this definition, good policies which deserve our support can nevertheless be racist. “This policy is racist” can no longer be a strong argument against a policy, even when it’s true.

Third, by this definition, it doesn’t make a lot of sense to say a particular person is racist. Racism is a property of actions, not of humans. While there are no doubt some broad patterns in people, the question “Is Bob racist?” sounds very odd in this framework, sort of like “Does Bob cause poverty?” No doubt Bob has done a few things which either help or hurt economic equality in some small way. And it’s possible that Bob is one of the rare people who organizes his life around crusading against poverty, or around crusading against attempts to end poverty. But overall the question will get you looked at funny. Meanwhile, questions like “Is Barack Obama racist?” should lead to a discussion of Obama’s policies and which races were helped or hurt by them; issues like Obama’s own race and his personal feelings shouldn’t come up at all.

Fourth, by this definition, it becomes impossible to assess the racism of an action without knowing all its consequences. Suppose the KKK holds a march through some black neighborhood to terrorize the residents. But in fact the counterprotesters outnumber the marchers ten to one, and people are actually reassured that the community supports them. The march is widely covered by various news organizations, and outrages people around the nation, who donate a lot of money to anti-racist organizations and push for stronger laws against the KKK. Plausibly, the net consequences of the march were (unintentionally) very good for black people and damaging to white supremacy. Therefore, by the Sophisticated Definition, the KKK marching through the neighborhood to terrorize black residents was not racist. In fact, for the KKK not to march in this situation would be racist!

So Definition By Consequences implies that racism can never be pointed to as a cause of anything, that racist policies can often be good, that nobody “is a racist” or “isn’t a racist”, and that sometimes the KKK trying to terrorize black people is less racist than them not trying to do this. Not only have I never heard anyone try to grapple with these implications, I see no sign anyone has ever thought of them. And now that I’ve brought them up, I don’t think anyone will accept them as true, or even worry about the discrepancy.

I think this is probably because it’s a motte-and-bailey, more something that gets trotted out to win arguments than anything people actually use in real life.

by Scott Alexander, Slate Star Codex |  Read more:

Is the Staggeringly Profitable Business of Scientific Publishing Bad For Science?

In 2011, Claudio Aspesi, a senior investment analyst at Bernstein Research in London, made a bet that the dominant firm in one of the most lucrative industries in the world was headed for a crash. Reed-Elsevier, a multinational publishing giant with annual revenues exceeding £6bn, was an investor’s darling. It was one of the few publishers that had successfully managed the transition to the internet, and a recent company report was predicting yet another year of growth. Aspesi, though, had reason to believe that that prediction – along with those of every other major financial analyst – was wrong.

The core of Elsevier’s operation is in scientific journals, the weekly or monthly publications in which scientists share their results. Despite the narrow audience, scientific publishing is a remarkably big business. With total global revenues of more than £19bn, it weighs in somewhere between the recording and the film industries in size, but it is far more profitable. In 2010, Elsevier’s scientific publishing arm reported profits of £724m on just over £2bn in revenue. It was a 36% margin – higher than Apple, Google, or Amazon posted that year.

But Elsevier’s business model seemed a truly puzzling thing. In order to make money, a traditional publisher – say, a magazine – first has to cover a multitude of costs: it pays writers for the articles; it employs editors to commission, shape and check the articles; and it pays to distribute the finished product to subscribers and retailers. All of this is expensive, and successful magazines typically make profits of around 12-15%.

The way to make money from a scientific article looks very similar, except that scientific publishers manage to duck most of the actual costs. Scientists create work under their own direction – funded largely by governments – and give it to publishers for free; the publisher pays scientific editors who judge whether the work is worth publishing and check its grammar, but the bulk of the editorial burden – checking the scientific validity and evaluating the experiments, a process known as peer review – is done by working scientists on a volunteer basis. The publishers then sell the product back to government-funded institutional and university libraries, to be read by scientists – who, in a collective sense, created the product in the first place.

It is as if the New Yorker or the Economist demanded that journalists write and edit each other’s work for free, and asked the government to foot the bill. Outside observers tend to fall into a sort of stunned disbelief when describing this setup. A 2004 parliamentary science and technology committee report on the industry drily observed that “in a traditional market suppliers are paid for the goods they provide”. A 2005 Deutsche Bank report referred to it as a “bizarre” “triple-pay” system, in which “the state funds most research, pays the salaries of most of those checking the quality of research, and then buys most of the published product”.

Scientists are well aware that they seem to be getting a bad deal. The publishing business is “perverse and needless”, the Berkeley biologist Michael Eisen wrote in a 2003 article for the Guardian, declaring that it “should be a public scandal”. Adrian Sutton, a physicist at Imperial College, told me that scientists “are all slaves to publishers. What other industry receives its raw materials from its customers, gets those same customers to carry out the quality control of those materials, and then sells the same materials back to the customers at a vastly inflated price?” (A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”.)

Many scientists also believe that the publishing industry exerts too much influence over what scientists choose to study, which is ultimately bad for science itself. Journals prize new and spectacular results – after all, they are in the business of selling subscriptions – and scientists, knowing exactly what kind of work gets published, align their submissions accordingly. This produces a steady stream of papers, the importance of which is immediately apparent. But it also means that scientists do not have an accurate map of their field of inquiry. Researchers may end up inadvertently exploring dead ends that their fellow scientists have already run up against, solely because the information about previous failures has never been given space in the pages of the relevant scientific publications. A 2013 study, for example, reported that half of all clinical trials in the US are never published in a journal.

According to critics, the journal system actually holds back scientific progress. In a 2008 essay, Dr Neal Young of the National Institutes of Health (NIH), which funds and conducts medical research for the US government, argued that, given the importance of scientific innovation to society, “there is a moral imperative to reconsider how scientific data are judged and disseminated”.

Aspesi, after talking to a network of more than 25 prominent scientists and activists, had come to believe the tide was about to turn against the industry that Elsevier led. More and more research libraries, which purchase journals for universities, were claiming that their budgets were exhausted by decades of price increases, and were threatening to cancel their multi-million-pound subscription packages unless Elsevier dropped its prices. State organisations such as the American NIH and the German Research Foundation (DFG) had recently committed to making their research available through free online journals, and Aspesi believed that governments might step in and ensure that all publicly funded research would be available for free, to anyone. Elsevier and its competitors would be caught in a perfect storm, with their customers revolting from below, and government regulation looming above.

In March 2011, Aspesi published a report recommending that his clients sell Elsevier stock. A few months later, in a conference call between Elsevier management and investment firms, he pressed the CEO of Elsevier, Erik Engstrom, about the deteriorating relationship with the libraries. He asked what was wrong with the business if “your customers are so desperate”. Engstrom dodged the question. Over the next two weeks, Elsevier stock tumbled by more than 20%, losing £1bn in value. The problems Aspesi saw were deep and structural, and he believed they would play out over the next half-decade – but things already seemed to be moving in the direction he had predicted.

Over the next year, however, most libraries backed down and committed to Elsevier’s contracts, and governments largely failed to push an alternative model for disseminating research. In 2012 and 2013, Elsevier posted profit margins of more than 40%. The following year, Aspesi reversed his recommendation to sell. “He listened to us too closely, and he got a bit burned,” David Prosser, the head of Research Libraries UK, and a prominent voice for reforming the publishing industry, told me recently. Elsevier was here to stay.

Aspesi was not the first person to incorrectly predict the end of the scientific publishing boom, and he is unlikely to be the last. It is hard to believe that what is essentially a for-profit oligopoly functioning within an otherwise heavily regulated, government-funded enterprise can avoid extinction in the long run. But publishing has been deeply enmeshed in the science profession for decades. Today, every scientist knows that their career depends on being published, and professional success is especially determined by getting work into the most prestigious journals. The long, slow, nearly directionless work pursued by some of the most influential scientists of the 20th century is no longer a viable career option. Under today’s system, the father of genetic sequencing, Fred Sanger, who published very little in the two decades between his 1958 and 1980 Nobel prizes, may well have found himself out of a job.

Even scientists who are fighting for reform are often not aware of the roots of the system: how, in the boom years after the second world war, entrepreneurs built fortunes by taking publishing out of the hands of scientists and expanding the business on a previously unimaginable scale. And no one was more transformative and ingenious than Robert Maxwell, who turned scientific journals into a spectacular money-making machine that bankrolled his rise in British society. Maxwell would go on to become an MP, a press baron who challenged Rupert Murdoch, and one of the most notorious figures in British life. But his true importance was far larger than most of us realise. Improbable as it might sound, few people in the last century have done more to shape the way science is conducted today than Maxwell.

by Stephen Buranyi, The Guardian |  Read more:
Image: Dom McKenzie

Monday, June 26, 2017

A Utopia for a Dystopian Age

The term “utopia” was coined 500 years ago. By conjoining the Greek adverb “ou” (“not”) and the noun “topos” (“place”) the English humanist and politician Thomas More conceived of a place that is not — literally a “nowhere” or “noplace.” More’s learned readers would also have recognized another pun. The pronunciation of “utopia” can just as well be associated with “eu-topia,” which in Greek means “happy place.” Happiness, More might have suggested, is something we can only imagine. And yet imagining it, as philosophers, artists and politicians have done ever since, is far from pointless.

More was no doubt a joker. “Utopia,” his fictional travelogue about an island of plenty and equality, is told by a character whose name, Hythloday, yet another playful conjoining of Greek words, signifies something like “nonsense peddler.” Although More comes across as being quite fond of his noplace, he occasionally interrupts the narrative by warning against the islanders’ rejection of private property. Living under the reign of the autocratic Henry VIII, and being a prominent social figure, More might not have wanted to rock the boat too much.

Precisely that — rocking the boat — has, however, been the underlying aim of the great utopias that have shaped Western culture. It has animated and informed progressive thinking, providing direction and a sense of purpose to struggles for social change and emancipation. From the vantage point of the utopian imagination, history — that gushing river of seemingly contingent micro-events — has taken on meaning, becoming a steadfast movement toward the sought-for condition supposedly able to justify all previous striving and suffering.

Utopianism can be dreamy in a John Lennon “Imagine”-esque way. Yet it has also been ready to intervene and bring about concrete transformation.

Utopias come in different forms. Utopias of desire, as in Hieronymus Bosch’s painting “The Garden of Earthly Delights,” focus on happiness, tying it to the satisfaction of needs. Such utopias, demanding the complete alleviation of pain and sometimes glorious spaces of enjoyment and pleasure, tend, at least in modern times, to rely on technology. The utopias of technology see social, bodily and environmental ills as requiring technological solutions. We know such solutions all too well: ambitious city-planning schemes and robotics as well as dreams of cosmic expansion and immortality. (...)

Today, the utopian impulse seems almost extinguished. The utopias of desire make little sense in a world overrun by cheap entertainment, unbridled consumerism and narcissistic behavior. The utopias of technology are less impressive than ever now that — after Hiroshima and Chernobyl — we are fully aware of the destructive potential of technology. Even the internet, perhaps the most recent candidate for technological optimism, turns out to have a number of potentially disastrous consequences, among them a widespread disregard for truth and objectivity, as well as an immense increase in the capacity for surveillance. The utopias of justice seem largely to have been eviscerated by 20th-century totalitarianism. After the Gulag Archipelago, the Khmer Rouge’s killing fields and the Cultural Revolution, these utopias seem both philosophically and politically dead.

The great irony of all forms of utopianism can hardly escape us. They say one thing, but when we attempt to realize them they seem to imply something entirely different. Their demand for perfection in all things human is often pitched at such a high level that they come across as aggressive and ultimately destructive. Their rejection of the past, and of established practice, is subject to its own logic of brutality.

And not only has the utopian imagination been stung by its own failures, it has also had to face up to the two fundamental dystopias of our time: those of ecological collapse and thermonuclear warfare. The utopian imagination thrives on challenges. Yet these are not challenges but chillingly realistic scenarios of utter destruction and the eventual elimination of the human species. Add to that the profoundly anti-utopian nature of the right-wing movements that have sprung up in the United States and Europe and the prospects for any kind of meaningful utopianism may seem bleak indeed. In matters social and political, we seem doomed if not to cynicism, then at least to a certain coolheadedness.

Anti-utopianism may, as in much recent liberalism, call for controlled, incremental change. The main task of government, Barack Obama ended up saying, is to avoid doing stupid stuff. However, anti-utopianism may also become atavistic and beckon us to return, regardless of any cost, to an idealized past. In such cases, the utopian narrative gets replaced by myth. And while the utopian narrative is universalistic and future-oriented, myth is particularistic and backward-looking. Myths purport to tell the story of us, our origin and of what it is that truly matters for us. Exclusion is part of their nature.

Can utopianism be rescued? Should it be? To many people the answer to both questions is a resounding no.

There are reasons, however, to think that a fully modern society cannot do without a utopian consciousness. To be modern is to be oriented toward the future. It is to be open to change, even radical change, when called for. With its willingness to ride roughshod over all established certainties and ways of life, classical utopianism was too grandiose, too rationalist and ultimately too cold. We need the ability to look beyond the present. But we also need More’s insistence on playfulness. Once utopias are embodied in ideologies, they become dangerous and even deadly.

by Espen Hammer, NY Times | Read more:
Image: Hieronymus Bosch

A Cyberattack ‘the World Isn’t Ready For’

There have been times over the last two months when Golan Ben-Oni has felt like a voice in the wilderness.

On April 29, someone hit his employer, IDT Corporation, with two cyberweapons that had been stolen from the National Security Agency. Mr. Ben-Oni, the global chief information officer at IDT, was able to fend them off, but the attack left him distraught.

In 22 years of dealing with hackers of every sort, he had never seen anything like it. Who was behind it? How did they evade all of his defenses? How many others had been attacked but did not know it?

Since then, Mr. Ben-Oni has been sounding alarm bells, calling anyone who will listen at the White House, the Federal Bureau of Investigation, the New Jersey attorney general’s office and the top cybersecurity companies in the country to warn them about an attack that may still be invisibly striking victims undetected around the world. (...)

Two weeks after IDT was hit, the cyberattack known as WannaCry ravaged computers at hospitals in England, universities in China, rail systems in Germany, even auto plants in Japan. No doubt it was destructive. But what Mr. Ben-Oni had witnessed was much worse, and with all eyes on the WannaCry destruction, few seemed to be paying attention to the attack on IDT’s systems — and most likely others around the world.

The strike on IDT, a conglomerate with headquarters in a nondescript gray building here with views of the Manhattan skyline 15 miles away, was similar to WannaCry in one way: Hackers locked up IDT data and demanded a ransom to unlock it.

But the ransom demand was just a smoke screen for a far more invasive attack that stole employee credentials. With those credentials in hand, hackers could have run free through the company’s computer network, taking confidential information or destroying machines.

Worse, the assault, which has never been reported before, was not spotted by some of the nation’s leading cybersecurity products, the top security engineers at its biggest tech companies, government intelligence analysts or the F.B.I., which remains consumed with the WannaCry attack.

Were it not for a digital black box that recorded everything on IDT’s network, along with Mr. Ben-Oni’s tenacity, the attack might have gone unnoticed.

Scans for the two hacking tools used against IDT indicate that the company is not alone. In fact, tens of thousands of computer systems all over the world have been “backdoored” by the same N.S.A. weapons. Mr. Ben-Oni and other security researchers worry that many of those other infected computers are connected to transportation networks, hospitals, water treatment plants and other utilities. (...)

The WannaCry attack — which the N.S.A. and security researchers have tied to North Korea — employed one N.S.A. cyberweapon; the IDT assault used two.

Both WannaCry and the IDT attack used a hacking tool the agency had code-named EternalBlue. The tool took advantage of unpatched Microsoft servers to automatically spread malware from one server to another, so that within 24 hours North Korea’s hackers had spread their ransomware to more than 200,000 servers around the globe.

The attack on IDT went a step further with another stolen N.S.A. cyberweapon, called DoublePulsar. The N.S.A. used DoublePulsar to penetrate computer systems without tripping security alarms. It allowed N.S.A. spies to inject their tools into the nerve center of a target’s computer system, called the kernel, which manages communications between a computer’s hardware and its software.

In the pecking order of a computer system, the kernel is at the very top, allowing anyone with secret access to it to take full control of a machine. It is also a dangerous blind spot for most security software, allowing attackers to do what they want and go unnoticed. In IDT’s case, attackers used DoublePulsar to steal an IDT contractor’s credentials. Then they deployed ransomware in what appears to be a cover for their real motive: broader access to IDT’s businesses.

by Nicole Perlroth, NY Times |  Read more:
Image: Patrick Semansky/AP

Sunday, June 25, 2017

Did the Fun Work?

If anything can make enchantment terse, it is the German compound noun. Through the bluntest lexical conglomeration, these words capture concepts so ineffable that they would otherwise float away. Take the Austrian art historian Alois Riegl’s term, Kunstwollen—Kunst (art) + wollen (will), or “will to art”—later defined by Erwin Panofsky as “the sum or unity of creative powers manifested in any given artistic phenomenon.” (Panofsky then appended to this mouthful a footnote parsing precisely what he meant by “artistic phenomenon.”) A particular favorite compound of mine is Kurort, literally “cure-place,” but better translated as “spa town” or “health resort.” There’s an elegiac romance to Kurort that brings to mind images of parasols and gouty gentlemen taking the waters, the world of Thomas Mann’s Magic Mountain. Nevertheless, Kurort’s cocktail of connotations—mixing leisure, self-improvement, health, physical pleasure, relaxation, gentility, and moral rectitude—remains as fresh as ever. Yoga retreats and team-building ropes courses may have all but replaced mineral baths, but wellness vacations and medical tourism are still big business.

What continues to fuel this industry (by now a heritage one) is the durable belief that leisure ought to achieve something—a firmer bottom, new kitchen abilities, triumph over depression. In fact, why not go for the sublime leisure-success trifecta: physical, practical, and spiritual? One vacation currently offered in Sri Lanka features cycling, a tea tutorial, and a visit to a Buddhist temple, a package that promises to be active (but not draining), educational (but not tedious), and fun (but not dissolute). The “Experiences” section of Airbnb advertises all kinds of self- and life-improving activities, including a Korean food course, elementary corsetry, and even a microfinance workshop. (...)

Leisure, it turns out, requires measurement and evaluation. First of all, our irksome question remains: When partaking of leisure, how can you know that you aren’t slipping into idleness? Second, because leisure is a deserved reward, it should be fun, amusing, diverting, or otherwise pleasurable. This requirement begets another set of questions, perhaps even more existential in scope: How do leisure seekers even know whether they’re enjoying themselves, and if they are, whether the enjoyment . . . worked? Was the restoration sufficient? The self improved? The fun had?

These questions are most easily, if superficially, answered via the medley of social media platforms and portable devices bestowed on us by the wonders of consumer-product-driven innovation. Fitbit points, likes, and heart-eyed emoji faces have become the units of measurement by which we evaluate our own experiences. These tokens offer reassurance that our time is being optimally spent; they represent our leisure accomplishments. Social media and camera-equipped portable devices have given us the opportunity to solicit positive feedback from our friends, and indeed from the world at large, nonstop. Even when we are otherwise occupied or asleep, our photos and posts beam out, ever ready to scoop up likes and faves. Yet under the guise of fun and “connection,” we are simply extending the Taylorist drive to document, measure, and analyze into the realm of leisure. Thinkers from Frank Lloyd Wright to John Maynard Keynes once predicted that technology would free us from toil, but as we all know, the devices it has yielded have only ended up increasing workloads. They have also taken command of leisure, yoking it to the constant labor of self-branding required under neoliberal capitalism, and making us complicit in our own surveillance to boot.

Not that there’s anything inherently wrong or self-exploitative about showing off your newly acquired basket-weaving skills on Instagram—and anyway, the line between leisure and labor is not always clearly drawn. From gardening to tweeting, labor often overlaps with pleasure and entertainment under certain conditions. But the fact that the platforms on which we document, communicate, and measure our leisure are owned by massive for-profit corporations that trade upon our freely given content ought to make us wonder not only what, exactly, they might be getting out of all this activity, but also how it frames our own ideas of what leisure is. If the satisfaction of posting on social media derives from garnering likes in the so-called attention economy, then posters will, according to a crude market logic, select what they believe to be the most “likeable” content for posting, and furthermore, will often alter their behavior to generate precisely that content. The mirror of social media metrics offers to show us whether we enjoyed ourselves, but just as with mirrors, we have to work to get back the reflection we want to see.

So Many Feels


The cult of productivity is a greedy thing; it sucks up both the time we spend in leisure and the very feelings it stirs in us. Happiness and other pleasant sensations must themselves become productive, which is why we talk of leisure being “restorative” or “rejuvenating.” Through coffee breaks and shorter workweeks, employers from municipal governments to investment banks are encouraging their workers to take time off, all under the guise of benevolent care. But these schemes are ultimately aimed at maximizing productivity and quelling discontent (and besides, employers maintain the power to retract these privileges at their own whims). Work depletes us emotionally, physically, and intellectually, and that is why we are entitled to periods of leisure—not because leisure is a human right or good in and of itself, but because it enables us to climb back onto the hamster wheel of marketplace activity in good cheer.

As neoliberalism reduces happiness to its uses, it steers our interests toward confirming our own feelings via external assessment. This assessment just so happens to require apparatuses (smartphones, laptops, Apple watches) and measurement units (faves, shares, star ratings) that turn us into eager buyers of consumer products and require our willing submission to corporate surveillance. None of which means that your Airbnb truffle-hunting experience—as well as subsequently posting about it and basking in the likes—didn’t make you happy. It simply means that the events and behavior that brought about this happiness coincide with the profit motives of a vast network of institutions that extends far beyond any one individual.

So they want us to buy their stuff and hand over our data. Fine. But why do they demand that we be so insistently, outwardly happy?

by Miya Tokumitsu, The Baffler | Read more:
Image: via

Tuesday, June 20, 2017

Administration


[ed. Hi Duck Soup readers. I'm moving, so posts will be sporadic for a while (if non-existent). Please come back in a week or two, or check out the archives (lots of great stuff there). I'll see you soon.] 

Monday, June 19, 2017

Bears and Moose Are Par for the Course

Golfers Gary Cox and Devery Prince became an unexpected trio recently when a black bear joined their morning game at hole No. 8 on Moose Run's Creek Course in an encounter Cox caught on video.

"You can see we're jumping around reaching, grabbing clubs to make sure we had something to defend ourselves," Cox said. "You know, he wasn't really aggressive, but he wasn't afraid of us at all."

The bear stood up, using the pin for support as it swatted at the flag on top. It soon gave up, then found the pair’s carry bags more appealing.

Cox and Prince yelled and growled at the bear. Cox threw a golf ball at a bag to startle the bear, which strolled into the woods with Prince's coffee container in its mouth.

"If he decided he was going to have some adolescent adrenaline rush or something, you know he could," Cox said. "… (I)t was a little unnerving. Not that I think he was acting aggressive in that way, but you just don't know what they're going to do."

Cox and Prince's encounter isn't the only wildlife sighting on a course this summer. Both Moose Run's Hill Course and Anchorage Golf Course have been home to bigger challenges than sand traps and water hazards.

"Animal sightings are pretty much a daily occurrence out there at Moose Run," said Moose Run general manager Don Kramer. "… Most of the time the golfers are the ones that call them in and we have somebody on each course riding around … and if there's a bear around we try and shoo them off with the carts." (...)

Fish and Game officials advise caution in a bear encounter.

"Basically if you have a bear come into your golf cart, you shouldn't be pulling out your 7-iron or going after it," said Fish and Game spokesman Ken Marsh.

Because bear encounters aren't unusual, staff at Anchorage Golf Course carry air horns and staff at Moose Run carry bear spray. Pins at those courses are removed at night because moose and bears like to play with them.

Most Alaska golfers know that wildlife is par for the course.

"The golfers out there are fairly local and they understand that," Kramer said. "It's Alaska, it happens all over."

by Chris Lawrence, Alaska Dispatch | Read more:
Image: Marc Lester
[ed. I used to golf and work at both of these courses and bear/moose sightings were a daily occurrence. As a marshal I'd gently herd animals away, or at least get between them and the players with my cart. Most of the time there weren't any problems, but every once in a while... See also: Black bear kills teen runner during trail race near Anchorage]

How Did Health Care Get to Be Such a Mess?

The problem with American health care is not the care. It’s the insurance.

Both parties have stumbled to enact comprehensive health care reform because they insist on patching up a rickety, malfunctioning model. The insurance company model drives up prices and fragments care. Rather than rejecting this jerry-built structure, the Democrats’ Obamacare legislation simply added a cracked support beam or two. The Republican bill will knock those out to focus on spackling other dilapidated parts of the system.

An alternative structure can be found in the early decades of the 20th century, when the medical marketplace offered a variety of models. Unions, businesses, consumer cooperatives and ethnic and African-American mutual aid societies had diverse ways of organizing and paying for medical care.

Physicians established a particularly elegant model: the prepaid doctor group. Unlike today’s physician practices, these groups usually staffed a variety of specialists, including general practitioners, surgeons and obstetricians. Patients received integrated care in one location, with group physicians from across specialties meeting regularly to review treatment options for their chronically ill or hard-to-treat patients.

Individuals and families paid a monthly fee, not to an insurance company but directly to the physician group. This system held down costs. Physicians typically earned a base salary plus a percentage of the group’s quarterly profits, so they lacked incentive to either ration care, which would lose them paying patients, or provide unnecessary care.

This contrasts with current examples of such financing arrangements. Where physicians earn a preset salary — for example, in Kaiser Permanente plans or in the British National Health Service — patients frequently complain about rationed or delayed care. When physicians are paid on a fee-for-service basis, for every service or procedure they provide — as they are under the insurance company model — then care is oversupplied. In these systems, costs escalate quickly.

Unfortunately, the leaders of the American Medical Association saw early health care models — union welfare funds, prepaid physician groups — as a threat. A.M.A. members sat on state licensing boards, so they could revoke the licenses of physicians who joined these “alternative” plans. A.M.A. officials likewise saw to it that recalcitrant physicians had their hospital admitting privileges rescinded.

The A.M.A. was also busy working to prevent government intervention in the medical field. Persistent federal efforts to reform health care began during the 1930s. After World War II, President Harry Truman proposed a universal health care system, and archival evidence suggests that policy makers hoped to build the program around prepaid physician groups.

A.M.A. officials decided that the best way to keep the government out of their industry was to design a private sector model: the insurance company model.

In this system, insurance companies would pay physicians using fee-for-service compensation. Insurers would pay for services even though they lacked the ability to control their supply. Moreover, the A.M.A. forbade insurers from supervising physician work and from financing multispecialty practices, which they feared might develop into medical corporations.

With the insurance company model, the A.M.A. could fight off Truman’s plan for universal care and, over the next decade, oppose more moderate reforms offered during the Eisenhower years.

Through each legislative battle, physicians and their new allies, insurers, argued that federal health care funding was unnecessary because they were expanding insurance coverage. Indeed, because of the perceived threat of reform, insurers weathered rapidly rising medical costs and unfavorable financial conditions to expand coverage from about a quarter of the population in 1945 to about 80 percent in 1965.

But private interests failed to cover a sufficient number of the elderly. Consequently, Congress stepped in to create Medicare in 1965. The private health care sector had far more capacity to manage a large, complex program than did the government, so Medicare was designed around the insurance company model. Insurers, moreover, were tasked with helping administer the program, acting as intermediaries between the government and service providers.

With Medicare, the demand for health services increased and medical costs became a national crisis. To constrain rising prices, insurers gradually introduced cost containment procedures and incrementally claimed supervisory authority over doctors. Soon they were reviewing their medical work, standardizing treatment blueprints tied to reimbursements and shaping the practice of medicine.

by Christy Ford Chapin, NY Times |  Read more:
Image: Tim Lahan

Sunday, June 18, 2017

The Sharing Depot

Anyone living in the cramped confines of a city apartment knows the pain of not quite having enough space. There’s nowhere to put that Ping-Pong table you’ve always wanted. Your bike is hanging on the wall, and you’ve already stepped on your kid’s Legos twice this week. Storage is expensive. Every new possession, hobby, and project costs not just money, but precious square footage.

The Sharing Depot, Toronto’s first library of things, helps space-starved urbanites cut costs and clutter without giving up access to the stuff they love. A sort of Zipcar for the little things, the Sharing Depot, which opened earlier this year, lets members borrow items like camping gear, sports equipment, toys, and garden tools. Members pay between $50 and $100 Canadian annually; the higher the level of membership, the longer you may keep an item.

When I reach Sharing Depot cofounder Ryan Dyment on the phone, the storefront is busy and loud. Patrons can browse an extensive inventory online or search the Depot’s crowded shelves in person. Skill workshops and swap meets keep sharers engaged, and a volunteer program provides free membership in exchange for working a few shifts per month. “You meet a lot of interesting people,” Dyment says earnestly of the Depot’s growing community.

Before they started up the project, Dyment and cofounder Lawrence Alvarez, a community activist, polled local Torontonians to find out what items people needed occasionally but didn’t have room for at home or found too expensive to buy outright. “The most popular were camping gear, toys, party supplies, those kinds of things,” Dyment says. “So we said ‘OK, let’s do a crowdfund, and see if people want to put their money where their mouth is.’”

The project’s IndieGoGo campaign raised more than $30,000, allowing the Sharing Depot to build out its storefront space, pay a few months’ rent, and acquire some basic inventory. Dyment says the Depot is currently dependent on grants for about 20 percent of its funding, but he hopes membership fees can sustain the operation going forward. Now that they’re set up, most of the new items come in via donation. “We have a wish list from people who come in and request things,” Dyment says. “So for example, we didn’t really have great sewing machines, but it was requested, so we did that.”

Dyment recently heard from someone calling to ask if they had a paper shredder.

“I was like, ‘Yeah, you know, actually that’s a good request, I don’t have one,’” Dyment tells me. “And then literally within the hour of hanging up on this person and disappointing them, someone came in and dropped off two paper shredders.”

While loaning out chop saws, folding chairs, and chocolate fondue fountains might not sound like a very direct way to change the world, the ethos of the Sharing Depot taps into much deeper economic and environmental issues for Dyment. The former accountant walked away from a finance career for more meaningful pursuits in 2009, after he began to question the sustainability of modern economies and monetary systems. “Generally the largest impact that an individual makes is through the products they consume,” he says. “We have to find a way to consume less stuff if we care about the environment, if we want to live here for many generations.”

The Sharing Depot has its roots in the Institute for a Resource-Based Economy, a nonprofit started in 2011 to “promote the sharing economy and provide a transition solution to our planetary crises.” Dyment and Alvarez are among the group’s founders. The institute held workshops on “debt-based currency and disruptive technologies,” hosted educational talks, and eventually settled into its biggest project: the Toronto Tool Library.

It was a great concept—a single home repair project could require 15 different tools, but here you could borrow the gear you needed to put up those shelves or retile your bathroom, and then just return it all when you were done. (...)

Some Sharing Depot members, Dyment tells me, “just have way too much stuff and are doing their best to downsize if they can.” But others, he says, are post-recession, new economy kids who grew up with abundance and now have less than their parents did. “They live in these much smaller spaces, but they still want access to some of this stuff and they don’t know another way,” he says.

by Jed Oelbaum, Make Change | Read more:
Image: Sharing Depot

Valerius De Saedeleer, A winter landscape, 1927
via:

Open and Destroy

Saturday, June 17, 2017

The Last Picture Show

Last Sunday, I was checking the listings, looking for something to cover for tonight’s weekly film review (preferably something/anything that didn’t involve aliens, comic book characters, or pirates), and was intrigued by Sofia Coppola’s remake of The Beguiled. Being a lazy bastard, I was happy to discover that the exclusive Seattle booking was at my neighborhood theater (the Guild 45th!), which is only a three-block walk from my apartment.

Imagine my surprise when I went to their website for show times and was greeted by this message: “The Seven Gables and Guild 45th Theaters have closed. Please stay tuned for further details on our renovation plans for each location. During the down time, we look forward to serving you at the Crest Cinema Center.” The Crest (now Landmark’s sole local venue open for business) is another great neighborhood theater, programmed with first-run films on their final stop before leaving Seattle (and at $4 for all shows, a hell of a deal). But for how long, I wonder?

It’s weird, because I drive past the Guild daily on my way to work, and I had noticed that the marquees were blank one morning last week. I didn’t attach much significance to it at the time; while it seemed a bit odd, I just assumed they were in the process of putting up new film titles. Also, I’ve been receiving weekly updates from the Landmark Theaters Seattle publicist for years; last week’s email indicated business as usual (advising me on upcoming bookings, available press screeners, etc.), and there was absolutely no hint that this bomb was about to drop.

Where was the “ka-boom”?! There was supposed to be an Earth-shattering “ka-boom”. Oh, well.

It would appear that the very concept of a “neighborhood theater” is quickly becoming an anachronism, and that makes me feel sad, somehow. Granted, not unlike many such “vintage” venues, the Guild had seen better days from an aesthetic viewpoint; the floors were sticky, the seats less than comfortable, and the auditorium smelled like 1953...but goddammit, it was “my” neighborhood theater, it’s ours because we found it, and now we wants it back (it’s my Precious).

My gut tells me the Guild isn’t being “renovated”, but rather headed for the fires of Mount Doom; and I suspect the culprit isn’t so much Netflix as it is Google and Amazon. You may be shocked, shocked to learn that Seattle is experiencing a huge tech boom. Consequently, the housing market (including rentals) is tighter than I’ve ever seen it in the 25 years I’ve lived here. The creeping signs of over-gentrification (which I first started noticing in 2015) are now reaching critical mass. Seattle’s once-distinctive neighborhoods are quickly losing their character, and mine (Wallingford) is the latest target on the urban village “up-zoning” hit list. Anti-density groups are rallying, but I see the closure of our 100-year-old theater as a harbinger of ticky-tacky big boxes.

Some of my fondest memories of the movie-going experience involve neighborhood theaters, particularly during a 2½-year period of my life (1979-1981) when I was living in San Francisco. But I need to back up for a moment. I had moved to the Bay Area from Fairbanks, Alaska, which was not the ideal environment for a movie buff. When I left Fairbanks, there were only two single-screen movie theaters in town. To add insult to injury, we were usually several months behind the Lower 48 on first-run features (it took us nearly a year to even get Star Wars).

Keep in mind, there was no cable service in the market, and VCRs were still a few years down the road. There were occasional midnight movie screenings at the University of Alaska, and the odd B-movie gem on late-night TV (which we had to watch in real time, with 500 commercials to suffer through)...but that was it. Sometimes, I’d gather up a coterie of my culture-vulture pals for the 260-mile drive to Anchorage, where there were more theaters for us to dip our beaks into.

Due to the lack of venues, I was reading more about movies than actually watching them. I remember poring over back issues of The New Yorker at the public library, soaking up Penelope Gilliatt and Pauline Kael; but it seemed requisite to live in NYC (or L.A.) to catch all of these cool arthouse and foreign movies they were raving about (most of those films just didn’t make it up to the frozen tundra). And so it was that I “missed” a lot of ’70s cinema.

Needless to say, when I moved to San Francisco, which had a plethora of fabulous neighborhood theaters in 1979, I quickly set about making up the deficit. While I had a lot of favorite haunts (The Surf, The Balboa, The Castro, and the Red Victorian loom large in my memory), there were two venues in particular where I spent an unhealthy amount of time: The Roxie and The Strand.

That’s because they were “repertory” houses, meaning they played older films (frequently double and triple bills, usually curated by some kind of theme). Those 2½ years I spent in the dark were my film school; that’s how I got caught up with Stanley Kubrick, Martin Scorsese, Robert Altman, Hal Ashby, Terrence Malick, Woody Allen, Sidney Lumet, Peter Bogdanovich, Werner Herzog, Ken Russell, Lindsay Anderson, Wim Wenders, Michael Ritchie, Brian De Palma, etc.

Of course, in 2017 any dweeb with an internet connection can catch up on the history of world cinema without leaving the house...which explains (in part) why these smaller movie houses are dying. But those dweebs will never know the sights, the sounds (the smells) of a cozy neighborhood dream palace. Everybody should experience the magic at least once. C’mon, I’ll save you the aisle seat.

by Dennis Hartley, Hullabaloo |  Read more:
Image: Dennis Hartley

Flight of the Conchords



The 15 People You Meet in a Golf Gallery

Before the 2014 PGA Tour season, I’d covered exactly five golf tournaments in my life. It was still a nice novelty, and I had the luxury of focusing solely on the players. This year, I’ve been to 17 tournaments, and when you’re constantly wandering from hole to hole hoping for something dramatic to happen, the mind tends to wander. At some point, I found myself focusing on the gallery.

Let me be the first to tell you: Golf fans are weird. They’re by turns boring, crazed, needy, angry, pathetic, and excessively polite. (When I told a fellow journalist I’d be writing about the different types of golf fans, his immediate reaction was, “They’re all wankers.” I don’t endorse this view … not fully.) So, here are the 15 kinds of people you meet in a golf gallery. I’m leaving off kids, since they have license to behave in ways that are far sadder (and therefore funnier) when observed in adults.

1. THE “BABA BOOEY!” TEE-BOX SCREAMER

Just typing those words made me angry. Howard Stern saddled his producer Gary Dell’Abate with the nickname “Baba Booey” in 1990 because of some confusion about a cartoon called Quick Draw McGraw, stemming from his hobby of — look, it doesn’t matter. They gave him the nickname for some reason, and Stern’s fans seized on “Baba Booey” as a rallying cry for all manner of high jinks, but mostly prank phone calls. Somehow it made its way into golf as the hilarious thing you yell after a tee shot, and it may have been kinda funny for a while back in 1990. In the ensuing 24 years, however, it has become the sport’s vuvuzela, shouted incessantly week after week until you want to blindly attack the entire gallery in the hopes of getting lucky and wounding the culprit. The really heartbreaking thing about the “Baba Booey!” cry, though, is that someone always laughs, adding a bit of positive reinforcement to ensure our nightmare will continue. (See also: “MASHED POTATOES!” and “YOU DA MAN!”)

2. THE KISS-ASS


I was in Memphis last week for the St. Jude Classic, and there was a guy who followed Davis Love III around from hole to hole saying, “Thanks for coming to Memphis, Davis!” in the most obsequious way possible. Love responded graciously, since he’s legitimately one of the nicest golfers around, but my god, I hated that kiss-ass. I wanted to grab him and yell, “You know he’s making money for this, right? You know he’s super rich and will become richer after this weekend?” It wouldn’t matter, because the kiss-ass was all about his own gratification. For him, that slight nod from Love was all the confirmation he needed that he was a special fan among the ungrateful rubes. I hope he fell in a pond.

3. THE GROWN MAN DRESSED AS RICKIE FOWLER


It’s really, really depressing how often this happens. And how they wait against the ropes, having pushed children aside, a sad, hopeful smile on their faces as Rickie walks by. As though he’s going to stop, turn in surprise, begin laughing hysterically at the clever idea, and invite them to hang out for the rest of the round. It’s always so satisfying when he ignores them completely.

4. THE PREMATURE “OHHH, GREAT SHOT!” GUY

This guy usually turns up at the green when a player is about to hit out of a bunker. The second the ball comes out of the sand, “Great Shot!” guy shouts, “Ohhh, great shot!” He has no idea where the ball is going. It could stay in the fringe, roll over the green into the water, or cause a bird to explode in mid-flight. Doesn’t matter. To him, the shot was stellar the moment blade struck ball.

5. THE OVER-INVESTED FAN

A cousin of “Ohhh, Great Shot!” guy. This is the person who seems like he or she has $10,000 invested in a shot, because they can’t stop screaming encouragement at the ball, regardless of who hit it. “BREAK! OH, COME ON, BREAK, BALL! OH GOD, JUST BREAK! NOOOO!” The instinct is to tap them gently on the shoulder and say, “Hey, you see that golfer out there? Just to be totally clear … you know he’s not you, right?”

by Shane Ryan, Grantland |  Read more:
Image: Erin Hills, 2017 U.S. Open
[ed. I was watching the U.S. Open today and have a special distaste for fans in category #1. Some asshole (and it’s always a guy) screaming “In Da Hole!” (or some similarly inane verbal fart) after every shot. Makes you want to reach through the TV and strangle them. See the post following this one.]

The Five Universal Laws of Human Stupidity

In 1976, a professor of economic history at the University of California, Berkeley, published an essay outlining the fundamental laws of a force he perceived as humanity’s greatest existential threat: stupidity.

Stupid people, Carlo M. Cipolla explained, share several identifying traits: they are abundant, they are irrational, and they cause problems for others without apparent benefit to themselves, thereby lowering society’s total well-being. There are no defenses against stupidity, argued the Italian-born professor, who died in 2000. The only way a society can avoid being crushed by the burden of its idiots is if the non-stupid work even harder to offset the losses of their stupid brethren.

Let’s take a look at Cipolla’s five basic laws of human stupidity:

Law 1: Always and inevitably everyone underestimates the number of stupid individuals in circulation.

No matter how many idiots you suspect yourself surrounded by, Cipolla wrote, you are invariably lowballing the total. This problem is compounded by biased assumptions that certain people are intelligent based on superficial factors like their job, education level, or other traits we believe to be exclusive of stupidity. They aren’t. Which takes us to:

Law 2: The probability that a certain person be stupid is independent of any other characteristic of that person.

Cipolla posits stupidity is a variable that remains constant across all populations. Every category one can imagine—gender, race, nationality, education level, income—possesses a fixed percentage of stupid people. There are stupid college professors. There are stupid people at Davos and at the UN General Assembly. There are stupid people in every nation on earth. How numerous are the stupid amongst us? It’s impossible to say. And any guess would almost certainly violate the first law, anyway.

Law 3: A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.

Cipolla called this one the Golden Law of stupidity. A stupid person, according to the economist, is one who causes problems for others without any clear benefit to himself.

The uncle unable to stop himself from posting fake news articles to Facebook? Stupid. The customer service representative who keeps you on the phone for an hour, hangs up on you twice, and somehow still manages to screw up your account? Stupid.

This law also introduces three other phenotypes that Cipolla says co-exist alongside stupidity. First there is the intelligent person, whose actions benefit both himself and others. Then there is the bandit, who benefits himself at others’ expense. And lastly there is the helpless person, whose actions enrich others at his own expense. Cipolla imagined the four types plotted on a graph, with gains and losses to oneself on one axis and gains and losses to others on the other: the intelligent occupy the quadrant where everyone gains, bandits the one where they gain and others lose, the helpless the one where others gain at their expense, and the stupid the quadrant where everyone loses.
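
If it helps to see the quadrant as logic, here is a minimal sketch in Python (my own illustration; the axis conventions and boundary cases are assumptions, not Cipolla’s) that maps an action’s payoffs onto the four phenotypes:

def classify(gain_to_self: float, gain_to_others: float) -> str:
    """Place one action in Cipolla's four-quadrant scheme."""
    if gain_to_others >= 0:
        # Others come out ahead: intelligent if you gain too, helpless if you lose.
        return "intelligent" if gain_to_self >= 0 else "helpless"
    # Others lose: a bandit at least profits; a stupid person does not.
    return "bandit" if gain_to_self > 0 else "stupid"

print(classify(1, 1))    # intelligent: everyone gains
print(classify(2, -2))   # bandit: gains at others' expense
print(classify(-1, 3))   # helpless: enriches others at his own cost
print(classify(0, -4))   # stupid: harms others while gaining nothing

Note how the Golden Law falls out of the last branch: any action that hurts others while yielding the actor nothing, or less than nothing, lands in the stupid quadrant.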


The non-stupid are a flawed and inconsistent bunch. Sometimes we act intelligently, sometimes we are selfish bandits, sometimes we act helplessly and are taken advantage of by others, and sometimes we’re a bit of each. The stupid, in comparison, are paragons of consistency, acting at all times with unyielding idiocy.

However, consistent stupidity is the only consistent thing about the stupid. This is what makes stupid people so dangerous. Cipolla explains:
Essentially stupid people are dangerous and damaging because reasonable people find it difficult to imagine and understand unreasonable behavior. An intelligent person may understand the logic of a bandit. The bandit’s actions follow a pattern of rationality: nasty rationality, if you like, but still rationality. The bandit wants a plus on his account. Since he is not intelligent enough to devise ways of obtaining the plus as well as providing you with a plus, he will produce his plus by causing a minus to appear on your account. All this is bad, but it is rational and if you are rational you can predict it. You can foresee a bandit’s actions, his nasty maneuvres and ugly aspirations and often can build up your defenses.
With a stupid person all this is absolutely impossible as explained by the Third Basic Law. A stupid creature will harass you for no reason, for no advantage, without any plan or scheme and at the most improbable times and places. You have no rational way of telling if and when and how and why the stupid creature attacks. When confronted with a stupid individual you are completely at his mercy.

by Corinne Purtill, Quartz |  Read more:
Image: uncredited

The Amazon-Walmart Showdown That Explains the Modern Economy

With Amazon buying the high-end grocery chain Whole Foods, something retail analysts have known for years is now apparent to everyone: The online retailer is on a collision course with Walmart to try to be the predominant seller of pretty much everything you buy.

Each one is trying to become more like the other — Walmart by investing heavily in its technology, Amazon by opening physical bookstores and now buying physical supermarkets. But this is more than a battle between two business titans. Their rivalry sheds light on the shifting economics of nearly every major industry, replete with winner-take-all effects and huge advantages that accrue to the biggest and best-run organizations, to the detriment of upstarts and second-fiddle players.

That in turn has been a boon for consumers but also has more worrying implications for jobs, wages and inequality.

To understand this epic shift, you can look not just to the grocery business, but also to my closet, and to another retail acquisition announced Friday morning. (...)

Amazon vs. Walmart

Walmart’s move might seem a strange decision. It is not a retailer people typically turn to for $88 summer weight shirts in Ruby Wynwood Plaid or $750 Italian wool suits. Then again, Amazon is best known as a reseller of goods made by others.

Walmart and Amazon have had their sights on each other for years, each aiming to be the dominant seller of goods — however consumers of the future want to buy them. It increasingly looks like that “however” is a hybrid of physical stores and online-ordering channels, and each company is coming at the goal from a different starting point.

Amazon is the dominant player in online sales, and is particularly strong among affluent consumers in major cities. It is now experimenting with physical bookstores and groceries as it looks to broaden its reach.

Walmart has thousands of stores that sell hundreds of billions of dollars’ worth of goods. It is particularly strong in suburban and rural areas and among low- and middle-income consumers, but it’s playing catch-up with online sales and affluent urbanites.

Why are these two mega-retailers both trying to sell me shirts? The short answer is because they both want to sell everything.

More specifically, Bonobos is known as an innovator in exactly this type of hybrid of online and physical store sales. Its website and online customer service are excellent, and it operates stores in major cities where you can try on garments and order items to be shipped directly. Because all the actual inventory is centralized, the stores themselves can occupy minimal square footage.

So the acquisition may help Walmart build expertise in the very areas where it is trying to gain on Amazon. You can look at the Amazon acquisition of Whole Foods through the same lens. The grocery business has a whole different set of challenges from the types of goods that Amazon has specialized in; you can’t store a steak or a banana the way you do books or toys. And people want to be able to make purchases and take them home on the spur of the moment.

Just as Walmart is using Bonobos to get access to higher-end consumers and a more technologically savvy way of selling clothes, Amazon is using Whole Foods to get the expertise and physical presence it takes to sell fresh foods.

But bigger dimensions of the modern economy also come into play.

by Neil Irwin, NY Times | Read more:
Image: Antonio de Luca

Friday, June 16, 2017

Jeff Bezos Is the Most Powerful Person in Tech

Since Amazon was a tiny startup selling paperbacks, Jeff Bezos has been focused on the long game. In his first letter to shareholders in 1997, he advised investors to strap in for a bumpy financial ride that could include short-term quarterly losses and risky acquisitions that fail to pan out. “We will make bold rather than timid investment decisions when we see a sufficient probability of gaining market leadership advantages,” he wrote. “Some of these investments will pay off, others will not, and we will have learned another valuable lesson in either case.”

Two decades later, what Bezos was building toward book by book has arrived. Amazon is an internet goliath whose products affect nearly every online user. If you’ve ever received an Amazon package, watched a livestream on Twitch, or checked an actor’s filmography on IMDb, you’ve dealt with the retail giant’s properties directly. But if you’ve streamed a movie on Netflix, booked a flight on Expedia, or sent a selfie on Snapchat, you’ve also felt Amazon’s reach via its cloud computing services, now in use by more than 1 million online businesses.

Amazon is steadily eating the digital world. On Friday, it took a critical step in subsuming the physical world as well by launching a bid to acquire the upscale grocer Whole Foods for $13.7 billion. For Amazon, Whole Foods opens up plenty of new business opportunities. The grocer’s affluent shoppers are ideal customers for Amazon Prime. Amazon has been trying and mostly failing to break into the grocery market for years — Whole Foods offers an immediate boost in market share and a group of trusted brand products that Amazon customers may actually pay to have delivered to their door. And with 464 new stores in its arsenal, Amazon has a variety of test beds for its retail experiments, like the convenience store without cashiers it’s been testing in Seattle.

The surprising acquisition left little doubt that Bezos has become the most powerful man in the technology sector, with investments spanning a wide range of galaxy-conquering pursuits. He’s building a digital media giant at The Washington Post, which has become a powerful international force in journalism covering issues such as the Trump White House and police-citizen interactions. He’s doubled down on his commitment to Blue Origin, his space exploration startup, by promising $1 billion in additional investment every year. He also has investments in Uber, Airbnb, and Twitter (and a giant clock that’s supposed to last for 10,000 years).

Ultimately, though, the nexus of Bezos’s power lies with Amazon, whose soaring stock price has catapulted its CEO to an $84 billion net worth, making him the second-richest person on earth. For years, Amazon was powerful but unprofitable, forcing Bezos to keep falling back on the long-term rhetoric of that first shareholder letter. But the rise of cloud computing, which Amazon dominates over rivals Microsoft and Google, has been a boon for the company’s bottom line. Amazon has now been profitable for eight straight quarters, giving Bezos more leeway to keep pursuing his ambitious, capital-intensive goals in retail, such as owning a massive chain of grocery stores and building an internal FedEx rival.

Bezos’s power will only continue to grow as his company becomes more essential to the everyday lives of millions of people. Amazon’s original value proposition was low prices first and foremost, but that advantage is fading. Instead, Amazon wants to save customers time, whether through packages delivered in an hour via Prime Now, or laundry detergent mindlessly ordered by mashing a Dash button. Even before the Whole Foods buy, Amazon was the one tech giant whose core business was about organizing atoms as well as bits. Now the company will have a physical footprint that extends to 42 states and three countries.

by Victor Luckerson, The Ringer |  Read more:
Image: AP Images/Ringer illustration
[ed. $84 billion. 84,000 millions. And he's only the second richest person on earth. Imagine what just a quarter of that wealth could accomplish in terms of alleviating many of the social impacts of technological displacement that his company is and will be directly responsible for. See also: At Last, Jeff Bezos Offers a Hint of His Philanthropic Plans (ed. or not)]