Wednesday, July 31, 2013


Indigo & Umber

Dropcam

[ed. For a more detailed account of Dropcam's home surveillance system, read this.]

Dropcam, the San Francisco startup that makes a $149 camera that can stream and record video to the cloud, announced $30 million in new funding on Wednesday.

That round, led by Institutional Venture Partners with participation from Accel Partners, Menlo Ventures, and Kleiner Perkins Caufield & Byers, will be used to speed up product plans originally slated for 2015 and beyond, says Dropcam CEO Greg Duffy.

"We're pulling in most of our long-term plans," Duffy told CNET. "We're going to try to take care of those much sooner. Definitely expect new stuff coming out in the interim."

Just what exactly those new things are, Duffy's not saying, but he did hint at broadening the company's investment in computer vision technology, which is currently used to spot and report motion. "We want to make these more reliable and more useful," he said.

To do that, the company plans to triple its staff of about 40 people. Most of that group is in San Francisco, Calif., along with a group in China that handles manufacturing. Dropcam is on its third-generation model, which now offers HD recording. That model was introduced at last year's CES. Since then the company has added new features, like standard-definition recording and video rotation, and changed how it detects and reports motion, all through software updates.

The investment underscores the growing appeal of hardware companies tied to Web-connected services. That group includes everything from wearable technologies by Jawbone and Fitbit to Nest and its Web-connected smart thermostat. The big difference in Dropcam's case is that the product would be relatively useless if not connected to the Web, where Dropcam customers, according to the company, are uploading more video per minute than YouTube.

by Josh Lowensohn, CNET | Read more:
Image: Dropcam

The Blockbuster Heist That Rocked the Deep Web


Before he gutted and nearly destroyed one of the most influential criminal markets on the Internet, a man using the nickname Boneless published a detailed guide on the art of disappearing.

“I have some experience in this area,” he wrote, detailing how fugitives should best go about buying phony passports, dodging cops, and keeping their stories straight.

The guide was just one of many contributions Boneless made to HackBB, a popular destination on the Deep Web, a group of sites that sit hidden behind walls of encryption and anonymity. Back in 2012, the forum was a top destination for buying stolen credit cards, skimming ATMs, and hacking anything from personal computers to server hardware. And thanks to Tor’s anonymizing software, members were shielded from the ire of law enforcement around the globe. It was one of the safest and most popular places on the Deep Web to break the law.

Then one day in March, HackBB simply vanished, its databases destroyed. One user likened the events to burning a city—its library, market, bank, and entire community—to the ground. It wasn’t hard to guess who’d done it. A few days earlier, Boneless had disappeared—and with him, a serious chunk of the market’s sizable hoards of money. (...)

All business is inherently risky on the Deep Web. Escrow funds in particular require serious trust, which is itself a valuable commodity on the anonymous Web. The popular drug market Silk Road established a highly successful escrow service by building years of trust and name recognition.

Silk Road’s founder, Dread Pirate Roberts, is rumored to conduct thorough background checks on staff, an act that would require extraordinary trust, considering the immense illegality of Silk Road’s existence. Such a policy, though extremely difficult to enact, would severely diminish the chances of a staff betrayal. It would also create a delicate house of cards that could completely collapse if Dread Pirate Roberts were ever apprehended.

At HackBB, Boneless either shared no identifying details with OptimusCrime, or he was supremely confident in his ability to disappear without getting caught. He did, after all, write the book on how to disappear completely.

Such was Boneless’s reputation that, after the attacks, many wondered if he was really even responsible in the first place. Forum members suggested Boneless actually sold his powerful administrator account to the highest bidder.

"Someone got a hold of his credentials somehow," wrote one HackBB moderator. "He probably sold them."

by Patrick Howell O'Neill, Daily Dot | Read more:
Image: uncredited

Jimmy Cliff


Inner Peace


When we think of silence, because we yearn for it perhaps, or because we’re scared of it — or both — we’re forced to recognise that what we’re talking about is actually a mental state, a question of consciousness. Though the external world no doubt exists, our perception of it is always very much our perception, and tells us as much about ourselves as it does about the world. There are times when a noise out there is truly irritating and has us yearning for peace. Yet there are times when we don’t notice it at all. When a book is good, the drone of a distant lawnmower is just not there. When the book is bad but we must read it for an exam, or a review, the sound assaults us ferociously.

If perception of sound depends on our state of mind, then conversely a state of mind can hardly exist without an external world with which it is in relation and that conditions it — either our immediate present environment, or something that happened in the past and that now echoes or goes on happening in our minds. There is never any state of mind that is not in some part, however small, in relation to the sounds around it — the bird singing and a television overheard as I write this now, for example. (...)

It’s fairly easy to concentrate on the body in motion. If you’re running or swimming, it’s possible to move into a wordless or semi-wordless state that gives the impression of silence for long periods. In fact one of the refreshing, even addictive, things about sport is the feeling that the mind has been given a break from its duty of constantly building up our ego.

But in Vipassana you concentrate on sensation in stillness, sitting down, not necessarily cross-legged, though most people do sit that way. And sitting without changing position, sitting still. As soon as you try to do this, you become aware of a connection between silence and stillness, noise and motion. No sooner are you sitting still than the body is eager to move, or at least to fidget. It grows uncomfortable. In the same way, no sooner is there silence than the mind is eager to talk. In fact we quickly appreciate that sound is movement: words move, music moves, through time. We use sound and movement to avoid the irksomeness of stasis. This is particularly true if you are in physical pain. You shift from foot to foot, you move from room to room.

Sitting still, denying yourself physical movement, the mind’s instinctive reaction is to retreat into its normal buzzing monologue — hoping that focusing the mind elsewhere will relieve physical discomfort. This would normally be the case; normally, if ignored, the body would fidget and shift, to avoid accumulating tension. But on this occasion we are asking it to sit still while we think and, since it can’t fidget, it grows more and more tense and uncomfortable. Eventually, this discomfort forces the mind back from its chatter to the body. But finding only discomfort or even pain in the body, it again seeks to escape into language and thought. Back and forth from troubled mind to tormented body, things get worse and worse.

Silence, then, combined with stillness — the two are intimately related — invites us to observe the relationship between consciousness and the body, in movement and moving thought. Much is said when people set off to meditation retreats about the importance of ‘finding themselves’. And there is much imagined drama. People expect old traumas to surface, as though in psychoanalysis. In fact, what you actually discover is less personal than you would suppose. You discover how the construct of consciousness and self, something we all share, normally gets through time, to a large extent by ignoring our physical being and existence in the present moment. Some of the early names for meditation in the Pali language of the Buddhist scriptures, far from linking it to religion, referred only to ‘mental exercises’.

by Tim Parks, Aeon |  Read more:
Image: Judy Shreve, Winter Leaves

Tuesday, July 30, 2013


Bumper Crop, Mike Carroll (Lanai)
via:

Antibiotic Resistance: The Last Resort


As a rule, high-ranking public-health officials try to avoid apocalyptic descriptors. So it was worrying to hear Thomas Frieden and Sally Davies warn of a coming health “nightmare” and a “catastrophic threat” within a few days of each other in March.

The agency heads were talking about the soaring increase in a little-known class of antibiotic-resistant bacteria: carbapenem-resistant Enterobacteriaceae (CREs). Davies, the United Kingdom's chief medical officer, described CREs as a risk as serious as terrorism (see Nature 495, 141; 2013). “We have a very serious problem, and we need to sound an alarm,” said Frieden, director of the US Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia.

Their dire phrasing was warranted. CREs cause bladder, lung and blood infections that can spiral into life-threatening septic shock. They evade the action of almost all antibiotics — including the carbapenems, which are considered drugs of last resort — and they kill up to half of all patients who contract them. In the United States, these bacteria have been found in 4% of all hospitals and 18% of those that offer long-term critical care. And an analysis carried out in the United Kingdom predicts that if antibiotics become ineffective, everyday operations such as hip replacements could end in death for as many as one in six (ref. 1).

The language used by Davies and Frieden was intended to break through the indifference with which the public usually greets news about antibiotic resistance. To close observers, however, it also had a tinge of exasperation. CREs were first identified almost 15 years ago, but did not become a public-health priority until recently, and medics may not have appreciated the threat that they posed. Looking back, say observers, there are lessons for researchers and health-care workers in how to protect patients, as well as those hospitals where CREs have not yet emerged.

“It is not too late to intervene and prevent these from becoming more common,” says Alexander Kallen, a medical epidemiologist at the CDC. At the same time, he acknowledges that in many places, CREs are here for good.

Hindsight is key to the story of CREs, because it was hindsight that identified them in the first place. In 2000, researchers at the CDC were grinding through analyses for a surveillance programme known as Intensive Care Antimicrobial Resistance Epidemiology (ICARE), which had been running for six years to monitor intensive-care units for unusual resistance factors. In the programme's backlog of biological samples, scientists identified one from the Enterobacteriaceae family, a group of gut-dwelling bacteria. This particular sample — of Klebsiella pneumoniae, a common cause of infection in intensive-care units — had been taken from a patient at a hospital in North Carolina in 1996 (ref. 2). It was weakly resistant to carbapenems, powerful broad-spectrum antibiotics developed in the 1980s.

Antibiotics have been falling to resistance for almost as long as people have been using them; Alexander Fleming, who discovered penicillin, warned about the possibility when he accepted his Nobel prize in 1945. Knowing this, doctors have used the most effective drugs sparingly: careful rationing of the powerful antibiotic vancomycin, for example, meant that bacteria took three decades to develop resistance to it. Prudent use, researchers thought, would keep the remaining last-resort drugs such as the carbapenems effective for decades.

The North Carolinan strain of Klebsiella turned that idea on its head. It produced an enzyme, dubbed KPC (for Klebsiella pneumoniae carbapenemase), that broke down carbapenems. What's more, the gene that encoded the enzyme sat on a plasmid, a piece of DNA that can move easily from one bacterium to another. Carbapenem resistance had arrived.

by Maryn McKenna, Nature |  Read more:
Image: Nature

Stranded by Sprawl


Detroit is a symbol of the old economy’s decline. It’s not just the derelict center; the metropolitan area as a whole lost population between 2000 and 2010, the worst performance among major cities. Atlanta, by contrast, epitomizes the rise of the Sun Belt; it gained more than a million people over the same period, roughly matching the performance of Dallas and Houston without the extra boost from oil.

Yet in one important respect booming Atlanta looks just like Detroit gone bust: both are places where the American dream seems to be dying, where the children of the poor have great difficulty climbing the economic ladder. In fact, upward social mobility — the extent to which children manage to achieve a higher socioeconomic status than their parents — is even lower in Atlanta than it is in Detroit. And it’s far lower in both cities than it is in, say, Boston or San Francisco, even though these cities have much slower growth than Atlanta.

So what’s the matter with Atlanta? A new study suggests that the city may just be too spread out, so that job opportunities are literally out of reach for people stranded in the wrong neighborhoods. Sprawl may be killing Horatio Alger.

The new study comes from the Equality of Opportunity Project, which is led by economists at Harvard and Berkeley. There have been many comparisons of social mobility across countries; all such studies find that these days America, which still thinks of itself as the land of opportunity, actually has more of an inherited class system than other advanced nations. The new project asks how social mobility varies across U.S. cities, and finds that it varies a lot. In San Francisco a child born into the bottom fifth of the income distribution has an 11 percent chance of making it into the top fifth, but in Atlanta the corresponding number is only 4 percent.

When the researchers looked for factors that correlate with low or high social mobility, they found, perhaps surprisingly, little direct role for race, one obvious candidate. They did find a significant correlation with the existing level of inequality: “areas with a smaller middle class had lower rates of upward mobility.” This matches what we find in international comparisons, where relatively equal societies like Sweden have much higher mobility than highly unequal America. But they also found a significant negative correlation between residential segregation — different social classes living far apart — and the ability of the poor to rise.

And in Atlanta poor and rich neighborhoods are far apart because, basically, everything is far apart; Atlanta is the Sultan of Sprawl, even more spread out than other major Sun Belt cities. This would make an effective public transportation system nearly impossible to operate even if politicians were willing to pay for it, which they aren’t. As a result, disadvantaged workers often find themselves stranded; there may be jobs available somewhere, but they literally can’t get there.

by Paul Krugman, NY Times |  Read more:
Image via: 

Monday, July 29, 2013


Katsuyuki Nishijima
via:

Olga Kurylenko

via:

Star Wars


In the days before the Internet, eating at an unknown restaurant meant relying on a clutch of quick and dirty heuristics. The presence of many truck drivers or cops at a lonely diner supposedly vouched for its quality (though it may simply have been the only option around). For “ethnic” food, there was the classic benchmark: “We were the only non-[insert ethnicity] people in there.” Or you could spend anxious minutes on the sidewalk, under the watchful gaze of the host, reading curling, yellowed reviews, wondering if what held in 1987 was still true today. In an information-poor environment, you sometimes simply went with your gut (and left clutching it).

Today, via Yelp (or TripAdvisor or Amazon, or any Web site teeming with “user-generated content”), you are often troubled by the reverse problem: too much information. As I navigate a Yelp entry to simply determine whether a place is worth my money, I find myself battered between polar extremes of experience: One meal was “to die for,” another “pretty lame.” Drifting into narrow currents of individual proclivity (writing about a curry joint where I had recently lunched, one reviewer noted that “the place had really good energy, very Spiritual [sic], which is very important to me”), I eventually capsize in a sea of confusion. I either quit the place altogether or, by the time I arrive, am weighed down by a certain exhaustion of expectation, as if I had already consumed the experience and was now simply going through the motions.

What I find most striking is that, having begun the process of looking for reviews of the restaurant, I find myself reviewing the reviewers. The use of the word “awesome”—a term whose original connotation is so denuded that I suspect it will ultimately come to exclusively signify its ironic, air-quote-marked opposite—is a red flag. So are the words “anniversary” or “honeymoon,” often written by people with inflated expectations for their special night; their complaint with any perceived failure on the part of the restaurant or hotel to rise to this momentous occasion is not necessarily mine. I reflexively downgrade reviewers writing in the sort of syrupy dross picked up from hotel brochures (“it was a vision of perfection”).
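The author's reviewer-vetting instinct amounts to an informal scoring heuristic, and it can be sketched in a few lines. A playful toy version follows: the flag words are the essay's own examples, but the weights and the function name are invented for illustration, not any real ranking system.

```python
# Toy sketch of the essay's heuristic: penalize reviews containing
# "red flag" terms. Flag words come from the essay; weights are invented.
RED_FLAGS = {
    "awesome": 1,               # denuded superlative
    "anniversary": 2,           # inflated special-occasion expectations
    "honeymoon": 2,
    "vision of perfection": 3,  # brochure-grade syrup
}

def review_penalty(text):
    """Return a crude discount score for a review's text."""
    lowered = text.lower()
    return sum(weight for flag, weight in RED_FLAGS.items() if flag in lowered)

print(review_penalty("Our anniversary dinner was awesome!"))  # 3
print(review_penalty("Solid tacos, quick service."))          # 0
```

A real system would obviously need more than keyword matching, which is rather the essay's point: judging a reviewer's credibility is messy, contextual work.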

In one respect, there is nothing new in reviewing the reviewer; our choices in pre-Internet days were informed either by friends we trusted or critics whose voices seemed to carry authority. But suddenly, the door has been opened to a multitude of voices, each bearing no preexisting authority or social trust. It is no longer merely enough to read that someone thought the vegetarian food was bad (you need to know if she is a vegetarian), or the hotel in Iowa City was the best they have ever seen (just how many hotels have they seen?), or a foreign film was terrible (wait, they admit they don’t like subtitles?). Critics have always had to be interrogated this way (what dendritic history of logrolling lay behind the rave about that book?), but with the Web, a thousand critics have bloomed. The messy, complicated, often hidden dynamics of taste and preference, and the battles over it, are suddenly laid out right in front of us.

by Tom Vanderbilt, Wilson Quarterly |  Read more:
Image: Henglein and Steets

Filippo De Pisis (Luigi Tibertelli): Natura morta con trota (1951)
via:

Krzysztof Wodiczko, Dis-Armor, 1999-2001
via:

Garry Davis, Man of No Nation, Dies at 91

On May 25, 1948, a former United States Army flier entered the American Embassy in Paris, renounced his American citizenship and, as astonished officials looked on, declared himself a citizen of the world.

In the decades that followed, until the end of his long life last week, he remained by choice a stateless man — entering, leaving, being regularly expelled from and frequently arrested in a spate of countries, carrying a passport of his own devising, as the international news media chronicled his every move.

His rationale was simple, his aim immense: if there were no nation-states, he believed, there would be no wars.

Garry Davis, a longtime peace advocate, former Broadway song-and-dance man and self-declared World Citizen No. 1, who is widely regarded as the dean of the One World movement, a quest to erase national boundaries that today has nearly a million adherents worldwide, died on Wednesday in Williston, Vt. He was 91, and though in recent years he had largely ceased his wanderings and settled in South Burlington, Vt., he continued to occupy the singular limbo between citizen and alien that he had cheerfully inhabited for 65 years.

“I am not a man without a country,” Mr. Davis told Newsweek in 1978, “merely a man without nationality.”

Mr. Davis was not the first person to declare himself a world citizen, but he was inarguably the most visible, most vocal and most indefatigable.

The One World model has had its share of prominent adherents, among them Albert Schweitzer, Jean-Paul Sartre, Albert Einstein and E. B. White.

But where most advocates have been content to write and lecture, Mr. Davis was no armchair theorist: 60 years ago, he established the World Government of World Citizens, a self-proclaimed international governmental body that has issued documents — passports, identity cards, birth and marriage certificates — and occasional postage stamps and currency.

He periodically ran for president of the world, always unopposed.

by Margalit Fox, NY Times |  Read more:
Image: Carl Gossett/The New York Times


[ed. Dave Matthews, Tim Reynolds - One Sweet World]

O.K., Glass

On a weekday afternoon in late June, a nondescript forty-year-old man in beige shorts, a blue Penguin sports shirt, and what appears to be a pair of shale-colored architect’s glasses with parts of the frame missing gets on an uptown No. 6 train at Union Square to go see his psychoanalyst, on East Eighty-eighth Street. As the man walks into the frigid subway car, he unexpectedly jerks his head up and down. A pink light comes on above the right lens. He slides his index finger against the right temple of the glasses as if flicking away a fly. The man’s right eyebrow rises and his right eye squints. He appears to be mouthing some words. A lip-reader would come away with the following message: “Forever 21 world traveler denim shorts, $22.80. Horoscope: Cooler heads prevail today, helping you strike a compromise in a matter you refused to budge on last week.”

There is a tap on his shoulder. He turns around. An older man, dressed for the office in a blue blazer, says, “Are those them?”

“Yes.”

“My kid wants one.”

“If you give them to your kid, you’ll be able to see everything he sees from your computer. You could follow him around all day.”

The businessman considers this. “Are they foldable?”

The man with the glasses shakes his head. A young college student in a hoodie and Adidas track pants, carrying a Pace University folder, takes out one of his earbuds. “What does it do?”

Everyone on the train is now staring at the man with the glasses.

The man with the glasses jerks his head up and down. The soft pink light is on above his right eye. “O.K., Glass,” the man says. “Take a picture.” The pink light is replaced by a shot of the subway car, the college student with the earbuds, the older man, now immortalized. If they are paying close attention, they can see a microscopic version of themselves and the world around them displayed on the screen above the man’s right eye. “I can also take a video of you,” the man says. “O.K., Glass. Record a video.”

“That is so dope,” the college student says. It appears to the man that the student is thinking over the situation. There’s something else he wants to say. It’s as if the man with the glasses has some form of mastery of the world around him, and maybe even within himself. (...)

My first encounter with Google Glass came on a Saturday in June, when I showed up at the Glass Explorers’ “Basecamp,” a sunny spread atop the Chelsea Market. My tech sherpa, a bright-eyed young woman, set me up with a mimosa as we perused the various shades of Glass frames, each named for a color that occurs in nature: cotton, shale, charcoal, sky, and tangerine. I went for shale, which happens to be the preference of Glass Explorers in San Francisco. (New Yorkers, naturally, go for bleak charcoal.) I was told that I was one of the first few hundred Explorers in the city, which made me feel like some third-rate Shackleton embarked on my own Nimrod Expedition into the neon ice. The lightweight titanium frames were fitted over my nose, a button was pressed near my right ear, and the small screen, or Optical Head Mounted Display, flickered to pink-ish life. I was told how to talk to my new friend, each command initiated with the somewhat resigned “O.K., Glass.” In deference to Eunice and Lenny, I started off with two simple instructions, picked up by a microphone that sits just above my right eye, at the tip of my eyebrow.

“O.K., Glass. Google translate ‘hamburger’ into Korean.”

Haembeogeo,” a gentle, vowel-rich voice announced after a few seconds of searching, as both English and Hangul script appeared on the display above my right eye. Since there are no earbuds to plug into Glass, audio is conveyed through a “bone conduction transducer.” In effect, this means that a tiny speaker vibrates against the bone behind my right ear, replicating sound. The result is eerie, as if someone is whispering directly into a hole bored into your cranium, but also deeply futuristic. You can imagine a time when different parts of our bodies are adapted for different needs. If a bone can hear sound, why can’t my fingertips smell the bacon strips they’re about to grab?

Glass responds to a combination of voice and touch-pad commands. After the initial “O.K., Glass,” you can tap and slide your way through the touch pad, but since there is no keyboard or touch screen, Googling and Gmailing will probably always involve voice recognition.

“O.K., Glass. Google translate ‘hamburger’ into Russian.”

Gamburrrger,” a voice purred, not so gently, like my grandmother at the end of a long hot day.

And, all of a sudden, I felt something for this technology.

by Gary Shteyngart, New Yorker | Read more:
Photograph by Emiliano Granado

Sunday, July 28, 2013

Olomana


Hapa


Think About Nature


The main question I'm asking myself, the question that puts everything together, is how to do cosmology; how to make a theory of the universe as a whole system. This is said to be the golden age of cosmology and it is from an observational point of view, but from a theoretical point of view it's almost a disaster. It's crazy the kind of ideas that we find ourselves thinking about. And I find myself wanting to go back to basics—to basic ideas and basic principles—and understand how we describe the world in a physical theory.

What's the role of mathematics? Why does mathematics come into physics? What's the nature of time? These two things are very related, since mathematical description is supposed to be outside of time. And I've come, through a long evolution since the late '80s, to a position which is quite different from the ones that I had originally, and quite surprising even to me. But let me get to it bit by bit. Let me build up the questions and the problems that arise.

One way to start is what I call "physics in a box," or theories of small isolated systems. The way we've learned to do this is to make an accounting or an inventory—a listing of the possible states of a system. How can a possible system be? What are the possible configurations? What are the possible states? If it's a glass of Coca-Cola, what are the possible positions and states of all the atoms in the glass? Once we know that, we ask: how do the states change? And the metaphor here—which comes from atomism, from Democritus and Lucretius—is that physics is nothing but atoms moving in a void, and the atoms never change. The atoms have properties like mass and charge that never change in time. The void, which is space, never changed in time in the old days either; it was fixed, and the atoms moved according to laws—laws that Descartes and Galileo tried to give, and that Newton gave much more successfully.

And up until the modern era, when we describe them in quantum mechanics, the laws also never changed. The laws let us predict the positions of the atoms at a later time, if we know the positions of all the atoms at a given moment. That's how we do physics, and I call that the Newtonian Paradigm because it was invented by Newton. And behind the Newtonian Paradigm is the idea that the laws of nature are timeless; they act on the system, so to speak, from outside the system, and the system evolves from the past to the present to the future. If you know the state at any time, you can predict the state at any other time. So this is the framework for doing physics, and it's been very successful. And I'm not challenging its success within its proper domain—small parts of the universe.

The problem that I've identified—that I think is at the root of a lot of the spinning of our wheels and confusion of contemporary physics and cosmology—is that you can't just take this method of doing science and scale it up to the universe as a whole. When you do, you run into questions that you can't answer. You end up with fallacies; you end up saying silly things. One reason is that, on a cosmological scale, the questions that we want to understand are not just what are the laws, but why are these the laws rather than other laws? Where do the laws come from? What makes the laws what they are? And if the laws are input to the method, the method will never explain the laws because they're input.

Also, given the state of the universe of the system at one time, we use the laws to predict the state at a later time. But what was the cause of the state that we started with that initial time? Well, it was something in the past so we have to evolve from further into the past. And what was the reason for that past state? Well, that was something further and further in the past. So we end up at the Big Bang. It ends up that any explanation for why are we sitting in this room—why is the earth in orbit around the sun where it is now—any question of detail that we want to ask about the universe ends up being pushed back using the laws to the initial conditions of the Big Bang.

And then we end up with wondering, why were those initial conditions chosen? Why that particular set of initial conditions? Now we're using a different language. We're not talking about particles and Newton's laws, we're talking about quantum field theory. But the question is the same; what chose the initial conditions? And since the initial conditions are input to this method that Newton developed, it can't be explained within that method. So if we want to ask cosmological questions, if we want to really explain everything, we need to apply a different method. We need to have a different starting point.

by Lee Smolin, Edge |  Read more:
Image: uncredited

Enough is Enough

Kurt Vonnegut and novelist Joseph Heller were once allegedly at a party hosted by a billionaire hedge fund manager. Vonnegut mentioned that their wealthy host made more money in one day than Heller ever made from his novel Catch-22.

Heller responded: “Yes, but I have something he will never have: enough.”

Whether it’s true or not, I’ve always thought this to be one of the smartest finance stories ever told.

All throughout college, I had one career plan: investment banking. The industry was attractive to me, and to thousands of other students blinded by a lack of life experience, for one reason: You can make a lot of money. Six figures right out of school, and millions later in your career.

There’s just one catch. Your life becomes abjectly miserable.

One-hundred-hour work weeks, the most pressure you’ve ever experienced, and less exposure to sunlight than death row inmates. They had a saying: “If you don’t come to work on Saturday, don’t bother coming back on Sunday.” The senior bosses were worth millions, but stressed, overweight, anxious, never saw their kids, and hadn’t taken a vacation in years — I’m unfairly generalising, but only slightly. Almost no one actually enjoys it. I quickly cried uncle, moved on, and never looked back.

In his book 30 Lessons for Living, gerontologist Karl Pillemer interviewed 1,000 elderly Americans (most in their 80s and 90s), seeking wisdom from those with the most experience. One quote from the book stuck out:
No one — not a single person out of a thousand — said that to be happy you should try to work as hard as you can to make money to buy the things you want.
No one — not a single person — said it’s important to be at least as wealthy as the people around you, and if you have more than they do it’s real success.
No one — not a single person — said you should choose your work based on your desired future earning power.

The elderly didn’t say that money isn’t important. They didn’t even rule out that more money might have made them happier. They just seemed to understand the concept of enough.
Studies show that money does increase happiness. The latest research shows there’s not even a known satiation point — a higher income makes virtually everyone happier, although each additional dollar delivers less happiness than the one before it. (...)
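The diminishing-returns point can be made concrete with a toy model. The logarithmic curve below is an illustrative assumption of my own, not the actual model used in the research: under log utility, each doubling of income adds the same fixed amount of well-being, so the same raise matters less the richer you are.

```python
import math

def utility(income):
    """Toy diminishing-returns curve: log utility (an illustrative assumption)."""
    return math.log(income)

# The same $25,000 raise, taken at two different income levels.
gain_low = utility(75_000) - utility(50_000)     # from $50k to $75k
gain_high = utility(150_000) - utility(125_000)  # from $125k to $150k

# The raise delivers less extra "happiness" at the higher income.
print(gain_low > gain_high)  # True
```

Nothing hinges on the logarithm specifically; any concave utility function produces the same ordering, which is all the "each additional dollar delivers less happiness" claim requires.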

In other words, young investment bankers assume a big income will make them happier because they think about a nice house and fancy cars, not working until 4 a.m. and having no social life.

In a New York Times column three years ago, David Brooks put a twist on this thinking by analysing the life of actress Sandra Bullock. He wrote:
Two things happened to Sandra Bullock this month. First, she won an Academy Award for best actress. Then came the news reports claiming that her husband is an adulterous jerk. So the philosophic question of the day is: Would you take that as a deal? Would you exchange a tremendous professional triumph for a severe personal blow?
“If you had to take more than three seconds to think about this question, you are absolutely crazy,” Brooks concludes. But for the same reason investment bankers choose a miserable life while assuming money will make them happier, I’m willing to bet many otherwise happy people would have gladly traded places with Bullock three years ago. Research is clear that some things completely override any happiness that can be gained from money or work success. It’s just hard to realise that, because money is tangible, measurable, and universal, whereas the “other factors” Kahneman mentions that have a greater impact on our happiness are vague and nuanced.

by Motley Fool Staff, Motley Fool |  Read more:
Image: via

NSA: The Decision Problem


Shortly after noon, local time, on 19 August 1960, over the North Pacific Ocean near Hawaii, a metal capsule about the size and shape of a large kitchen sink fell out of the sky from low earth orbit and drifted by parachute toward the earth. It was snagged in mid-air, on the third pass, by a C-119 "flying boxcar" transport aircraft from Hickam Air Force Base in Honolulu, and then transferred to Moffett Field Naval Air Station, in Mountain View, California—where Google's fleet of private jets now sits parked. Inside the capsule was 3000 feet of 70mm Kodak film, recording seven orbital passes over 1,650,000 square miles of Soviet territory that was closed to all overflights at the time.

This spectacular intelligence coup was preceded by 13 failed attempts. Secrecy all too often conceals waste and failure within government programs; in this case, secrecy was essential to success. Any reasonable politician, facing the taxpayers, would have canceled the Corona orbital reconnaissance program after the eleventh or twelfth unsuccessful launch.

The Corona program, a joint venture between the CIA, the NSA, and the Department of Defense, was coordinated by the Advanced Research Projects Agency (ARPA) and continued, under absolute secrecy, for 12 more years and 126 more missions, becoming the most productive intelligence operation of the Cold War. "It was as if an enormous floodlight had been turned on in a darkened warehouse," observed former CIA program director Albert D. Wheelon, after the operation was declassified by order of President Clinton in 1995. "The Corona data quickly assumed the decisive role that the Enigma intercepts had played in World War II."

The resources and expertise that were gathered to support the Corona program, operating under cover of a number of companies and institutions centered around Sunnyvale, California (including Fairchild, Lockheed, and the Stanford Industrial Park), helped produce the Silicon Valley of today. Google Earth is Corona's direct descendant, and it is a fact as remarkable as the fall of the Berlin Wall that anyone, anywhere in the world, can freely access satellite imagery whose very existence was a closely guarded secret only a generation ago.

PRISM, by contrast, has been kept in the dark. Setting aside the question of whether wholesale, indiscriminate data collection is legal—which, evidently, its proponents believed it was—the presumed reason is that for a surveillance system to be effective against bad actors, the bad actors have to be unaware that they are being watched. Unfortunately, the bad actors to be most worried about are the ones who suspect that they are being watched. The tradecraft goes way back. With the privacy of houses came eavesdropping; with the advent of written communication came secret opening of mail; with the advent of the electric telegraph came secret wiretaps; with the advent of photography came spy cameras; with the advent of orbital rocketry came spy satellites. To effectively spy on the entire Internet you need your own secret Internet—and Edward Snowden has now given us a glimpse into how this was done.

The ultimate goal of signals intelligence and analysis is to learn not only what is being said, and what is being done, but what is being thought. With the proliferation of search engines that directly track the links between individual human minds and the words, images, and ideas that both characterize and increasingly constitute their thoughts, this goal appears within reach at last. "But, how can the machine know what I think?" you ask. It does not need to know what you think—no more than one person ever really knows what another person thinks. A reasonable guess at what you are thinking is good enough.

Data mining, on the scale now practiced by Google and the NSA, is the realization of what Alan Turing was getting at, in 1939, when he wondered "how far it is possible to eliminate intuition, and leave only ingenuity," in postulating what he termed an "Oracle Machine." He had already convinced himself of the possibility of what we now call artificial intelligence (in his more precise terms, mechanical intelligence) and was curious as to whether intuition could be similarly reduced to a mechanical procedure—although it might (indeed should) involve non-deterministic steps. He assumed, for sake of argument, that "we do not mind how much ingenuity is required, and therefore assume it to be available in unlimited supply."

And, as if to discount disclaimers by the NSA that they are only capturing metadata, Turing, whose World War II work on the Enigma would make him one of the patron saints of the NSA, was already explicit that it is the metadata that count. If Google has taught us anything, it is that if you simply capture enough links, over time, you can establish meaning, follow ideas, and reconstruct someone's thoughts. It is only a short step from suggesting what a target may be thinking now, to suggesting what that target may be thinking next. (...)

What we have now is the crude equivalent of snatching snippets of film from the sky, in 1960, compared to the panopticon that was to come. The United States has established a coordinated system that links suspect individuals (only foreigners, of course, but that definition becomes fuzzy at times) to dangerous ideas, and, if the links and suspicions are strong enough, our drone fleet, deployed ever more widely, is authorized to execute a strike. This is only a primitive first step toward something else. Why kill possibly dangerous individuals (and the inevitable innocent bystanders) when it will soon become technically irresistible to exterminate the dangerous ideas themselves?

by George Dyson, Edge |  Read more:
Image: uncredited

Nick Cave's Love Song Lecture

Though the love song comes in many guises – songs of exultation and praise, songs of rage and of despair, erotic songs, songs of abandonment and loss – they all address God, for it is the haunted premises of longing that the true love song inhabits. It is a howl in the void, for Love and for comfort and it lives on the lips of the child crying for his mother. It is the song of the lover in need of her loved one, the raving of the lunatic supplicant petitioning his God. It is the cry of one chained to the earth, to the ordinary and to the mundane, craving flight; a flight into inspiration and imagination and divinity. The love song is the sound of our endeavours to become God-like, to rise up and above the earthbound and the mediocre.

The loss of my father, I found, created in my life a vacuum, a space in which my words began to float and collect and find their purpose. The great W.H. Auden said: "The so-called traumatic experience is not an accident, but the opportunity for which the child has been patiently waiting – had it not occurred, it would have found another – in order that its life become a serious matter." The death of my father was the "traumatic experience" Auden talks about that left the hole for God to fill. How beautiful the notion that we create our own personal catastrophes and that it is the creative forces within us that are instrumental in doing this. We each have a need to create and sorrow is a creative act. The love song is a sad song, it is the sound of sorrow itself. We all experience within us what the Portuguese call saudade, which translates as an inexplicable sense of longing, an unnamed and enigmatic yearning of the soul, and it is this feeling that lives in the realms of imagination and inspiration and is the breeding ground for the sad song, for the Love song is the light of God, deep down, blasting through our wounds.

In his brilliant lecture entitled "The Theory and Function of Duende," Federico García Lorca attempts to shed some light on the eerie and inexplicable sadness that lives in the heart of certain works of art. "All that has dark sound has duende," he says, "that mysterious power that everyone feels but no philosopher can explain." In contemporary rock music, the area in which I operate, music seems less inclined to have at its soul, restless and quivering, the sadness that Lorca talks about. Excitement, often; anger, sometimes: but true sadness, rarely. Bob Dylan has always had it. Leonard Cohen deals specifically in it. It pursues Van Morrison like a black dog and though he tries, he cannot escape it. Tom Waits and Neil Young can summon it. It haunts Polly Harvey. My friends the Dirty Three have it by the bucket load. The band Spiritualised are excited by it. Tindersticks desperately want it, but all in all it would appear that duende is too fragile to survive the brutality of technology and the ever-increasing acceleration of the music industry. Perhaps there is just no money in sadness, no dollars in duende. Sadness or duende needs space to breathe. Melancholy hates haste and floats in silence. It must be handled with care.

All love songs must contain duende. For the love song is never truly happy. It must first embrace the potential for pain. Those songs that speak of love without having within their lines an ache or a sigh are not love songs at all but rather Hate Songs disguised as love songs, and are not to be trusted. These songs deny us our humanness and our God-given right to be sad, and the air-waves are littered with them. The love song must resonate with the susurration of sorrow, the tintinnabulation of grief. The writer who refuses to explore the darker regions of the heart will never be able to write convincingly about the wonder, the magic and the joy of love, for just as goodness cannot be trusted unless it has breathed the same air as evil – the enduring metaphor of Christ crucified between two criminals comes to mind here – so within the fabric of the love song, within its melody, its lyric, one must sense an acknowledgement of its capacity for suffering.

by Nick Cave, Everything2 |  Read more:
Image: via

Me And You And Everyone We Know (Intro)


by Miranda July
[ed. One of my favorite films.]

Of Love and Fungus


Woody and Mia had opposite sides of Central Park. Tom and I have opposite sides of the East River.

We’re hoping for a better outcome.

For the four and a half years that we’ve been together, we’ve been apart, me in Manhattan, Tom in Brooklyn, at least most of the workweek, and during chunks of the weekend, too. We tell our friends that it’s a borough standoff, a game of Big Apple chicken, but that’s just a line and a lie, a deflection of the questions you field when you challenge the mythology of romance.

Moving in with each other: that’s supposed to be the ultimate prize, the real consummation. You co-sign a lease, put both names on the mailbox, settle on a toothpaste and the angels weep.

But why not seize the intimacy without forfeiting the privacy? Establish a different rhythm? One night with him, one night with a pint of Chubby Hubby and “Monday Night Football” or a marathon of “Scandal,” my wit on ice, my stomach muscles on hiatus, my body sprawling ever less becomingly across the couch. Isn’t that the definition of having it all?

Others think so. On Wikipedia there’s a phrase, a page and an acronym devoted to the likes of Tom and me, a tribe grown larger over the last few decades. We’re “Living Apart Together,” and we’re not loopy, we’re LAT, just as Woody and Mia weren’t freaks, just trailblazers, until he blazed his trail to a less trammeled pasture. By one estimate, at least 6 percent of American couples, married and unmarried — Tom and I belong to the latter group — don’t cohabitate. For Western Europeans, naturally, the figure seems to be higher. They have their problems and their airs over there, but they’ve always been expert at leisure and love.

The LAT life is healthy, according to all the studies. O.K., one study, admittedly puny in scope, but it just came out. Published in the current issue of the Journal of Communication, it closely followed 63 couples, about half of whom lived together and half of whom couldn’t, separated by circumstance rather than choice. The couples in commuter relationships said that their conversations were less frequent but deeper. They confessed more, listened harder and experienced a greater sense of intimacy. Absence worked its aphoristic magic on the heart. Fondness bloomed, no doubt because covers didn’t get stolen and someone else’s dishes weren’t left in the sink.

Just as there are couples with distance forced on them, there are couples with no option other than proximity, given the cost of two households or the kids in the mix. But there are also couples like Tom and me. We’d made our own homes before meeting each other. We’d tailored our budgets accordingly. We relish a measure of independence, can vanquish loneliness with a subway ride and don’t feel much loneliness in the first place. He’s in my head all the time.

Or he’s on my screen. That’s the thing about our wired age: apart is actually the new together, because alone isn’t alone anymore. On top of calling, there’s Skyping, e-mailing, texting, sexting: a Kama Sutra of electronic intercourse. Why bother with movers and bicker over wall art?

Even in earlier eras, before the ready meeting place of cyberspace, this sort of arrangement worked. Fannie Hurst, a hugely popular short-story writer in the early 20th century, and her husband had separate studio apartments in the same building on the Upper West Side of Manhattan. They made appointments to see each other. She explained that most of the other marriages she’d observed were “sordid endurance tests, overgrown with the fungi of familiarity and contempt.” Tom and I don’t want to be fungal. On this we’re resolute.

by Frank Bruni, NY Times |  Read more:
Image: via

Saturday, July 27, 2013


Nick Lamia
via:

J.J. Cale (December 1938 – July 2013)


J. J. Cale, a musician and songwriter whose blues-inflected rock influenced some of the genre’s biggest names and whose songs were recorded by Eric Clapton and Johnny Cash among others, died on Friday in La Jolla, Calif. He was 74.

Mr. Cale suffered a heart attack and died at Scripps Memorial Hospital around 8 p.m. on Friday, a statement posted on his Web site said.

He is best known as the writer of “Cocaine” and “After Midnight,” songs made famous when they were recorded by his collaborator, Eric Clapton.

A multi-instrumentalist, Mr. Cale often played all of the parts on his albums, also recording and mixing them himself. He is also credited as one of the architects of the 1970s Tulsa sound, a blend of rockabilly, blues, country and rock that came to influence Neil Young and Bryan Ferry, among others. He won a Grammy Award in 2007 for an album with Mr. Clapton.

“Basically, I’m just a guitar player that figured out I wasn’t ever gonna be able to buy dinner with my guitar playing,” Mr. Cale told an interviewer for his official biography. “So I got into songwriting, which is a little more profitable business.”

John Weldon Cale was born in Oklahoma in 1938. He recorded “After Midnight” in the mid-1960s, according to the biography, but had retreated to his native Tulsa and “given up on the business part of the record business” by the time Mr. Clapton covered it in 1970. He heard it on the radio that year, he told NPR, “and I went, ‘Oh, boy, I’m a songwriter now. I’m not an engineer or an elevator operator.' ”

Mr. Cale released an album, “Naturally,” in 1972, to capitalize on that success, and continued to tour and release new music until 2009. But he declined to put his image on any of his covers and kept his vocals low amid the instruments on his recordings. He developed a reputation as a private figure and a musician’s musician while his songs were covered by Lynyrd Skynyrd, The Band, Deep Purple and Tom Petty, among others.

“I’d like to have the fortune,” he said in his biography, “but I don’t care too much about the fame.”

by Ravi Somaiya, NY Times |  Read more:

Enjoy the Rules

In his first book, You Are Not a Gadget, Lanier (an accomplished musician) describes the way in which file sharing has gutted the “musical middle class.” The magical thinking which pervades justification for endless digital copying has provided no answers to the vast decline in revenues for music. Despite the anti-elite posturing that has long attended pro-piracy arguments, the elites in the music business are doing fine. Jay-Z can continue to make his millions selling cell phones and T-shirts. It’s the musical middle class, people who clawed their way to sustainable employment in the arts in jobs as session musicians or A&R guys or similar, who have lost the most. People may have thought that they were merely robbing from the rich when they used file sharing services, but last time I checked, the guys in Metallica were still millionaires.

There were legitimate complaints, in the early years of file sharing, that there were no practical, legal alternatives that permitted consumers to purchase digital content. Those complaints can no longer be taken seriously. Now, it’s easy to get songs for 89 cents, albums for $5, 48-hour movie rentals for $2, endless apps for a couple bucks, access to Netflix’s vast streaming database for less than $10 a month…. Yet unauthorized and unpaid downloads continue to number in the hundreds of millions. Can it really be that less than $10 a month is still too much for access to so much content? How low, exactly, must the price point be before there is no longer a legitimate excuse for not paying it? What if that price can’t sustain the people who create the content?

People are still getting paid, with digital file sharing—the search engines that direct you to the files, the programs that enable the downloads, the ISPs that provide the bandwidth, the electric companies that power the computer. But among these winners, Lanier’s chief targets in Who Owns the Future? are what he calls the “Siren Servers,” which capture vast amounts of value that was once more evenly distributed throughout the economy, partly by preying on the willingness of many millions of people to provide content for free (or “free,” in his telling).

Lanier’s question is whether large masses of people deserve to somehow share in that value more broadly, given that it derives from their effort. It’s by now a well-worn cliché that Facebook’s hundreds of millions of users are not its customers but rather its product, a giant focus group that provides corporations with fine-grained data to parse and eyeballs for advertising. For individual Facebook users, the deal seems all right: a full-featured suite of social networking and data-hosting features at the cost of giving your data to strangers. Whether it works for society is a different question: The market value that siren servers create inevitably accrues to capital-intensive but low-employee companies, linking innovation to job destruction. (...)

His solution is, essentially, to rewind the Internet, and reverse a decision he considers momentous and destructive. For Lanier, the fundamental flaw of the Internet is that its links are not two-way—that is, by default, a link leads forward to another page, but that page does not by default contain a link back to pages that link to it. What Lanier laments is that linkbacks, trackbacks, pingbacks, and similar are not embedded in the basic technological architecture of the internet.

Two-way linking, Lanier argues, would be a key instrument of reciprocity on the Web, fostering a culture of mutually beneficial cooperation and due credit. Lanier, walking the talk, traces his idea to Ted Nelson, a network theorist who envisioned something like the Internet before it existed. The essence of network life would be linking reciprocity, Nelson believed, because only this made networking fair for all participants.

Having perceived the inadequate distribution of wealth our digital networks have fostered, Lanier dreams of an Internet where not only are links reciprocal but so is wealth generation. In the (overlong) last section of Who Owns the Future? Lanier describes his proposal for introducing this sort of reciprocity, a system wherein micropayments are constantly shuffled between online participants as each uses another’s data. The point is to cut people who aren’t holding stock in Google in on the action. The network would have embedded within it a financial transfer system that remunerated people for their data, whether personal or creative or annotative. Two-way linking, married to a system of automatic micropayments, makes this possible. “If the system remembers where information originally comes from,” Lanier writes, “then the people who are the sources of information can be paid for it.”

Today’s pure amateurs, who never derive any money from online interactions — which must describe the large majority of internet users — would in Lanier’s system be eligible to earn when they, say, logged onto Facebook or posted to YouTube. The more views, the higher the payment. Whenever a company scrapes users’ data, the users would be remunerated, depending on how much it is used. Meanwhile, as these users read blog posts or view videos, they would automatically pay for it, though some of that cost would be subsidized through advertising and through the value that flows to them when their data is scraped by Google Analytics.
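In data-structure terms, the core of Lanier's proposal is a ledger that never forgets provenance. The sketch below is only an illustration of the "remember where information originally comes from" idea, not Lanier's actual design; the class, method names, and payment rate are all invented for the example:

```python
class ProvenanceLedger:
    """Toy two-way-link ledger: every published item remembers its source,
    and every use of that item routes a micropayment back to the source.
    (Illustrative sketch only; all names and rates are hypothetical.)"""

    def __init__(self, rate_per_use=0.001):
        self.rate = rate_per_use   # dollars paid to the source per use
        self.sources = {}          # item_id -> author (the back-link)
        self.balances = {}         # author -> accrued micropayments

    def publish(self, item_id, author):
        self.sources[item_id] = author
        self.balances.setdefault(author, 0.0)

    def use(self, item_id):
        # The two-way link: a use points back to its origin, which gets paid.
        author = self.sources[item_id]
        self.balances[author] += self.rate
        return author

ledger = ProvenanceLedger()
ledger.publish("video-42", "alice")
for _ in range(1000):              # a thousand views of alice's video
    ledger.use("video-42")
print(round(ledger.balances["alice"], 2))  # ≈ 1.0 dollar accrued
```

The contrast with today's one-way Web is that here the `use` operation cannot happen without consulting the back-link, so attribution and payment are enforced by the architecture rather than bolted on afterward.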

by Freddie deBoer, TNI |  Read more:
Image: Found on a wall in Ljubljana. Unknown artist. 11th June 2009

Kent Klich
via:

Helping Make the Best of the End of Life

Bettie Lewis was dying of metastatic cancer. Like many people coming to the end of life, she harbored two great fears: uncontrolled pain and abandonment. Though she was not completely comfortable, her pain was well controlled and causing her little distress. She had also developed confidence that her family and the team caring for her would remain with her to the end. She would not die alone. Yet she was deeply anxious that she would not survive long enough to see her soon-to-be-born grandson.

Fortunately she had sought care at a hospital with an outstanding palliative care program, including a team of nurses and nurse practitioners, physicians, social workers, chaplains, and volunteers who make it their mission to ensure the best possible care for patients and families facing life-ending illnesses. Though medicine had been unable to provide Lewis a cure, her healthcare team had not forgotten its core mission, which is to care.

Unsurprisingly, palliative care does not generate large amounts of revenue, nor is it the sort of service that many hospitals choose to advertise. But when it is done well, it can make a huge difference. Over one million Americans die every year, and in many parts of the country, over three-quarters die in a hospital or long-term care facility. While many say it might be better to die at home, for a majority, this simply is not what happens.

Palliative care enhances understanding, reduces suffering, and helps patients, families, and the healthcare team clarify goals. Because the focus is not on making the disease go away, it is possible to focus attention on living with it as well as possible. This bears repeating: the goal is not just to die well but to live well. Members of the team can talk openly with patients and families about what is happening and what lies ahead, helping them navigate these uncharted waters.

Without such expertise and commitment, some in healthcare can get dying very wrong. We can fail to ensure that patients and families understand the terminal nature of the situation. We can fail to relieve suffering, including pain, nausea, respiratory distress, and unrecognized depression. And we can fail to address conflicts over the goals of care – sometimes family members push for comfort while others cling tightly to cure.

Patients, families, and health professionals all intend to do the right thing. We genuinely want to care for the gravely ill and do what we can to make their experience as comfortable and meaningful as possible. Many of us simply don’t know how to do it. What should we do? What should we say? What should we avoid saying and doing? Left alone in a state of denial, many of us might cloak the whole experience in fear and embarrassment. But given the right support and guidance, we can shine.

by Richard Gunderman and Peg Nelson, The Atlantic |  Read more:
Image: garryknight/flickr

The Wastefulness of Automation


Chris Dillow observes that "one function of the welfare state is to ensure that capital gets a big supply of labour, by making eligibility for unemployment benefit conditional upon seeking work." And despite noting that when jobs are scarce, paying some to "lie fallow" so others can work might be a good thing, he concludes that "this is certainly not in the interests of capitalists, who want a large labour supply - a desire which is buttressed by the morality of reciprocal altruism and the work ethic." (emphasis mine). Basic Income, therefore, is not going to happen because capitalist interests, claiming the moral high ground, will ensure that it never gains political traction.

But what if capitalists DON'T want a large labour supply? What if automation means that what capitalists really want is a very small, highly skilled workforce to control the robots that do all the work? What if paying people enough to live on simply is not cost-effective compared to the running costs of robots? In short, what if the costs of automated production fall to virtually zero?

I don't think I am dreaming this. I've noted previously that forcing down labour costs is one of the ways in which firms avoid the up-front costs of automation. But as automation becomes cheaper, and the efficiency gains from automation become larger, we may reach a situation where employing the majority of people at wages on which they can afford to live simply is not worthwhile. Robots can produce far more for far less.

This creates an interesting problem. The efficiency gains from automating production tend to create an abundance of products, which forces down prices. This sounds like a good thing: if goods and services are cheap and abundant, people can have whatever they want, can't they? Well, not if they are unemployed and have no unearned income. It is all too easy to foresee a nightmare future in which people who have been supplanted by robots scratch out a living from subsistence farming on motorway verges (all other land being farmed by robots), while lorries carrying products they cannot afford to buy flash past on the way to the stores that only those lucky enough to have jobs frequent.

But it wouldn't actually be like that. If only a small number of people can afford to buy the products produced by all these robots, then unless there is a vibrant export market for those products - which requires the majority of people in other countries to be doing rather better than merely surviving on a basic subsistence income - producers have a real problem. They would normally expect increasing efficiencies of production to push up profits, either because demand for products would be sufficient to maintain prices while production costs are falling, or because lower production costs feeding through into lower prices gave them a competitive advantage. But the efficiencies of production created by automating - including, eventually, the low-skill jobs that at the moment are too expensive to automate - may actually result in the destruction of profits. The fact is that robots are brilliant at supply, but they don't create demand. Only humans create demand - and if the majority of humans are so poor that they can only afford basic essentials, the economy will be constrained by lack of demand, not lack of supply. There would be no scarcity of products, at least to start with....but there would be scarcity of the means to obtain them.

by Frances Coppola, Pieria | Read more:
Image: uncredited

Friday, July 26, 2013


Lucio Fontana, Concetto spaziale, 1977
via:

In Florida, a Food-stamp Recruiter Deals with Wrenching Choices

A good recruiter needs to be liked, so Dillie Nerios filled gift bags with dog toys for the dog people and cat food for the cat people. She packed crates of cookies, croissants, vegetables and fresh fruit. She curled her hair and painted her nails fluorescent pink. “A happy, it’s-all-good look,” she said, checking her reflection in the rearview mirror. Then she drove along the Florida coast to sign people up for food stamps.

Her destination on a recent morning was a 55-and-over community in central Florida, where single-wide trailers surround a parched golf course. On the drive, Nerios, 56, reviewed techniques she had learned for connecting with some of Florida’s most desperate senior citizens during two years on the job. Touch a shoulder. Hold eye contact. Listen for as long as it takes. “Some seniors haven’t had anyone to talk to in some time,” one of the state-issued training manuals reads. “Make each person feel like the only one who matters.”

In fact, it is Nerios’s job to enroll at least 150 seniors for food stamps each month, a quota she usually exceeds. Alleviate hunger, lessen poverty: These are the primary goals of her work. But the job also has a second and more controversial purpose for cash-strapped Florida, where increasing food-stamp enrollment has become a means of economic growth, bringing almost $6 billion each year into the state. The money helps to sustain communities, grocery stores and food producers. It also adds to rising federal entitlement spending and the U.S. debt.

Nerios prefers to think of her job in simpler terms: “Help is available,” she tells hundreds of seniors each week. “You deserve it. So, yes or no?”

In Florida and everywhere else, the answer in 2013 is almost always yes. A record 47 million Americans now rely on the Supplemental Nutrition Assistance Program (SNAP), also known as food stamps, available for people with annual incomes below about $15,000. The program grew during the economic collapse because 10 million more Americans dropped into poverty. It has continued to expand four years into the recovery because state governments and their partner organizations have become active promoters, creating official “SNAP outreach plans” and hiring hundreds of recruiters like Nerios.

A decade ago, only about half of eligible Americans chose to sign up for food stamps. Now that number is 75 percent. (...)

Did he deserve it, though? Lonnie Briglia, 60, drove back to his Spanish Lakes mobile home with the recruiter’s pamphlets and thought about that. He wasn’t so sure.

Wasn’t it his fault that he had flushed 40 years of savings into a bad investment, buying a fleet of delivery trucks just as the economy crashed? Wasn’t it his fault that he and his wife, Celeste, had missed mortgage payments on the house where they raised five kids, forcing the bank to foreclose in 2012? Wasn’t it his fault the only place they could afford was an abandoned mobile home in Spanish Lakes, bought for the entirety of their savings, $750 in cash?

“We made horrible mistakes,” he said. “We dug the hole. We should dig ourselves out.”

Now he walked into their mobile home and set the SNAP brochures on the kitchen table. They had moved in three months before, and it had taken all of that time for them to make the place livable. They patched holes in the ceiling. They fixed the plumbing and rewired the electricity. They gave away most of their belongings to the kids — “like we died and executed the will,” he said. They decorated the walls of the mobile home with memories of a different life: photos of Lonnie in his old New Jersey police officer uniform, or in Germany for a manufacturing job that paid $25 an hour, or on vacation in their old pop-up camper.

A few weeks after they moved in, some of their 11 grandchildren had come over to visit. One of them, a 9-year-old girl, had looked around the mobile home and then turned to her grandparents on the verge of tears: “Grampy, this place is junky,” she had said. He had smiled and told her that it was okay, because Spanish Lakes had a community pool, and now he could go swimming whenever he liked.

Only later, alone with Celeste, had he said what he really thought: “A damn sky dive. That’s our life. How does anyone fall this far, this fast?”

And now SNAP brochures were next to him on the table — one more step down, he thought, reading over the bold type on the brochure. “Applying is easy.” “Eat right!” “Every $5 in SNAP generates $9.20 for the local economy.”

by Eli Saslow, Washington Post |  Read more:
Image: Michael S. Williamson / The Washington Post

Fifth Avenue, New York City, 1975, Joel Meyerowitz
via:

Call of the Wild: Pouncing on Reports of Anchorage’s Big Wild Life

Most people get to the office and pour themselves a cup of coffee. Jessy Coltrane, the Anchorage area wildlife biologist with the Alaska Department of Fish and Game, grabs a cup of coffee and, if it’s summer, answers her first bear or moose call of the day. Some days the call comes before the java. Some days she never makes it into the office, unless you consider her pickup truck her office.

One day in mid-June, her first call was from Laura Krip in Muldoon, but Coltrane was chasing bears and moose in east Anchorage and couldn’t check her office messages until after noon. Krip’s message, now some six hours old, said she and her 6-year-old daughter had just found themselves nose to nose with a grizzly bear.

Krip has walked her dogs in a wooded area along Chester Creek, just east of Muldoon Road, one or more times a day for years. By her estimate she’s walked those trails “thousands of times.” That day was different. (...)

In May and June it’s not just the bears that keep Coltrane hopping. She gets as many calls about moose. Cow moose drop calves from mid-May to mid-June and the young calves are often separated from their mothers by fences, traffic and other urban hazards. Young moose calves are also vulnerable to bears. Hiding out in the city is not a surefire way to avoid the big predators.

On the same morning as Krip’s close encounter, before she got to her office, Coltrane fielded several moose calls. A caller had heard thrashing and bawling in his backyard off Upper DeArmoun Road and assumed that a bear had killed a calf, a cow, or both during the night. He wanted any carcasses hauled away so the bear wouldn’t linger and possibly defend its kill. This situation occurs as many as a dozen times each summer in Anchorage. Sometimes the bear is still there. Coltrane and her assistant, Dave Battle, spent several hair-raising minutes crawling through a dense thicket of alders looking for a dead moose. Battle’s 12-gauge shotgun was locked and loaded; Coltrane was packing a can of bear spray because her finger was broken and bandaged. The search didn’t turn up any carcasses. A cow moose is a formidable opponent, even for a bear, and it appears that she won the skirmish. (...)

Being the Anchorage area wildlife biologist is a lot like being a firefighter. You’re on standby and whatever your immediate plans might be, the rest of your day is just a phone call away.

Coltrane and Battle arrived at the Alaska Department of Fish and Game office after noon, and Coltrane finally heard Krip’s adrenaline-spiked phone message. She headed for Muldoon, knowing it was too late to find the bear but wanting to familiarize herself with the trail where Krip had encountered it. On the way she received a more urgent call: a woman who had locked herself in a bathroom was reporting a bear in her house. Coltrane called the woman and headed for the Hillside.

by Rick Sinnott, Alaska Dispatch |  Read more:
Image: Rick Sinnott