Thursday, December 14, 2017

That Giving Feeling

The central question that private bankers ask their clients is: “What does your money mean to you?” It’s a fundamental moral issue at all levels of wealth. Revealing answers range from the odious (controlling the lives of your family members) to the visionary (saving the world).

Eventually, bankers say, the newly wealthy enjoy the luxury lifestyle for about five years before they start looking for some purpose in their lives.

New and old Asian wealth alike have confused and conflated charity with philanthropy, and with it the question of how to accomplish more with their vast assets. The best analogy is that charity is when you hand money to the Salvation Army in the street, which then decides how to distribute it. Philanthropy is when you stand in the street and decide for yourself whom to hand money to.

Living with the obligations and responsibilities of wealth isn’t easy. Big money creates its own gravity, forcing its owners’ lives into orbit. Gift giving as a form of charity is certainly commendable and flexible, allowing donors to shift the management of charity to established organisations.

But this concept is becoming inadequate, even corrupted, considering the super wealth being created by technology success. And charities are also becoming a source of potential abuse. (...)

Here’s a twist on the spirit of giving. In his recent Facebook post, Mark Zuckerberg said he intended to divest between 35 million and 75 million Facebook shares in the next 18 months to fund his charity. He currently holds 53 per cent of the voting stock. If he sold 35 million shares, his voting stake would be reduced to 50.6 per cent.

But, according to the Financial Times, if he sold 75 million shares, he would be dependent on the votes of co-founder Dustin Moskovitz to exercise control over a majority of votes. So Zuckerberg’s advisers cooked up a stock reclassification that effectively created a third, non-voting share class, which would have solved this problem. Objections and the threat of a lawsuit from investors stopped the plan.

Once the US$12 billion of proceeds from the stock sale is transferred to his foundation, all investment income is tax free. He only needs to donate 5 per cent of the principal per year to charity. Most foundations and family investment offices of that magnitude can make investment returns of more than 5 per cent per annum. So the principal in the foundation never actually needs to be disbursed for charity.
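
To see why, consider a toy compounding model. The 7 per cent return used below is a hypothetical figure for illustration; the article only claims that returns of more than 5 per cent per annum are achievable:

```python
# Toy model of a foundation's principal under the 5 percent minimum payout
# rule. The 7 percent annual return is a hypothetical assumption, not a
# figure from the article.

def principal_after(years, start=12e9, annual_return=0.07, payout_rate=0.05):
    """Principal remaining after `years`, assuming the investment return is
    earned (tax free) and the minimum payout disbursed each year."""
    p = start
    for _ in range(years):
        p += p * annual_return   # tax-free investment income
        p -= p * payout_rate     # minimum 5% charitable disbursement
    return p

# After 50 years the principal has grown, not shrunk:
print(principal_after(50) > 12e9)  # True
```

At a 7 per cent return and a 5 per cent payout, the principal compounds at about 1.65 per cent a year, so the endowment grows indefinitely while the donor keeps the upfront tax benefit.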

For many foundations, the present value of the tax subsidy to the tycoon personally far exceeds the net disbursement of the principal from the foundation on charity.

New technology wealth seems fixated on funding scalable charity projects built on the same model as their companies. Or ones that benefit their companies.

Unfortunately, many poverty alleviation projects can’t be scaled, such as finding clean water for poor villages in Africa. It would be more practical and noble if Zuckerberg would simply give away the US$12 billion, rather than playing games with tax planning.

by Peter Guy, South China Morning Post |  Read more:
Image: uncredited
[ed. See also: 2017 Was Bad for Facebook. 2018 Will Be Worse.]

Wednesday, December 13, 2017

The Future is Here – AlphaZero

Imagine this: you tell a computer system how the pieces move — nothing more. Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured the game out to a level that convincingly beats the strongest programs in the world! DeepMind, the company that recently created the strongest Go program in the world, turned its attention to chess, and came up with this spectacular result.

DeepMind and AlphaZero

About three years ago, DeepMind, a company owned by Google that specializes in AI development, turned its attention to the ancient game of Go. Go had been the one game that eluded all computer efforts to reach world class, and even up until the announcement, world-class play was deemed a goal that would not be attained for another decade. That is how large the gap was. When a public challenge and match was organized against the legendary player Lee Sedol, a South Korean whose track record had him in the ranks of the greatest ever, everyone thought it would be an interesting spectacle, but a certain win for the human. The question wasn’t even whether the program AlphaGo would win or lose, but how much closer it had come to the Holy Grail goal. The result was a crushing 4-1 victory, and a revolution in the Go world. In spite of a ton of second-guessing by the elite, who could not accept the loss, they eventually came to terms with the reality of AlphaGo, a machine that was among the very best, albeit not unbeatable. It had lost a game, after all.

The saga did not end there. A year later a new, updated version of AlphaGo was pitted against the world number one of Go, Ke Jie, a young Chinese player whose genius is not without parallels to Magnus Carlsen’s in chess. At the age of just 16 he won his first world title, and by the age of 17 he was the clear world number one. That had been in 2015, and now, at age 19, he was even stronger. The new match was held in China itself, and even Ke Jie knew he was most likely a serious underdog. There were no illusions anymore. He played superbly but still lost by a perfect 3-0, a testimony to the amazing capabilities of the new AI.

Many chess players and pundits had wondered how it would do in the noble game of chess. There were serious doubts about just how successful it might be. Go is a huge and long game played on a 19x19 grid, in which all pieces are the same and not one moves. Calculating ahead as in chess is an exercise in futility, so pattern recognition is king. Chess is very different. There is no questioning the value of knowledge and pattern recognition in chess, but the royal game is supremely tactical, and a lot of knowledge can be compensated for by simply outcalculating the opponent. This has been true not only of computer chess, but of humans as well.

However, there were some very startling results in the last few months that need to be understood. DeepMind’s interest in Go did not end with that match against the number one. You might ask yourself what more there was to do after that. Beat him 20-0 and not just 3-0? No, of course not. However, the super Go program became an internal litmus test of sorts. Its standard was unquestioned and quantified, so if one wanted to test a new self-learning AI and see how good it was, then throwing it at Go and comparing it to the AlphaGo program was a way to measure it.

A new AI was created called AlphaZero. It had several strikingly different features. The first was that it was not shown tens of thousands of master games in Go to learn from; it was shown none. Not a single one. It was merely given the rules, without any other information. The result was a shock. Within just three days its completely self-taught play was stronger than the version that had beaten Lee Sedol, a result the previous AI had needed over a year to achieve. Within three weeks it was beating the strongest AlphaGo, the version that had defeated Ke Jie. What is more: while the Lee Sedol version had used 48 highly specialized processors to create the program, this new version used only four!

Approaching chess might still seem unusual. After all, although DeepMind had already shown near revolutionary breakthroughs thanks to Go, that had been a game that had yet to be ‘solved’. Chess already had its Deep Blue 20 years ago, and today even a good smartphone can beat the world number one. What is there to prove exactly?

It needs to be remembered that Demis Hassabis, the founder of DeepMind, has a profound chess connection of his own. He had been a chess prodigy in his own right, and at age 13 was the second highest rated player under 14 in the world, second only to Judit Polgar. He eventually left the chess track to pursue other things, like founding his own PC video game company at age 17, but the link is there. There was still a burning question on everyone’s mind: just how well would AlphaZero do if it were focused on chess? Would it just be very smart, but smashed by the number-crunching engines of today, where a single ply is often the difference between winning and losing? Or would something special come of it?

A new paradigm

On December 5 the DeepMind group published a new paper on arXiv, the preprint server hosted by Cornell University, called "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", and the results were nothing short of staggering. AlphaZero had done more than just master the game; it had attained new heights in ways considered inconceivable. The proof is in the pudding, of course, so before going into some of the fascinating nitty-gritty details, let’s cut to the chase. It played a match against the latest and greatest version of Stockfish and won by an incredible score of 64:36. Not only that: AlphaZero had zero losses (28 wins and 72 draws)!

Stockfish needs no introduction to ChessBase readers, but it's worth noting that it was examining nearly 900 times more positions per second! Indeed, AlphaZero was calculating roughly 80 thousand positions per second, while Stockfish, running on a PC with 64 threads (likely a 32-core machine), was examining 70 million positions per second. To better understand how big a deficit that is: if a version of Stockfish were to run 900 times slower, it would be searching roughly eight moves less deep. How is this possible?
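
The arithmetic behind those figures can be checked directly. The effective branching factor of about 2.3 used below is a commonly cited rule of thumb for alpha-beta chess engines, assumed here for illustration:

```python
import math

# Search speeds from the article (positions per second)
alpha_zero_nps = 80_000
stockfish_nps = 70_000_000

ratio = stockfish_nps / alpha_zero_nps
print(round(ratio))  # 875, i.e. "nearly 900 times"

# With an effective branching factor around 2.3 (an assumed, commonly
# cited figure for alpha-beta engines), a ~900x speed gap corresponds
# to roughly 8 plies of extra search depth:
extra_plies = math.log(ratio) / math.log(2.3)
print(round(extra_plies))  # 8
```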

The paper "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" at Cornell University.

The paper explains:
“AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more “human-like” approach to search, as originally proposed by Shannon. Figure 2 shows the scalability of each player with respect to thinking time, measured on an Elo scale, relative to Stockfish or Elmo with 40ms thinking time. AlphaZero’s MCTS scaled more effectively with thinking time than either Stockfish or Elmo, calling into question the widely held belief that alpha-beta search is inherently superior in these domains.”
In other words, instead of a hybrid brute-force approach, which has been the core of chess engines to date, it went in a completely different direction, opting for an extremely selective search that emulates how humans think. A top player may be able to outcalculate a weaker player in both consistency and depth, but that remains a joke compared to what even the weakest computer programs are doing. It is the human’s sheer knowledge and ability to filter out so many moves that allows them to reach the standard they do. Remember that although Garry Kasparov lost to Deep Blue, it is not at all clear that the machine was genuinely stronger than him even then, and this despite its reaching speeds of 200 million positions per second. If AlphaZero is really able to use its understanding not only to compensate for examining nearly 900 times fewer positions, but to surpass the engines that examine them, then we are looking at a major paradigm shift.

How does it play?

Since AlphaZero did not benefit from any chess knowledge, which means no games or opening theory, it also means it had to discover opening theory on its own. And do recall that this is the result of only 24 hours of self-learning. The team produced fascinating graphs showing the openings it discovered as well as the ones it gradually rejected as it grew stronger!

by Albert Silver, Chess News |  Read more:
Image: uncredited

Tuesday, December 12, 2017

The Transhumanist FAQ

1.1 What is transhumanism?

Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows: 

(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities. 

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies. Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”. 

It is not our human shape or the details of our current human biology that define what is valuable about us, but rather our aspirations and ideals, our experiences, and the kinds of lives we lead. To a transhumanist, progress occurs when more people become more able to shape themselves, their lives, and the ways they relate to others, in accordance with their own deepest values. Transhumanists place a high value on autonomy: the ability and right of individuals to plan and choose their own lives. Some people may of course, for any number of reasons, choose to forgo the opportunity to use technology to improve themselves. Transhumanists seek to create a world in which autonomous individuals may choose to remain unenhanced or choose to be enhanced and in which these choices will be respected. 

Through the accelerating pace of technological development and scientific understanding, we are entering a whole new stage in the history of the human species. In the relatively near future, we may face the prospect of real artificial intelligence. New kinds of cognitive tools will be built that combine artificial intelligence with interface technology. Molecular nanotechnology has the potential to manufacture abundant resources for everybody and to give us control over the biochemical processes in our bodies, enabling us to eliminate disease and unwanted aging. Technologies such as brain-computer interfaces and neuropharmacology could amplify human intelligence, increase emotional well-being, improve our capacity for steady commitment to life projects or a loved one, and even multiply the range and richness of possible emotions. On the dark side of the spectrum, transhumanists recognize that some of these coming technologies could potentially cause great harm to human life; even the survival of our species could be at risk. Seeking to understand the dangers and working to prevent disasters is an essential part of the transhumanist agenda. 

Transhumanism is entering the mainstream culture today, as increasing numbers of scientists, scientifically literate philosophers, and social thinkers are beginning to take seriously the range of possibilities that transhumanism encompasses. A rapidly expanding family of transhumanist groups, differing somewhat in flavor and focus, and a plethora of discussion groups in many countries around the world, are gathered under the umbrella of the World Transhumanist Association, a non-profit democratic membership organization. 

1.2 What is a posthuman?

It is sometimes useful to talk about possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards. The standard word for such beings is “posthuman”. (Care must be taken to avoid misinterpretation. “Posthuman” does not denote just anything that happens to come after the human era, nor does it have anything to do with the “posthumous”. In particular, it does not imply that there are no humans anymore.) 

Many transhumanists wish to follow life paths which would, sooner or later, require growing into posthuman persons: they yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states; to be able to avoid feeling tired, hateful, or irritated about petty things; to have an increased capacity for pleasure, love, artistic appreciation, and serenity; to experience novel states of consciousness that current human brains cannot access. It seems likely that the simple fact of living an indefinitely long, healthy, active life would take anyone to posthumanity if they went on accumulating memories, skills, and intelligence. 

Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see “What is uploading?”], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques. 

Some authors write as though simply by changing our self-conception, we have become or could become posthuman. This is a confusion or corruption of the original meaning of the term. The changes required to make us posthuman are too profound to be achievable by merely altering some aspect of psychological theory or the way we think about ourselves. Radical technological modifications to our brains and bodies are needed. It is difficult for us to imagine what it would be like to be a posthuman person. Posthumans may have experiences and concerns that we cannot fathom, thoughts that cannot fit into the three-pound lumps of neural tissue that we use for thinking. Some posthumans may find it advantageous to jettison their bodies altogether and live as information patterns on vast super-fast computer networks. Their minds may be not only more powerful than ours but may also employ different cognitive architectures or include new sensory modalities that enable greater participation in their virtual reality settings. Posthuman minds might be able to share memories and experiences directly, greatly increasing the efficiency, quality, and modes in which posthumans could communicate with each other. The boundaries between posthuman minds may not be as sharply defined as those between humans. 

Posthumans might shape themselves and their environment in so many new and profound ways that speculations about the detailed features of posthumans and the posthuman world are likely to fail.

by Nick Bostrom, Oxford University |  Read more: (pdf)
[ed. Repost]

Naked 9 – 4
blood moon


For the Good of Society - Delete Your Map App

I live on an obnoxiously quaint block in South Berkeley, California, lined with trees and two-story houses. There’s a constant stream of sidewalk joggers before and after work, and plenty of (good) dogs in the yards. Trick-or-treaters from distant regions of the East Bay invade on Halloween.

Once a week, the serenity is interrupted by the sound of a horrific car crash. Sometimes, it’s a tire screech followed by the faint din of metal on metal. Other times, a boom stirs the neighbors outside to gawk. It’s always at the intersection of Hillegass, my block, and Ashby, one of the city’s thoroughfares. It generally happens around rush hour, when the street is clogged with cars.

It wasn’t always this way. In 2001, the city designated the street as Berkeley’s first “bicycle boulevard,” presumably due to some combination of it being relatively free of traffic and its offer of a direct route from the UC Berkeley campus down into Oakland. But since that designation, another group has discovered the exploit. Here, for the hell of it, are other events that have occurred since 2001:

2005: Google Maps is launched.
2006: Waze is launched.
2009: Uber is founded.
2012: Lyft is founded.

“The phenomenon you’re experiencing is happening all over the U.S.,” says Alexandre Bayen, director of transportation studies at UC Berkeley.

Pull up a simple Google search for “neighborhood” and “Waze,” and you’re bombarded with local news stories about similar once-calm side streets that now host rush-hour jams and late-night speed demons. It’s not only annoying as hell, it’s a scenario ripe for accidents; among the top causes of accidents are driver distraction (say, by looking at an app), unfamiliarity with the street (say, because an app took you down a new side street), and an increase in overall traffic.

“The root cause is the use of routing apps,” says Bayen, “but over the last two to three years, there’s the second layer of ride-share apps.” (...)

All that extra traffic down previously empty streets has created an odd situation in which cities are constantly playing defense against the algorithms.

“Typically, the city or county, depending on their laws, doesn’t have a way to fight this,” says Bayen, “other than by doing infrastructure upgrades.”

Fremont, California, has lobbed some of the harshest resistance, instituting rush-hour restrictions, and adding stop signs and traffic lights at points of heavy congestion. San Francisco is considering marking designated areas where people can be picked up or dropped off by ride-shares (which, hmm, seems familiar). Los Angeles has tinkered with speed bumps and changing two-way streets into one-ways. (Berkeley has finally decided to play defense on my block by installing a warning system that will slow cars at the crash-laden intersection; it will be funded by taxpayers.) (...)

Perhaps you see the problem. If cities thwart map apps and ride-share services through infrastructure changes with the intent to slow traffic down, it has the effect of slowing down traffic. So, the algorithm may tell drivers to go down another side street, and the residents who’ve been griping to the mayor may be pleased, but traffic across the city as a whole has been negatively affected, making everyone’s travel longer than before. “It’s nuts,” says Bayen, “but this is the reality of urban planning.”

Bayen points out that this is sort of a gigantic version of the prisoner’s dilemma. “If everybody’s doing the selfish thing, it’s bad for society,” says Bayen. “That’s what’s happening here.” Even though the app makes the route quicker for the user, that’s only in relation to other drivers not using the app, not to their previous drives. Now, because everyone is using the app, everyone’s drive-times are longer compared to the past. “These algorithms are not meant to improve traffic, they’re meant to steer motorists to their fastest path,” he says. “They will give hundreds of people the shortest paths, but they won’t compute for the consequences of those shortest paths.”
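
Bayen’s point can be sketched as a toy congestion game in the spirit of Pigou’s classic two-route example; all of the numbers here are hypothetical, not drawn from the article:

```python
# Toy congestion game: 1000 drivers choose between a side street whose
# travel time grows as it fills up, and a main road that always takes
# 20 minutes. The numbers are illustrative, not from the article.

DRIVERS = 1000

def side_street_time(cars):
    """Side-street travel time in minutes: empty it's instant, full it's
    just as slow as the main road."""
    return 20 * cars / DRIVERS

def average_time(cars_on_side):
    """Average travel time across all drivers, in minutes."""
    total = (cars_on_side * side_street_time(cars_on_side)
             + (DRIVERS - cars_on_side) * 20)
    return total / DRIVERS

# Selfish routing: the side street looks faster until it is exactly as
# slow as the main road, so the app keeps sending drivers down it.
print(average_time(1000))  # 20.0 -- no one is better off than before
# Coordinated routing: send only half down the side street.
print(average_time(500))   # 15.0 -- everyone's average trip is shorter
```

At the selfish equilibrium the side street is exactly as slow as the main road, so the app’s users gain nothing on average while the residents lose their quiet street.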

by Rick Paulas, Select/All | Read more:
Image: Waze

How Email Open Tracking Quietly Took Over the Web

"I just came across this email," began the message, a long overdue reply. But I knew the sender was lying. He’d opened my email nearly six months ago. On a Mac. In Palo Alto. At night.

I knew this because I was running the email tracking service Streak, which notified me as soon as my message had been opened. It told me where, when, and on what kind of device it was read. With Streak enabled, I felt like an inside trader whenever I glanced at my inbox, privy to details that gave me maybe a little too much information. And I certainly wasn’t alone.

There are some 269 billion emails sent and received daily. That’s roughly 35 emails for every person on the planet, every day. Over 40 percent of those emails are tracked, according to a study published last June by OMC, an “email intelligence” company that also builds anti-tracking tools.

The tech is pretty simple. Tracking clients embed a line of code in the body of an email—usually in a 1x1 pixel image, so tiny it's invisible, but also in elements like hyperlinks and custom fonts. When a recipient opens the email, the tracking client sees that the pixel has been downloaded, and records when, where, and on what kind of device. Newsletter services, marketers, and advertisers have used the technique for years to collect data about their open rates; major tech companies like Facebook and Twitter followed suit in their ongoing quest to profile and predict our behavior online.
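
The mechanism described above can be sketched in a few lines. The tracker domain and helper function here are hypothetical illustrations, not Streak’s actual code:

```python
# A minimal sketch of email open tracking. The tracker domain
# (track.example.com) and helper names are hypothetical.
import uuid

TOKENS = {}  # token -> recipient, kept by the sender to match opens to people

def add_tracking_pixel(html_body, recipient):
    """Append an invisible 1x1 image whose URL is unique to this message.
    When the recipient's mail client fetches the image, the tracker's web
    server logs the request: its timestamp (when the email was opened),
    the client IP (rough location), and the User-Agent (device type)."""
    token = uuid.uuid4().hex
    TOKENS[token] = recipient
    pixel = (f'<img src="https://track.example.com/open/{token}.gif" '
             f'width="1" height="1" alt="" style="display:none">')
    return html_body + pixel

tracked = add_tracking_pixel("<p>Just following up!</p>", "pr@example.com")
print('width="1" height="1"' in tracked)  # True
```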

But lately, a surprising—and growing—number of tracked emails are being sent not from corporations, but acquaintances. “We have been in touch with users that were tracked by their spouses, business partners, competitors,” says Florian Seroussi, the founder of OMC. “It's the wild, wild west out there.”

According to OMC's data, a full 19 percent of all “conversational” email is now tracked. That’s one in five of the emails you get from your friends. And you probably never noticed.

“Surprisingly, while there is a vast literature on web tracking, email tracking has seen little research,” noted an October 2017 paper published by three Princeton computer scientists. All of this means that billions of emails are sent every day to millions of people who have never consented in any way to be tracked, but are being tracked nonetheless. And Seroussi believes that some, at least, are in serious danger as a result. (...)

I stumbled upon the world of email tracking last year, while working on a book about the iPhone and the notoriously secretive company that produces it. I’d reached out to Apple to request some interviews, and the PR team had initially seemed polite and receptive. We exchanged a few emails. Then they went radio silent. Months went by, and my unanswered emails piled up. I started to wonder if anyone was reading them at all.

That’s when, inspired by another journalist who’d been stonewalled by Apple, I installed the email tracker Streak. It was free, and took about 30 seconds. Then, I sent another email to my press contact. A notification popped up on my screen: My email had been opened almost immediately, inside Cupertino, on an iPhone. Then it was opened again, on an iMac, and again, and again. My messages were not only being read, but widely disseminated. It was maddening, watching the grey little notification box—“Someone just viewed ‘Regarding book interviews’”—pop up over and over and over, without a reply.

So I decided to go straight to the top. If Apple’s PR team was reading my emails, maybe Tim Cook would, too.

I wrote Cook a lengthy email detailing the reasons he should join me for an interview. When I didn’t hear back, I drafted a brief follow-up, enabled Streak, hit send. Hours later, I got the notification: My email had been read. Yet one glaring detail looked off. According to Streak, the email had been read on a Windows Desktop computer.

Maybe it was a fluke. But after a few weeks, I sent another follow up, and the email was read again. On a Windows machine.

That seemed crazy, so I emailed Streak to ask about the accuracy of its service, disclosing that I was a journalist. In the confusing email exchange with Andrew from Support that followed, I was told that Streak is “very accurate,” as it can let you know what time zone or state your lead is in—but only if you’re a salesperson. Andrew stressed that “if you’re a reporter and wanted to track someone's whereabouts, [it’s] not at all accurate.” It quickly became clear that Andrew had the unenviable task of threading a razor-thin needle: maintaining that Streak supplied very precise data while also being a friendly and non-intrusive product. After all, Streak users want the most accurate information possible, but the public might chafe if it knew just how accurate that data was—and considered what it could be used for besides honing sales pitches. This is the paradox that threatens to pop the email tracking bubble as it grows into ubiquity. No wonder Andrew got Orwellian: “Accuracy is entirely subjective,” he insisted, at one point.

Andrew did, however, unequivocally say that if Streak listed the kind of device used—as opposed to listing unknown—then that info was also “very accurate.” Even if it pertained to the CEO of Apple.

by Brian Merchant, Wired |  Read more:
Image: Getty

He Made Masterpieces with Manure

On the acknowledgements page of Traces of Vermeer, Jane Jelley thanks one friend who tracked down pig bladders and another who harvested mussel shells from a freshwater moat. Jelley, a painter, takes her research on the Dutch Golden Age painter Johannes Vermeer (1632–75) out of galleries and archives and into the studio. Her experiments are two parts Professor Branestawm, one part Great British Bake Off. She discovers that she can make yellow ‘lakes’ – pigments produced from dyes of the kind used by Vermeer and his contemporaries to create subtle ‘glazed’ effects – in her kitchen at home. First, you collect some unripe buckthorn berries from a hedgerow or the flowers of the broom shrub. Next, ‘You have to boil up the plants; and then you need some chalk, some alum; some coffee filters; and a large turkey baster.’ She reminds us how fortunate modern artists are to be able to buy their paint in ready-mixed tubes from Winsor & Newton.

Before he laid down even a dot of paint, Vermeer would have weighed, ground, burned, sifted, heated, cooled, kneaded, washed, filtered, dried and oiled his colours. Some pigments – the rare ultramarine blue made from lapis lazuli from Afghanistan, for example – had to be plunged into cold vinegar. Others – such as lead white – needed to be kept in a hut filled with horse manure. The fumes caused the lead to corrode, creating flakes of white carbonate that were scraped off by hand.

Vermeer knew how to soak old leather gloves to extract ‘gluesize’, applied as a coating to artists’ canvas. Or he might have followed the recipe for goat glue in Cennino Cennini’s painters’ manual The Craftsman’s Handbook: boiled clippings of goat muzzles, feet, sinews and skin. This was best made in January or March, in ‘great cold or high winds’, to disperse the goaty smell.

An artist had to be a chemist – and he had to have a strong stomach. He would have known, writes Jelley, ‘the useful qualities of wine, ash, urine, and saliva’. ‘Do not lick your brush or spatter your mouth with paint,’ warned Cennini. Lead white and arsenic yellow were poisonous, goat glue merely unpleasant. The art historian Jan Veth, writing in 1908 about Girl with a Pearl Earring (c 1665–7), fancied that Vermeer had painted with ‘the dust of crushed pearl’. Forensics have since revealed the earthier truth.

by Laura Freeman, Literary Review |  Read more:
Image: Wikipedia

Monday, December 11, 2017

Kimi Werner
[ed. Free diver extraordinaire (and rider of great white sharks).]

Jonas Wood, Scholl Canyon 2, 2017

[ed. Mondays]

What to Make of New Positive NSI-189 Results?

I wanted NSI-189 to be real so badly.

Pharma companies used to love antidepressants. Millions of people are depressed. Millions of people who aren’t depressed think they are. Sell them all a pill per day for their entire lifetime, and you’re looking at a lot of money. So they poured money into antidepressant research, culminating in the ’80s and ’90s with the discovery of selective serotonin reuptake inhibitors (SSRIs) like Prozac. Since then, research has moved into exciting new areas, like “more SSRIs”, “even more SSRIs”, “drugs that claim to be SNRIs but on closer inspection are mostly just SSRIs”, and “drugs that claim to be complicated serotonin modulators but realistically just work as SSRIs”. Some companies still go through the pantomime of inventing new supposedly-not-SSRI drugs, and some psychiatrists still go through the pantomime of pretending to be excited about them, but nobody’s heart is really in it anymore.

How did it come to this? Apparently discovering new antidepressants is really hard. Part of it is that depression has such a high placebo response rate (realistically probably mostly regression to the mean) that it’s hard for even a good medication to separate much from placebo. Another part is that psychopharmacology is just a really difficult field even at the best of times. Pharma companies tried, tried some more, and gave up. All the new no-really-not-SSRIs are the fig leaf to cover their failure. Now people are gradually giving up on even pretending. There are still lots of exciting possibilities coming from the worlds of academia and irresponsible self-experimentation, but the Very Serious People have left the field. This is a disaster, insofar as they’re the only people who can get things through the FDA and into the mass market where anyone besides fringe enthusiasts will use them.

Enter NSI-189. A tiny pharma company called Neuralstem announced that they had a new antidepressant that worked directly on neurogenesis – a totally new mechanism! nothing at all like SSRIs! – and seemed to be getting miraculous results. Lots of people (including me) suspect neurogenesis is pretty fundamental to depression in a way serotonin isn’t, so the narrative really worked – we’ve finally figured out a way to hit the root cause of depression instead of fiddling around with knobs ten steps away from the actual problem. Irresponsible self-experimenters managed to synthesize and try some of it, and reported miraculous stories of treatment-resistant depressions vanishing overnight. Someone had finally done the thing!

There are many theories about what place our world holds in God’s creation. Here’s one with as much evidence as any other: Earth was created as a Hell for bad psychiatrists. For one thing, it would explain why there are so many of them here. For another, it would explain why – after getting all of our hopes so high – NSI-189 totally flopped in FDA trials.

I don’t think the data have been published anywhere (more evidence for the theory!), but we can read off the important parts of the story from Neuralstem’s press release. In Stage 1, they put 44 patients on 40 mg NSI-189 daily, another 44 patients on 80 mg daily, and 132 patients on placebo for six weeks. In Stage 2, they took the people from the placebo group who hadn’t gotten better in Stage 1 and put half of them on NSI-189, leaving the other half on placebo – I think this was a clever trick to get a group of people pre-selected for not responding to placebo and so avoid the problem where everyone does well on placebo and so it’s a washout. But all of this was for nothing. On the primary endpoint – a depression rating instrument called MADRS – the NSI-189 group failed to significantly outperform placebo during either stage.

Neuralstem’s stock fell 61% on news of the study. Financial blog Seeking Alpha advised readers that Neuralstem Is Doomed. Investors tripped over themselves to withdraw support from a corporation that apparently was unable to handle the absolute bread-and-butter most basic job of a pharma company – fudging clinical trial results so that nobody figures out they were negative until half the US population is on their drug.

From last month’s New York Times:
The first thing you feel when a [drug] trial fails is a sense of shame. You’ve let your patients down. You know, of course, that experimental drugs have a poor track record – but even so, this drug had seemed so promising (you cannot erase the image of the cancer cells dying under the microscope). You feel as if you’ve shortchanged the Hippocratic Oath […]
There’s also a more existential shame. In an era when Big Pharma might have macerated the last drips of wonder out of us, it’s worth reiterating the fact: Medicines are notoriously hard to discover. The cosmos yields human drugs rarely and begrudgingly – and when a promising candidate fails to work, it is as if yet another chemical morsel of the universe has been thrown into the dumpster. The meniscus of disappointment rises inside you: That domain of human biology that the medicine hoped to target may never be breached therapeutically.
And so the rest of us gave a heavy sigh, shed a single tear, and went back to telling ourselves that maybe vortioxetine wasn’t exactly an SSRI, in some ways.


But the reason I’m writing about all of this now is that Neuralstem has just put out a new press release saying that actually, good news! NSI-189 works after all! Their stock rose 67%! Investment blogs are writing that Neuralstem Is A Big Winner and boasting about how much Neuralstem stock they were savvy enough to hold on to!

What are these new results? Can we believe them?

I’m still trying to figure out exactly what’s going on; the results themselves were presented at a conference and aren’t directly available. But from what I can gather from the press release, this isn’t a new trial. It’s a new set of secondary endpoints from the first trial, which Neuralstem thinks cast a new light on the results.

What are secondary endpoints? Often during a drug trial, people want to measure whether the drug works in multiple different ways. For depression, these are usually rating scales that ask about depressive symptoms – things like “On a scale of 1 to 5, how sad are you?” or “How many times in the past month have you considered suicide?”. You could give the MADRS, a scale that focuses on emotional symptoms. Or you could give the HAM-D, a scale that focuses more on psychosomatic symptoms. Or since depression makes people think less clearly, you could give them a cognitive battery. Depending on what you want to do, all of these are potentially good choices.

But once you let people start giving a lot of tests, there’s a risk that they’ll just keep giving more and more tests until they find one that gives results they like. Remember, one out of every twenty statistical analyses you do will be positive at the 0.05 level by pure coincidence. So if you give people ten tests, you’ve got a pretty good chance of getting one positive result – at which point, you trumpet that one to the world.
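The arithmetic behind that worry is easy to check. Here is a minimal sketch (illustrative only, not a model of the actual NSI-189 analyses, and assuming the ten tests are independent):

```python
# If each of 10 independent tests has a 5% false-positive rate,
# the chance that at least one comes up "significant" by pure
# coincidence is the complement of all ten staying negative.

alpha = 0.05   # conventional significance threshold
n_tests = 10   # number of secondary endpoints examined

p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"{p_at_least_one:.1%}")  # prints 40.1%
```

In other words, run ten tests and you have roughly a two-in-five chance of a spurious "positive" result to trumpet, even if the drug does nothing at all.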

Statisticians try to solve this loophole by demanding researchers pre-identify a primary endpoint. That is, you have to say beforehand which test you want to count. You can do however many tests you want, but the other ones (“secondary endpoints”) are for your own amusement and edification. The primary endpoint is the one that the magical “p = 0.05 means it works” criteria gets applied to.

Neuralstem chose the MADRS scale as their primary endpoint and got a null result. This is what they released in July that had everybody so disappointed. The recently-released data are a bunch of secondary endpoints, some of which are positive. This is the new result that has everybody so excited.

You might be asking “Wait, I thought the whole point of having primary versus secondary endpoints was so people wouldn’t do that?” Well…yes. I’m trying to figure out if there’s any angle here besides “Company does thing that you’re not supposed to do because it can always give you positive results, gets positive results, publishes a press release”. I am not an expert here. But I can’t find one. (...)

Except…why did their stock jump 67%? We just got done talking about the efficient market hypothesis and the theory that the stock market is never wrong in a way detectable by ordinary humans.

First of all, maybe that’s wrong. My dad is a doctor, and he swears that he keeps making a lot of money from medical investments. He just sees some new medical product, says “Yeah, that sounds like the sort of thing that will work and become pretty popular”, and buys it. I keep telling him this cannot possibly work, and he keeps coming to me a year later telling me he made a killing and now has a new car. Maybe all financial theory is a total lie, and if you get a lucky feeling when looking at a company’s logo you should invest in them right away and you will always make a fortune.

Or maybe it’s that it’s not investors’ job to answer “Does this drug work?” but rather “Will investing in this stock make me money?”. Neuralstem has mentioned that they’ll be bringing these new results in front of the FDA, presumably in the hopes of getting a Phase III trial. FDA standards seem to have gotten looser lately, and maybe a fig leaf of positive results is all they need to give the go-ahead for a bigger trial anyway – after all, they wouldn’t be approving the drug, just saying more research is appropriate. Then maybe that trial would come out better. Or it would be big enough that they would discover some alternate use (remember, Viagra was originally developed to lower blood pressure, and only got switched to erectile dysfunction after Phase 1 trials). Or maybe Neuralstem will join the 21st century and hire a competent Obfuscation Department.

I don’t know. I’m beyond caring. The sign of a really deep depression is abandoning hope, and I’ve abandoned hope in NSI-189…

by Scott Alexander, Slate Star Codex |  Read more:
Image: via
[ed. See also: NSI-189: A Nootropic Antidepressant That Promotes Neurogenesis]

Why Corrupt Bankers Avoid Jail

Prosecution of white-collar crime is at a twenty-year low.

In the summer of 2012, a subcommittee of the U.S. Senate released a report so brimming with international intrigue that it read like an airport paperback. Senate investigators had spent a year looking into the London-based banking group HSBC, and discovered that it was awash in skulduggery. According to the three-hundred-and-thirty-four-page report, the bank had laundered billions of dollars for Mexican drug cartels, and violated sanctions by covertly doing business with pariah states. HSBC had helped a Saudi bank with links to Al Qaeda transfer money into the United States. Mexico’s Sinaloa cartel, which is responsible for tens of thousands of murders, deposited so much drug money in the bank that the cartel designed special cash boxes to fit HSBC’s teller windows. On a law-enforcement wiretap, one drug lord extolled the bank as “the place to launder money.”

With four thousand offices in seventy countries and some forty million customers, HSBC is a sprawling organization. But, in the judgment of the Senate investigators, all this wrongdoing was too systemic to be a matter of mere negligence. Senator Carl Levin, who headed the investigation, declared, “This is something that people knew was going on at that bank.” Half a dozen HSBC executives were summoned to Capitol Hill for a ritual display of chastisement. Stuart Gulliver, the bank’s C.E.O., said that he was “profoundly sorry.” Another executive, who had been in charge of compliance, announced during his testimony that he would resign. Few observers would have described the banking sector as a hotbed of ethical compunction, but even by the jaundiced standards of the industry HSBC’s transgressions were extreme. Lanny Breuer, a senior official at the Department of Justice, promised that HSBC would be “held accountable.”

What Breuer delivered, however, was the sort of velvet accountability to which large banks have grown accustomed: no criminal charges were filed, and no executives or employees were prosecuted for trafficking in dirty money. Instead, HSBC pledged to clean up its institutional culture, and to pay a fine of nearly two billion dollars: a penalty that sounded hefty but was only the equivalent of four weeks’ profit for the bank. The U.S. criminal-justice system might be famously unyielding in its prosecution of retail drug crimes and terrorism, but a bank that facilitated such activity could get away with a rap on the knuckles. A headline in the Guardian tartly distilled the absurdity: “HSBC ‘Sorry’ for Aiding Mexican Drug Lords, Rogue States and Terrorists.”

In the years since the mortgage crisis of 2008, it has become common to observe that certain financial institutions and other large corporations may be “too big to jail.” The Financial Crisis Inquiry Commission, which investigated the causes of the meltdown, concluded that the mortgage-lending industry was rife with “predatory and fraudulent practices.” In 2011, Ray Brescia, a professor at Albany Law School who had studied foreclosure procedures, told Reuters, “I think it’s difficult to find a fraud of this size . . . in U.S. history.” Yet federal prosecutors filed no criminal indictments against major banks or senior bankers related to the mortgage crisis. Even when the authorities uncovered less esoteric, easier-to-prosecute crimes—such as those committed by HSBC—they routinely declined to press charges.

This regime, in which corporate executives have essentially been granted immunity, is relatively new. After the savings-and-loan crisis of the nineteen-eighties, prosecutors convicted nearly nine hundred people, and the chief executives of several banks went to jail. When Rudy Giuliani was the top federal prosecutor in the Southern District of New York, he liked to march financiers off the trading floor in handcuffs. If the rules applied to mobsters like Fat Tony Salerno, Giuliani once observed, they should apply “to big shots at Goldman Sachs, too.” As recently as 2006, in the trials that followed Enron’s implosion, such titans as Jeffrey Skilling and Kenneth Lay were convicted of conspiracy and fraud.

Something has changed in the past decade, however, and federal prosecutions of white-collar crime are now at a twenty-year low. As Jesse Eisinger, a reporter for ProPublica, explains in a new book, “The Chickenshit Club: Why the Justice Department Fails to Prosecute Executives” (Simon & Schuster), a financial crisis has traditionally been followed by a legal crackdown, because a market contraction reveals all the wishful accounting and outright fraud that were hidden when the going was good. In Warren Buffett’s memorable formulation, “You only find out who is swimming naked when the tide goes out.” After the mortgage crisis, people in Washington and on Wall Street expected prosecutions. Eisinger reels off a list of potential candidates for criminal charges: Countrywide, Washington Mutual, Lehman Brothers, Citigroup, A.I.G., Bank of America, Merrill Lynch, Morgan Stanley. Although fines were paid, and the Financial Crisis Inquiry Commission referred dozens of cases to prosecutors, there were no indictments, no trials, no jail time. As Eisinger writes, “Passing on one investigation is understandable; passing on every single one starts to speak to something else.” (...)

The very conception of the modern corporation is that it limits individual liability. Yet, in the decades after the United Brands case, prosecutors often pursued both errant executives and the companies they worked for. When the investment firm Drexel Burnham Lambert was suspected of engaging in stock manipulation and insider trading, in the nineteen-eighties, prosecutors levelled charges not just against financiers at the firm, including Michael Milken, but also against the firm itself. (Drexel Burnham pleaded guilty, and eventually shut down.) After the immense fraud at Enron was exposed, federal authorities pursued its accounting company, Arthur Andersen, for helping to cook the books. Arthur Andersen executives, desperate to cover their tracks, deleted tens of thousands of e-mails and shredded documents by the ton. In 2002, Arthur Andersen was convicted of obstruction of justice, and lost its accounting license. The corporation, which had tens of thousands of employees, was effectively put out of business.

Eisinger describes the demise of Arthur Andersen as a turning point. Many lawyers, particularly in the well-financed realm of white-collar criminal defense, regarded the case as a flagrant instance of government overreach: the problem with convicting a company was that it could have “collateral consequences” that would be borne by employees, shareholders, and other innocent parties. “The Andersen case ushered in an era of prosecutorial timidity,” Eisinger writes. “Andersen had to die so that all other big corporations might live.”

With plenty of encouragement from high-end lobbyists, a new orthodoxy soon took hold that some corporations were so colossal—and so instrumental to the national economy—that even filing criminal charges against them would be reckless. In 2013, Eric Holder, then the Attorney General, acknowledged that decades of deregulation and mergers had left the U.S. economy heavily consolidated. It was therefore “difficult to prosecute” the major banks, because indictments could “have a negative impact on the national economy, perhaps even the world economy.”

Prosecutors came to rely instead on a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture. From 2002 to 2016, the Department of Justice entered into more than four hundred of these arrangements. Having spent a trillion dollars to bail out the banks in 2008 and 2009, the federal government may have been loath to jeopardize the fortunes of those banks by prosecuting them just a few years later. (...)

Numerous explanations have been offered for the failure of the Obama Justice Department to hold the big banks accountable: corporate lobbying in Washington, appeals-court rulings that tightened the definitions of certain types of corporate crime, the redirecting of investigative resources after 9/11. But Eisinger homes in on a subtler factor: the professional psychology of élite federal prosecutors. “The Chickenshit Club” is about a specific vocational temperament. When James Comey took over as the U.S. Attorney for the Southern District of New York, in 2002, Eisinger tells us, he summoned his young prosecutors for a pep talk. For graduates of top law schools, a job as a federal prosecutor is a brass ring, and the Southern District of New York, which has jurisdiction over Wall Street, is the most selective office of them all. Addressing this ferociously competitive cohort, Comey asked, “Who here has never had an acquittal or a hung jury?” Several go-getters, proud of their unblemished records, raised their hands.

But Comey, with his trademark altar-boy probity, had a surprise for them. “You are members of what we like to call the Chickenshit Club,” he said.

Most people who go to law school are risk-averse types. With their unalloyed drive to excel, the élite young attorneys who ascend to the Southern District have a lifetime of good grades to show for it. Once they become prosecutors, they are invested with extraordinary powers. In a world of limited public resources and unlimited wrongdoing, prosecutors make decisions every day about who should be charged and tried, who should be allowed to plead, and who should be let go. This is the front line of criminal justice, and decisions are made unilaterally, with no review by a judge. Even in the American system of checks and balances, there are few fetters on a prosecutor’s discretion. A perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney. But, as Comey implied, it could also mean that you’re taking only those cases you’re sure you’ll win—the lawyerly equivalent of enrolling in a gut class for the easy A.

You might suppose that the glory of convicting a blue-chip C.E.O. would be irresistible. But taking such a case to trial entails serious risk. In contemporary corporations, the decision-making process is so diffuse that it can be difficult to establish criminal culpability beyond a reasonable doubt. In the United Brands case, Eli Black directly authorized the bribe, but these days the precise author of corporate wrongdoing is seldom so clear. Even after a provision in the Sarbanes-Oxley Act, of 2002, began requiring C.E.O.s and C.F.O.s to certify the accuracy of corporate financial reports, few executives were charged with violating the law, because the companies threw up a thicket of subcertifications to buffer accountability.

As Samuel Buell, who helped prosecute the Enron and Andersen cases and is now a law professor at Duke, points out in his recent book, “Capital Offenses: Business Crime and Punishment in America’s Corporate Age,” an executive’s claim that he believed he was following the rules often poses “a severe, even disabling, obstacle to prosecution.” That is doubly so in instances where the alleged crime is abstruse. Even the professionals who bought and sold the dodgy mortgage-backed instruments that led to the financial crisis often didn’t understand exactly how they worked. How do you explicate such transactions—and prove criminal intent—to a jury?

Even with an airtight case, going to trial is always a gamble. Lose a white-collar criminal trial and you become a symbol of prosecutorial overreach. You might even set back the cause of corporate accountability. Plus, you’ll have a ding on your record. Eisinger quotes one of Lanny Breuer’s deputies in Washington telling a prosecutor, “If you lose this case, Lanny will have egg on his face.” Such fears can deter the most ambitious and scrupulous of young attorneys.

The deferred-prosecution agreement, by contrast, is a sure thing. Companies will happily enter into such an agreement, and even pay an enormous fine, if it means avoiding prosecution. “That rewards laziness,” David Ogden, a Deputy Attorney General in the Obama Administration, tells Eisinger. “The department gets publicity, stats, and big money. But the enormous settlements may or may not reflect that they could actually prove the case.” When companies agree to pay fines for misconduct, the agreements they sign are often conspicuously stinting in details about what they did wrong. Many agreements acknowledge criminal conduct by the corporation but do not name a single executive or officer who was responsible. “The Justice Department argued that the large fines signaled just how tough it had been,” Eisinger writes. “But since these settlements lacked transparency, the public didn’t receive basic information about why the agreement had been reached, how the fine had been determined, what the scale of the wrongdoing was and which cases prosecutors never took up.” These pas de deux between prosecutors and corporate chieftains came to feel “stage-managed, rather than punitive.”

by Patrick Radden Keefe, New Yorker | Read more:
Image: Eiko Ojala


Ever since its discovery in mid-October as it passed by Earth already outbound from our solar system, the mysterious object dubbed ‘Oumuamua (Hawaiian for “first messenger”) has left scientists utterly perplexed. Zooming down almost perpendicularly inside Mercury’s orbit at tens of thousands of kilometers per hour—too fast for our star’s gravity to catch—‘Oumuamua appeared to have been dropped in on our solar system from some great interstellar height, picking up even more speed on a slingshot-like loop around the sun before soaring away for parts unknown. It is now already halfway to Jupiter, too far for a rendezvous mission and rapidly fading from the view of Earth’s most powerful telescopes.

Astronomers scrambling to glimpse the fading object have revealed additional oddities. ‘Oumuamua was never seen to sprout a comet-like tail after getting close to the sun, hinting it is not a relatively fresh bit of icy flotsam from the outskirts of a nearby star system. This plus its deep red coloration—which mirrors that of some cosmic-ray-bombarded objects in our solar system—suggested that ‘Oumuamua could be an asteroid from another star. Yet those same observations also indicate ‘Oumuamua might be shaped rather like a needle, up to 800 meters long and only 80 wide, spinning every seven hours and 20 minutes. That would mean it is like no asteroid ever seen before, instead resembling the collision-minimizing form favored in many designs for notional interstellar probes. What’s more, it is twirling at a rate that could tear a loosely-bound rubble pile apart. Whatever ‘Oumuamua is, it appears to be quite solid—likely composed of rock, or even metal—seemingly tailor-made to weather long journeys between stars. So far there are few if any wholly satisfactory explanations as to how such an extremely elongated solid object could naturally form, let alone endure the forces of a natural high-speed ejection from a star system—a process thought to involve a wrenching encounter with a giant planet.
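The claim that the spin could tear a loose rubble pile apart can be sanity-checked with a back-of-the-envelope comparison. The sketch below uses crude, explicitly assumed numbers—a uniform 800-by-80-meter cylinder, an assumed rubble-pile density of about 1,000 kg/m³, and self-gravity estimated as if the whole mass sat at the center—so the answer is an order of magnitude, nothing more:

```python
# Order-of-magnitude check: at the tips of a strengthless, elongated
# body spinning every 7 h 20 min, does the outward centrifugal
# acceleration exceed the body's own inward gravitational pull?
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
period = 7 * 3600 + 20 * 60   # spin period in seconds
length, radius = 800.0, 40.0  # assumed dimensions, meters
rho = 1000.0                  # assumed rubble-pile density, kg/m^3

mass = rho * math.pi * radius**2 * length  # uniform-cylinder mass
omega = 2 * math.pi / period               # angular speed, rad/s
tip = length / 2                           # tip's distance from spin axis

a_centrifugal = omega**2 * tip             # outward, at the tip
a_gravity = G * mass / tip**2              # crude inward self-gravity

print(a_centrifugal / a_gravity)           # order of 10: tips would fly off
```

Under these assumptions the ratio comes out on the order of ten, so loose material at the tips would indeed be flung away; some internal strength (solid rock or metal) is needed to hold the shape together.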

These bizarre characteristics have raised eyebrows among professional practitioners of SETI, the search for extraterrestrial intelligence, who use large radio telescopes to listen for interstellar radio transmissions from other cosmic civilizations. If ‘Oumuamua is in fact artificial, the reasoning goes, it might be transmitting or at least leaking radio waves.

So far limited observations of ‘Oumuamua, using facilities such as the SETI Institute’s Allen Telescope Array, have turned up nothing. But this Wednesday at 3 p.m. Eastern time, the Breakthrough Listen project will aim the West Virginia-based 100-meter Green Bank Telescope at ‘Oumuamua for 10 hours of observations in a wide range of radio frequencies, scanning the object across its entire rotation in search of any signals. Breakthrough Listen is part of billionaire Yuri Milner’s Breakthrough Initiatives program, a collection of lavishly funded efforts aiming to uncover evidence of life elsewhere in the universe. Other projects include Breakthrough Starshot, which intends to develop and launch interstellar probes, as well as Breakthrough Watch, which would use large telescopes to study exoplanets for signs of life. (...)

Avi Loeb, an astrophysicist and Breakthrough advisor at Harvard University who helped persuade Milner to pursue the observations, is similarly pessimistic about prospects for uncovering aliens. There are, he says, arguments against its artificial origins. For one thing, its estimated spin rate seems too low to create useful amounts of “artificial gravity” for anything onboard. Furthermore, ‘Oumuamua shows no sign of moving due to rocketry or other technology, instead following an orbit shaped by the gravitational force of the sun. Its speed relative to the solar system (about 20 kilometers per second) also seems rather slow for any interstellar probe, which presumably would cruise at higher speeds for faster trips between stars. But that pace aligns perfectly with those of typical nearby stars—suggesting ‘Oumuamua might be merely a piece of galactic “driftwood” washed up by celestial currents.

Then again, Loeb says, “perhaps the aliens have a mothership that travels fast and releases baby spacecraft that freely fall into planetary system on a reconnaissance mission. In such a case, we might be able to intercept a communication signal between the different spacecraft.”

by Lee Billings, Scientific American | Read more:
Image: ESO/M. Kornmesser

Sunday, December 10, 2017

Code Injection: A New Low for ISPs

Imagine you’re on the phone with your doctor, discussing a very sensitive and private matter that requires your full attention. Suddenly in the middle of a sentence, your mobile phone provider injects a recording saying you’ve used 90 percent of your minutes for the month and to press 1 to contact customer service, and repeats the message until you either hit 1 or hit 2 to cancel.

Or you’re on a call with a buddy, talking about your favorite sports team. Suddenly you get several text messages with “special offers” from companies that sell jerseys and other sporting goods.

Unconscionable, right? Yet both scenarios play out on the Internet, in various degrees of insidiousness.

The first example above happens to an unfortunately large number of U.S. Internet users on a daily basis. Comcast and other ISPs “experimenting” with data caps inject JavaScript code into their customers’ data streams in order to display overlays on Web pages that inform them of data cap thresholds. They’ll even display notices that your cable modem may be eligible for replacement. And you can't opt out.

Think about it for a second: Your cable provider is monitoring your traffic and injecting its own code wherever it likes. This is not only obtrusive, but can cause significant problems with normal Web application function. It’s abhorrent on its face, but that hasn’t stopped companies from developing and deploying code to do it.
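Mechanically, the trick is simple: a middlebox in the ISP’s network rewrites unencrypted HTML responses on their way to the customer. The sketch below is hypothetical (the function name and payload are invented for illustration) and shows only the core string rewrite; it also hints at why HTTPS, which an in-path box cannot alter, blocks the practice:

```python
# Hypothetical sketch of an in-path rewrite: splice a script tag into an
# HTML response just before </body>. Only plain-HTTP traffic can be
# modified this way; TLS-protected pages pass through unreadable.

INJECTED = '<script>/* e.g. data-cap overlay */showDataCapNotice();</script>'

def inject(html: str, payload: str = INJECTED) -> str:
    """Insert `payload` just before the closing body tag, if one exists."""
    marker = "</body>"
    idx = html.lower().rfind(marker)    # tolerate </BODY> etc.
    if idx == -1:                       # no closing tag: append at the end
        return html + payload
    return html[:idx] + payload + html[idx:]

page = "<html><body><p>Hello</p></body></html>"
print(inject(page))
```

The injected script then runs with the same privileges as the page’s own code, which is exactly why this can break otherwise normal Web applications.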

The second example is essentially how Google makes its money. You search for something (say, “Red Sox”) and you’ll see search results accompanied by ads for Red Sox tickets and merchandise. Web trackers do the same, which is why, if you searched for widgets on Amazon, you’ll see ads for widgets on completely unrelated websites. Of course, the difference in these examples is that you were purposefully seeking out these items, not merely discussing them with another person. This is an important distinction. (Remember: Gmail notes what you’re talking about in your email and produces ads based on that content; then again, you’re using the Gmail service for free.)

Either example is bad enough, but if we combine the two, we have a monster. We have an ISP that can and does inject its own code into data streams from third-party websites to deliver messages to its users. These could be the aforementioned data cap notifications or ads that hover above the website or even interstitial ads that cover half the page and frustrate the user, but appear to be served by the website that was visited, not the service provider. Of course, the ISP actively snoops on its users’ browsing to display those ads.

by Paul Venezia, InfoWorld | Read more:
Image: Thinkstock
[ed. This article was written in 2015 and Comcast is still at it today (see: Are you aware? Comcast is injecting 400+ lines of JavaScript into web pages).]

The Downloadable Brain

We're Closer Than We Think to Immortality

Two millennia ago, a young carpenter appeared in what is now Israel and, in addition to suggesting some guidelines on personal behavior, offered the gift of eternal life to those who believed in him. This went over well, since the prevailing religion of his people was noticeably weak in that department, lacking clear rewards for the virtuous. His apostle presented the deal in no uncertain terms: “He that heareth my word,” said John, “and believeth on him that sent me, hath everlasting life.” So far nobody has come back to testify to the veracity of this offer on the next plane of existence, but no one has disproved it, either. So that works for some people. It still doesn’t get to the nub of the matter, though. You still have to die in that scenario.

Some have searched for magic poultices, creams and liquids. In the 16th century, it was Ponce de Leon who reportedly searched Florida for waters that would stave off his rapidly approaching old age. Today, people follow in his footsteps, settling down in Boca, Hollywood and Jupiter Beach to achieve the same objective, with much the same lack of results, and in Beverly Hills, gorgons with crimped, distorted mouths and desiccated eyesacks roam Rodeo Drive, tweaking and slicing into themselves as they worship at the shrine of perpetual youth. Some even look okay at a very great distance.

It’s discouraging. Even if one buys into the notion of reincarnation, you are still only preserving the spirit; consciousness doesn’t make the trip from one life to the next. Plus, there is the possibility that one will return in the next life as a stoat, or a guy whose karma involves the weekly cleaning of portable toilets at construction sites. Not the true vision of eternal life most of us would like, which involves sticking around without ever shuffling off this mortal coil at all, seeing the world change and evolve over generations.

No, for true advancement towards humanity’s most elusive goal, we must turn to the religion that we tend to like now: Technology. And the good news is that in this area we may actually be on the brink of success. For today, enormous gains are being made in the branch of computer science that is working to deliver eternal life to those who can afford it. Those in the hunt are far from snake-oil salesmen or alt-right marketers of nutty fluids. These are distinguished scientists making the prognostications. Nick Bostrom of Oxford University described the concept: “If emulation of particular brains is possible and affordable,” he wrote in a 2008 paper, “and if concerns about individual identity can be met, such emulation would enable back-up copies and ‘digital immortality.’”

Let’s take a moment to consider why this whole idea is not just futurist bushwah. The human brain, while based on an organic platform, is essentially a vast electronic switching station. If such is the case—or even fundamentally the case, with some, as it were, gray matter on the edges—why not work toward a method of emulating the brain-based persona of the individual in its entirety, the way you would make a disc image of your laptop, and then, once its operations and digital activities are mirrored in this manner, simply back it up? Once it’s backed up, it can be stored in a suitable, safe digital warehouse and then, when that receptacle has been created, downloaded into a young, vital living entity and voilà. Old mind. Young body. Just what you always wanted. A hundred years later, you can do it again.

There is already significant scientific literature on the issue of personality transfer. Nobody writing about it doubts it can be done. Christof Koch, Chief Scientific Officer of the Allen Institute for Brain Science in Seattle, and Giulio Tononi, who holds a Distinguished Chair in Consciousness Science at the University of Wisconsin, offered this view in the circular of the Institute for Ethics and Emerging Technologies: “Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry and biology; it does not arise from some magical or otherworldly quality.” Once one assumes this sort of materialist view of the mind, it’s not difficult to imagine moving the contents of this mechanical entity from one housing to another.

Now, it is true that when it comes to performing a digital upload of an entire individual consciousness—its knowledge, earned experience, memories going back to the womb—the tech is in its infancy. But gains are being made. Thoughts and simple commands are now being transmitted over short distances by individuals with gizmos attached to their heads, moving little objects around at a distance by the power of their thoughts. It’s not much. But it shows that brain activity can be digitized and transmitted.

But let’s face it. We’re not going to go around with wires sticking out of our heads. The good news is that this really shouldn’t be necessary, not the way things appear to be going. Within just a very few years, the transporting of the electronic entity that is the human brain and all its contents will be vastly advanced—indeed, made possible—by a tremendous development in digital communications: that is, the widespread implantation of the cell phone and all its many wonderful functions right into your cranium.

Do you doubt it? I don’t. Go to any Starbucks, any airport, hotel lobby, public space, and you will see the entire strolling pageant of humanity with their noses firmly attached to a screen. Couples in restaurants. Kids hanging out at home. Staring into the little device. It’s not sustainable. It’s only a matter of time until a new way of inputting that data will be made available to those who want it and can afford it—driven by that ultimate arbiter of product development—consumer demand.

Tell the truth. Isn’t it a pain to be constantly carrying that thing around? How many times a week do you lose it? Wouldn’t you like to be able to employ its many functions simply by touching your head, or maybe even just thinking about something? How would it be to be in touch with the Cloud 24/7? I propose the mastoid bone behind the ear. It’s unoccupied at the moment, totally unmonetized. It’s near the ocular and auditory systems, not to mention the wetware of the brain. It won’t be messing with your spine, which is complicated enough. The mastoid bone is perfect. And won’t it be nice to have your hands free?

by Stanley Bing, LitHub |  Read more:

Saturday, December 9, 2017

Nick Knight, Altered States

What If Everything You Know About the Suburbs is Wrong?

With 52 essays from 74 authors, Infinite Suburbia’s 732 pages comprehensively analyze the suburbs from the perspectives of architecture, design, landscape, planning, history, demographics, social justice, familial trends, policy, energy, mobility, health, environment, economics, and applied and future technologies. Organized by theme in an index that best resembles a spider’s web, the book is meant to be read in a nonlinear fashion, reminiscent of a choose-your-own-adventure novel. The editors of The Architect’s Newspaper (AN) spoke with the book’s editors, Alan M. Berger and Joel Kotkin, about the future of the suburbs. (...)

The Architect’s Newspaper: What is suburbia and how do you define it for this book?

Joel Kotkin and Alan Berger: Suburbia is generally a lower-density area outside the city core. In our approach, we look for such things as predominance of single-family housing, dependence on automobiles (particularly for non-work trips), age of housing stock, and distance from the central core. This is about 80 percent of U.S. metro areas; some cities, like Phoenix and San Antonio, are predominantly suburban even within their city boundaries. Within the book we have no fewer than five leading authors who define suburbia using different quantitative methods that are arguably more accurate than the U.S. Census at capturing the activities defining suburbia.

What are some of the myths that surround the architecture and design community’s perception of the suburbs?

Berger: Globally, the vast majority of people are moving to cities not to inhabit their centers, but to suburbanize their peripheries. I’m sure we can all agree that there are many suburban (and urban) models that are wasteful, unsustainable, and inequitable. However, despite having deep historical roots in conceiving suburban environments, the planning and design professions overwhelmingly vilify suburbia and seem uninterested in significantly improving it. Robert Bruegmann’s essay in the book reminds us that those who consider themselves the intellectual elite have a long history of anti-suburban crusades, and they have always been proven wrong. Our book, Infinite Suburbia, is built for an alternative discourse that can open paths to improvement and design agency, rather than condemning suburbia altogether. Our goal? To construct a balanced alternative to the architecture and urban planning orthodoxy of “density fixes all,” and in doing so ask: “Can suburbia become a more sustainable model for rethinking the entire urban enterprise, as a vital fabric of ‘complete urbanization’?”

What were some of the most surprising or counterintuitive things you found about the suburbs when compiling these essays?

Berger: One of the consistent themes in the book, and what gets me most excited as a landscape scholar, is the virtue of low density and the ecological potential of the suburban landscape. Environmentally, suburbs will save cities from themselves. Sarah Jack Hinners’s research in the book really surprised me. It suggests that suburban ecosystems, in general, are more heterogeneous and dynamic over space and time than natural ecosystems. Suburbs, she says, are the loci of novelty and innovation from an ecological and evolutionary perspective because they are a relatively new type of landscape and their ecology is not fixed or static.

Kotkin: Two trends that may seem counterintuitive to urbanists are the rapid diversification of suburbs, which now hold most of the nation’s immigrants and minorities, and the fact that suburbs are more egalitarian and less divided by class than core cities. (...)

How do you see suburbia changing in the next few decades?

Kotkin: Suburbs will change in many ways. First, they will continue to spread in those regions that have not employed strict growth controls. Denser development seems inevitable—such as The Domain [development] in north Austin—although [the suburbs] will remain largely surrounded by the single-family homes and townhouses most people prefer. They will become even more attractive to Millennials, who will demand fewer golf courses and conventional malls, and more hiking/biking trails and open, common landscapes. Suburbs will become more independent from the traditional city centers except for some amenities and central government services.

Berger: Autonomous driving will dramatically change how we live, particularly in suburbia, where the dominant form of mobility is cars. Once there is widespread adoption of electrified autonomous cars, dramatic sustainability dividends will flourish in the suburbs of the future. This may also take the economic strain off metro mass transit systems, which can focus on service improvements within the core areas rather than stretching outward. Shared autonomous vehicles will become the preferred form of mass transit in areas not serviced by traditional buses or rail.

by The Editors, The Architect's Newspaper |  Read more:
Image: Matthew Niederhauser and John Fitzgerald

LL Cool J

[ed. Hmm... I don't think so. Cali is burning.]

So You Married Your Flirty Boss

“My career, at the time, was in his hands,” Allison Benedikt wrote at Slate this week, about the beginning of her relationship with John Cook, her husband of 14 years. They were colleagues at a magazine when they first kissed, and he was her senior. That kiss took place “on the steps of the West 4th subway station,” Benedikt writes, and Cook did it “without first getting [her] consent.” The piece is an intervention into the conversation on office sexual harassment, with Benedikt fearing “the consequences of overcorrection” on this issue. She does not think that “the initial touch, the scooting closer in the booth, the drunken sloppy first kiss, the occasional bad call or failed pass” are necessarily harassment, and has the happy marriage to prove it. Her piece was titled “The Upside of Office Flirtation? I’m Living It.”

Benedikt’s essay was widely shared on social media, praised for its “nuanced” approach to the messy nature of human relationships. Only a day later, however, we were reminded that there is a stark line between office flirtation and abuse. On Wednesday Lorin Stein, who himself is married to a former employee, announced that he is resigning from the editorship of The Paris Review amid an investigation into his behavior towards women in his orbit. Stein’s predation has long been a whisper-network item in literary New York. In a letter of resignation to the board of The Paris Review, Stein apologized for the way he has “blurred the personal and the professional in ways that were ... disrespectful of my colleagues and our contributors.” He said that he has come to realize that his behavior was “hurtful, degrading, and infuriating.”

Benedikt has my sympathy. She is in the tricky position of figuring out how the long-past actions of a man she loves fit into the new political landscape. If she is absolutely sure that she is a feminist, and if she is absolutely sure that she is against the harming of vulnerable people, then she is left with difficult questions: If she was merrily compliant with behaviors that are not acceptable in the workplace today, does that make her complicit with the culture of harassment? How can she defend her husband—and by extension herself—while maintaining that they were right then as well as now?

Ultimately Benedikt suggests that a man should not be condemned for the things that her husband did. But Cook did do something wrong. You shouldn’t kiss a junior colleague without asking. You probably shouldn’t kiss anybody without asking, as a rule of thumb to remember when you’re drunk. Consent is such an easy premise, and Benedikt’s reluctance to acknowledge it seems generational. Fourteen years ago affirmative consent was not such a widespread idea, and perhaps the simple words “Can I kiss you?” didn’t come so easily to a man’s lips. But the world has changed, and affirmative consent is now the standard. All college kids know this. Just ask!

It is not unreasonable to demand that men in workplaces act as if the year were 2017 and not 2003. At the same time, nobody is retrospectively prosecuting a man for acting as if it were 2003 in 2003. Nobody is hauling John Cook into the sex-crimes dock or putting Benedikt on trial for crimes against feminism. Nobody is suggesting that she thinks Stein’s behavior is okay, or that the beginning of a loving marriage is the same thing as sexual harassment. But in writing her essay, in attempting to draw some universal principles from her specific experiences, Benedikt makes bad arguments with real-world consequences—of the kind that have kept the long-swirling rumors from Stein’s door until now.

I went to university late. I was 20 years old, and jaded from a bad relationship and a bad year at art school. Soon after starting my undergraduate degree at Oxford, I also started a relationship with a man in his thirties whose job it was to teach me. He did not coerce me; we pursued each other. I was very sad at the time and I could tell that he was too. He had moved there from another country and was isolated in the old boys’ club of Oxford. We were lonely and troubled people, and we made each other very happy. Our relationship continued for three years, until I moved to New York to work on my Ph.D. We went to weddings together. I ran up wooden staircases in buildings constructed hundreds of years ago to reach him. I slunk through shadows and took elusive cobbled paths through town to find him.

There was a lot of opportunity for coercion, but that didn’t happen: Once we started sleeping together, I made sure that my boyfriend never graded another paper by me again. I wanted to have my cake and eat it too, to sleep with a professor and keep my intellectual principles intact. I kept the relationship secret from almost all my friends. The whole thing was extremely fun, we traveled together, I loved him a lot. We didn’t get married or have kids, but I don’t regret it at all.

And I still think he did something wrong.

Professors should not have sex with their undergraduate students, even those who are older and more hardheaded and determined than the others. Academics abuse those junior to them all the time, and rely on a combination of tenure and shame to keep them out of trouble. This has also happened to me. I know that those two experiences—of a relationship and of an assault—are totally different. But they were both facilitated by the same permissive culture at universities. The first experience was good, the second was mindbendingly awful. I would have forgone the first to avoid the second.

The flaw in Benedikt’s argument is that it is so narrowly focused. It’s as if she thinks that the #MeToo campaign wants to take her marriage away. If Cook hadn’t kissed her on the steps of West 4th Street station in the light of the Duane Reade, she implies, she wouldn’t be married with those beautiful children. And then what would her life have been like? This is who I am, she seems to say.

When I say that professors shouldn’t sleep with their students, but that I don’t regret the time that my professor and I slept together, I am not contradicting myself. None of us can go back in time to change the past, nor do I have sufficient insight to know what life would have been like if I had never had that relationship. But I do know what I believe is right, right now. Justifying my own past is less important than protecting the vulnerable.

by Josephine Livingstone, New Republic |  Read more:
Image: Hulton Archive/Getty Images
[ed. A little too nuanced for me (it wouldn't be the first time I've failed to comprehend a woman's perspective). Do read the Benedikt article referenced at the top. See also: Sometimes a Stupid Joke is Just a Stupid Joke]