Thursday, December 14, 2017




That Giving Feeling

The central question that private bankers ask their clients is: “What does your money mean to you?” It’s a fundamental moral issue at all levels of wealth. Revealing answers range from the odious (controlling the lives of your family members) to the visionary (saving the world).

Eventually, bankers say, the newly wealthy enjoy the luxury lifestyle for about five years before they start looking for some purpose in their lives.

New and old Asian wealth alike have confused and conflated charity with philanthropy, and with the need to accomplish more with their vast assets. The best analogy is this: charity is when you hand money to the Salvation Army in the street, and it decides how to distribute it. Philanthropy is when you stand in the street and decide for yourself whom to hand money to.

Living with the obligations and responsibilities of wealth isn’t easy. Big money creates its own gravity, forcing its owners’ lives into orbit around it. Gift giving as a form of charity is certainly commendable and flexible, allowing donors to shift the management of charity to established organisations.

But this concept is becoming inadequate, even corrupted, considering the super wealth being created by technology success. And charities are also becoming a source of potential abuse. (...)

Here’s a twist on the spirit of giving. In his recent Facebook post, Mark Zuckerberg said he intended to divest between 35 million and 75 million Facebook shares in the next 18 months to fund his charity. He currently holds 53 per cent of the voting stock. If he sold 35 million shares, his voting stake would be reduced to 50.6 per cent.

But, according to the Financial Times, if he sold 75 million shares, he would be dependent on the votes of co-founder Dustin Moskovitz to exercise control over a majority of votes. So Zuckerberg’s advisers cooked up a stock reclassification that would have created a third, non-voting share class and solved this problem. Objections and the threat of a lawsuit from investors stopped the plan.

Once the US$12 billion of proceeds from the stock sale is transferred to his foundation, all investment income is tax free. He needs to donate only 5 per cent of the principal per year to charity. Most foundations and family investment offices of that magnitude can make investment returns of more than 5 per cent per annum. So the principal in the foundation never actually needs to be disbursed for charity.

For many foundations, the present value of the tax subsidy to the tycoon personally far exceeds the net disbursement of the principal from the foundation on charity.

New technology wealth seems fixated on funding scalable charity projects with the same model as their companies. Or ones that benefit their companies.

Unfortunately, many poverty alleviation projects can’t be scaled, such as finding clean water for poor villages in Africa. It would be more practical and noble if Zuckerberg would simply give away the US$12 billion, rather than playing games with tax planning.

by Peter Guy, South China Morning Post |  Read more:
Image: uncredited
[ed. See also: 2017 Was Bad for Facebook. 2018 Will Be Worse.]

Wednesday, December 13, 2017

The Future is Here – AlphaZero

Imagine this: you tell a computer system how the pieces move — nothing more. Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured the game out to a level that convincingly beats the strongest programs in the world! DeepMind, the company that recently created the strongest Go program in the world, turned its attention to chess and came up with this spectacular result.

DeepMind and AlphaZero

About three years ago, DeepMind, a company owned by Google that specializes in AI development, turned its attention to the ancient game of Go. Go had been the one game that had eluded all computer efforts to reach world class; even at the time of the announcement, it was deemed a goal that would not be attained for another decade. That was how large the gap was. When a public match was organized against the legendary player Lee Sedol, a South Korean whose track record put him in the ranks of the greatest ever, everyone expected an interesting spectacle but a certain win for the human. The question wasn’t even whether the program AlphaGo would win or lose, but how much closer it had come to the Holy Grail. The result was a crushing 4-1 victory, and a revolution in the Go world. In spite of a ton of second-guessing by the elite, who could not accept the loss, they eventually came to terms with the reality of AlphaGo: a machine that was among the very best, albeit not unbeatable. It had lost a game, after all.

The saga did not end there. A year later an updated version of AlphaGo was pitted against the world number one of Go, Ke Jie, a young Chinese player whose genius is not without parallels to Magnus Carlsen’s in chess. At just 16 he won his first world title, and by 17 he was the clear world number one. That had been in 2015; now, at 19, he was even stronger. The new match was held in China itself, and even Ke Jie knew he was most likely a serious underdog. There were no illusions anymore. He played superbly but still lost by a perfect 3-0, a testimony to the amazing capabilities of the new AI.

Many chess players and pundits had wondered how it would do in the noble game of chess. There were serious doubts about just how successful it might be. Go is a huge, long game played on a 19x19 grid, in which all pieces are the same and none of them moves. Calculating ahead as in chess is an exercise in futility, so pattern recognition is king. Chess is very different. There is no questioning the value of knowledge and pattern recognition in chess, but the royal game is supremely tactical, and a lot of knowledge can be compensated for by simply outcalculating the opponent. This has been true not only in computer chess, but among humans as well.

However, there were some very startling results in the last few months that need to be understood. DeepMind’s interest in Go did not end with that match against the world number one. You might ask what more there was to do after that. Beat him 20-0 and not just 3-0? No, of course not. Rather, the super Go program became an internal litmus test of sorts. Its standard was unquestioned and quantified, so if one wanted to test how good a new self-learning AI was, throwing it at Go and seeing how it compared to AlphaGo was a ready way to measure it.

A new AI was created, called AlphaZero, with several striking differences. The first was that it was not shown tens of thousands of master games of Go to learn from; it was shown none. Not a single one. It was given only the rules, with no other information. The result was a shock. Within just three days, its completely self-taught play was stronger than the version that had beaten Lee Sedol, a result the previous AI had needed over a year to achieve. Within three weeks it was beating the strongest AlphaGo, the one that had defeated Ke Jie. What is more: while the Lee Sedol version had used 48 highly specialized processors to create the program, this new version used only four!

Approaching chess might still seem an odd choice. After all, although DeepMind had shown near-revolutionary breakthroughs in Go, that was a game that had yet to be cracked. Chess already had its Deep Blue 20 years ago, and today even a good smartphone can beat the world number one. What was there to prove, exactly?

It needs to be remembered that Demis Hassabis, the founder of DeepMind, has a profound chess connection of his own. He had been a chess prodigy in his own right; at age 13 he was the second-highest-rated player under 14 in the world, behind only Judit Polgar. He eventually left the chess track to pursue other things, like founding his own PC video game company at age 17, but the link is there. A burning question remained on everyone’s mind: just how well would AlphaZero do if it were focused on chess? Would it just be very smart, but get smashed by the number-crunching engines of today, where a single ply is often the difference between winning and losing? Or would something special come of it?

A new paradigm

On December 5 the DeepMind group published a new paper on the arXiv preprint server (hosted by Cornell University) called "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm", and the results were nothing short of staggering. AlphaZero had done more than just master the game; it had attained new heights in ways considered inconceivable. The proof is in the pudding, of course, so before going into some of the fascinating nitty-gritty details, let’s cut to the chase. It played a match against the latest and greatest version of Stockfish and won by an incredible score of 64:36; not only that, AlphaZero suffered zero losses (28 wins and 72 draws)!

Stockfish needs no introduction to ChessBase readers, but it’s worth noting that it was searching nearly 900 times more positions per second! Indeed, AlphaZero was calculating roughly 80 thousand positions per second, while Stockfish, running on a PC with 64 threads (likely a 32-core machine), was examining 70 million positions per second. To understand how big a deficit that is: if a version of Stockfish ran 900 times slower, it would search roughly 8 moves less deep. How is this possible?
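As a sanity check, the arithmetic behind those figures fits in a few lines of Python. (The effective branching factor of roughly 2.3 per ply for an alpha-beta engine is our own assumption here, not a number from the DeepMind paper.)

```python
import math

# Figures from the article.
stockfish_nps = 70_000_000    # positions per second
alphazero_nps = 80_000        # positions per second

speed_ratio = stockfish_nps / alphazero_nps       # ~875, "nearly 900 times"

# An alpha-beta engine gains about one ply of depth for every factor of b
# in speed, so a ~900x slowdown costs roughly log_b(900) plies of depth.
b = 2.3   # assumed effective branching factor per ply
depth_deficit = math.log(speed_ratio) / math.log(b)

print(f"{speed_ratio:.0f}x faster, worth ~{depth_deficit:.1f} plies")
```

With these assumptions the 900x speed gap does indeed come out to roughly 8 plies of search depth, matching the article's estimate.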

The paper "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" at Cornell University.

The paper explains:
“AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more “human-like” approach to search, as originally proposed by Shannon. Figure 2 shows the scalability of each player with respect to thinking time, measured on an Elo scale, relative to Stockfish or Elmo with 40ms thinking time. AlphaZero’s MCTS scaled more effectively with thinking time than either Stockfish or Elmo, calling into question the widely held belief that alpha-beta search is inherently superior in these domains.”
In other words, instead of the hybrid brute-force approach that has been the core of chess engines to date, it went in a completely different direction, opting for an extremely selective search that emulates how humans think. A top player may outcalculate a weaker player in both consistency and depth, but that still remains a joke compared to what even the weakest computer programs are doing. It is the human’s sheer knowledge, and the ability to filter out so many moves, that allows them to reach the standard they do. Remember that although Garry Kasparov lost to Deep Blue, it is not at all clear that the machine was genuinely stronger than him even then, despite its reaching speeds of 200 million positions per second. If AlphaZero is really able to use its understanding not only to compensate for searching 900 times fewer positions, but to surpass engines that do, then we are looking at a major paradigm shift.

How does it play?

Since AlphaZero did not benefit from any chess knowledge, meaning no games or opening theory, it had to discover opening theory on its own. And do recall that this is the result of only 24 hours of self-learning. The team produced fascinating graphs showing the openings it discovered, as well as the ones it gradually rejected as it grew stronger!

by Albert Silver, Chess News |  Read more:
Image: uncredited

Tuesday, December 12, 2017

The Transhumanist FAQ

1.1 What is transhumanism?

Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows: 

(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities. 

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies. Transhumanism can be viewed as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better by promoting rational thinking, freedom, tolerance, democracy, and concern for our fellow human beings. Transhumanists agree with this but also emphasize what we have the potential to become. Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as “human”. 

It is not our human shape or the details of our current human biology that define what is valuable about us, but rather our aspirations and ideals, our experiences, and the kinds of lives we lead. To a transhumanist, progress occurs when more people become more able to shape themselves, their lives, and the ways they relate to others, in accordance with their own deepest values. Transhumanists place a high value on autonomy: the ability and right of individuals to plan and choose their own lives. Some people may of course, for any number of reasons, choose to forgo the opportunity to use technology to improve themselves. Transhumanists seek to create a world in which autonomous individuals may choose to remain unenhanced or choose to be enhanced and in which these choices will be respected. 

Through the accelerating pace of technological development and scientific understanding, we are entering a whole new stage in the history of the human species. In the relatively near future, we may face the prospect of real artificial intelligence. New kinds of cognitive tools will be built that combine artificial intelligence with interface technology. Molecular nanotechnology has the potential to manufacture abundant resources for everybody and to give us control over the biochemical processes in our bodies, enabling us to eliminate disease and unwanted aging. Technologies such as brain-computer interfaces and neuropharmacology could amplify human intelligence, increase emotional well-being, improve our capacity for steady commitment to life projects or a loved one, and even multiply the range and richness of possible emotions. On the dark side of the spectrum, transhumanists recognize that some of these coming technologies could potentially cause great harm to human life; even the survival of our species could be at risk. Seeking to understand the dangers and working to prevent disasters is an essential part of the transhumanist agenda. 

Transhumanism is entering the mainstream culture today, as increasing numbers of scientists, scientifically literate philosophers, and social thinkers are beginning to take seriously the range of possibilities that transhumanism encompasses. A rapidly expanding family of transhumanist groups, differing somewhat in flavor and focus, and a plethora of discussion groups in many countries around the world, are gathered under the umbrella of the World Transhumanist Association, a non-profit democratic membership organization. 

1.2 What is a posthuman?

It is sometimes useful to talk about possible future beings whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards. The standard word for such beings is “posthuman”. (Care must be taken to avoid misinterpretation. “Posthuman” does not denote just anything that happens to come after the human era, nor does it have anything to do with the “posthumous”. In particular, it does not imply that there are no humans anymore.) 

Many transhumanists wish to follow life paths which would, sooner or later, require growing into posthuman persons: they yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states; to be able to avoid feeling tired, hateful, or irritated about petty things; to have an increased capacity for pleasure, love, artistic appreciation, and serenity; to experience novel states of consciousness that current human brains cannot access. It seems likely that the simple fact of living an indefinitely long, healthy, active life would take anyone to posthumanity if they went on accumulating memories, skills, and intelligence. 

Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see “What is uploading?”], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques. 

Some authors write as though simply by changing our self-conception, we have become or could become posthuman. This is a confusion or corruption of the original meaning of the term. The changes required to make us posthuman are too profound to be achievable by merely altering some aspect of psychological theory or the way we think about ourselves. Radical technological modifications to our brains and bodies are needed. It is difficult for us to imagine what it would be like to be a posthuman person. Posthumans may have experiences and concerns that we cannot fathom, thoughts that cannot fit into the three-pound lumps of neural tissue that we use for thinking. Some posthumans may find it advantageous to jettison their bodies altogether and live as information patterns on vast super-fast computer networks. Their minds may be not only more powerful than ours but may also employ different cognitive architectures or include new sensory modalities that enable greater participation in their virtual reality settings. Posthuman minds might be able to share memories and experiences directly, greatly increasing the efficiency, quality, and modes in which posthumans could communicate with each other. The boundaries between posthuman minds may not be as sharply defined as those between humans. 

Posthumans might shape themselves and their environment in so many new and profound ways that speculations about the detailed features of posthumans and the posthuman world are likely to fail.

by Nick Bostrom, Oxford University |  Read more: (pdf)
[ed. Repost]

Naked 9 – 4
blood moon


For the Good of Society - Delete Your Map App

I live on an obnoxiously quaint block in South Berkeley, California, lined with trees and two-story houses. There’s a constant stream of sidewalk joggers before and after work, and plenty of (good) dogs in the yards. Trick-or-treaters from distant regions of the East Bay invade on Halloween.

Once a week, the serenity is interrupted by the sound of a horrific car crash. Sometimes, it’s a tire screech followed by the faint din of metal on metal. Other times, a boom stirs the neighbors outside to gawk. It’s always at the intersection of Hillegass, my block, and Ashby, one of the city’s thoroughfares. It generally happens around rush hour, when the street is clogged with cars.

It wasn’t always this way. In 2001, the city designated the street as Berkeley’s first “bicycle boulevard,” presumably due to some combination of it being relatively free of traffic and its offer of a direct route from the UC Berkeley campus down into Oakland. But since that designation, another group has discovered the exploit. Here, for the hell of it, are other events that have occurred since 2001:

2005: Google Maps is launched.
2006: Waze is launched.
2009: Uber is founded.
2012: Lyft is founded.

“The phenomenon you’re experiencing is happening all over the U.S.,” says Alexandre Bayen, director of transportation studies at UC Berkeley.

Pull up a simple Google search for “neighborhood” and “Waze,” and you’re bombarded with local news stories about similar once-calm side streets that now host rush-hour jams and late-night speed demons. It’s not only annoying as hell, it’s a scenario ripe for accidents: among the top causes of accidents are driver distraction (say, by looking at an app), unfamiliarity with the street (say, because an app took you down a new side street), and an increase in overall traffic.

“The root cause is the use of routing apps,” says Bayen, “but over the last two to three years, there’s the second layer of ride-share apps.” (...)

All that extra traffic down previously empty streets has created an odd situation in which cities are constantly playing defense against the algorithms.

“Typically, the city or county, depending on their laws, doesn’t have a way to fight this,” says Bayen, “other than by doing infrastructure upgrades.”

Fremont, California, has lobbed some of the harshest resistance, instituting rush-hour restrictions, and adding stop signs and traffic lights at points of heavy congestion. San Francisco is considering marking designated areas where people can be picked up or dropped off by ride-shares (which, hmm, seems familiar). Los Angeles has tinkered with speed bumps and changing two-way streets into one-ways. (Berkeley has finally decided to play defense on my block by installing a warning system that will slow cars at the crash-laden intersection; it will be funded by taxpayers.) (...)

Perhaps you see the problem. If cities thwart map apps and ride-share services through infrastructure changes intended to slow traffic down, the effect is to slow traffic down. So the algorithm may tell drivers to go down another side street, and the residents who’ve been griping to the mayor may be pleased, but traffic across the city as a whole has gotten worse, making everyone’s travel longer than before. “It’s nuts,” says Bayen, “but this is the reality of urban planning.”

Bayen points out that this is sort of a gigantic version of the prisoner’s dilemma. “If everybody’s doing the selfish thing, it’s bad for society,” says Bayen. “That’s what’s happening here.” Even though the app makes the route quicker for the user, that’s only in relation to other drivers not using the app, not to their previous drives. Now, because everyone is using the app, everyone’s drive-times are longer compared to the past. “These algorithms are not meant to improve traffic, they’re meant to steer motorists to their fastest path,” he says. “They will give hundreds of people the shortest paths, but they won’t compute for the consequences of those shortest paths.”
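Bayen's point can be illustrated with the textbook Braess's-paradox numbers (a hypothetical network, not Berkeley data): when a routing app reveals a "free" shortcut, the selfish equilibrium can leave every single driver worse off than before the shortcut was known.

```python
# Textbook Braess's-paradox network (illustrative numbers only).
# 4000 drivers cross town; each route is a congestible leg plus a fixed leg.
DRIVERS = 4000
FIXED = 45                      # minutes on a wide road that never congests

def congestible(n: int) -> float:
    """Minutes on a narrow road whose delay grows with its load."""
    return n / 100

# Before routing apps: two symmetric routes, drivers split evenly.
per_route = DRIVERS // 2
before = congestible(per_route) + FIXED            # 20 + 45 = 65 minutes each

# Apps reveal a "free" cut-through joining the two congestible legs.
# Each congestible leg costs at most 40 minutes even fully loaded, so the
# shortcut route always beats a 45-minute fixed leg and everyone switches.
after = congestible(DRIVERS) + 0 + congestible(DRIVERS)   # 40 + 0 + 40 = 80

# Individually rational, collectively worse: every commute grows by
# 15 minutes, exactly the prisoner's-dilemma structure Bayen describes.
```

In this toy network, a Fremont-style intervention amounts to deleting the cut-through edge, which restores the 65-minute equilibrium for everyone.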

by Rick Paulas, Select/All | Read more:
Image: Waze

How Email Open Tracking Quietly Took Over the Web

"I just came across this email," began the message, a long overdue reply. But I knew the sender was lying. He’d opened my email nearly six months ago. On a Mac. In Palo Alto. At night.

I knew this because I was running the email tracking service Streak, which notified me as soon as my message had been opened. It told me where, when, and on what kind of device it was read. With Streak enabled, I felt like an inside trader whenever I glanced at my inbox, privy to details that gave me maybe a little too much information. And I certainly wasn’t alone.

There are some 269 billion emails sent and received daily. That’s roughly 35 emails for every person on the planet, every day. Over 40 percent of those emails are tracked, according to a study published last June by OMC, an “email intelligence” company that also builds anti-tracking tools.

The tech is pretty simple. Tracking clients embed a line of code in the body of an email—usually in a 1x1 pixel image, so tiny it's invisible, but also in elements like hyperlinks and custom fonts. When a recipient opens the email, the tracking client registers that the pixel has been downloaded, along with where and on what device. Newsletter services, marketers, and advertisers have used the technique for years to measure their open rates; major tech companies like Facebook and Twitter followed suit in their ongoing quest to profile and predict our behavior online.
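As a rough sketch of the mechanism (the function, host, and URL scheme here are all hypothetical; real tracking services differ in detail but not in principle), embedding a pixel amounts to appending a uniquely identified invisible image to the HTML body:

```python
import uuid

def add_tracking_pixel(html_body: str, tracker_host: str) -> tuple[str, str]:
    """Append an invisible 1x1 image with a per-message ID to an HTML email."""
    message_id = uuid.uuid4().hex
    pixel = (f'<img src="https://{tracker_host}/open/{message_id}.gif" '
             'width="1" height="1" style="display:none" alt="">')
    return html_body + pixel, message_id

body, msg_id = add_tracking_pixel("<p>Regarding book interviews...</p>",
                                  "tracker.example.com")

# When the tracker's server later sees a GET request for that unique .gif,
# it logs the open time and infers device and rough location from the
# request's User-Agent header and IP address.
```

The recipient's mail client does all the work: simply rendering the message fetches the image, and the fetch is the "open" event.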

But lately, a surprising—and growing—number of tracked emails are being sent not from corporations, but acquaintances. “We have been in touch with users that were tracked by their spouses, business partners, competitors,” says Florian Seroussi, the founder of OMC. “It's the wild, wild west out there.”

According to OMC's data, a full 19 percent of all “conversational” email is now tracked. That’s one in five of the emails you get from your friends. And you probably never noticed.

“Surprisingly, while there is a vast literature on web tracking, email tracking has seen little research,” noted an October 2017 paper published by three Princeton computer scientists. All of this means that billions of emails are sent every day to millions of people who have never consented in any way to be tracked, but are being tracked nonetheless. And Seroussi believes that some, at least, are in serious danger as a result. (...)

I stumbled upon the world of email tracking last year, while working on a book about the iPhone and the notoriously secretive company that produces it. I’d reached out to Apple to request some interviews, and the PR team had initially seemed polite and receptive. We exchanged a few emails. Then they went radio silent. Months went by, and my unanswered emails piled up. I started to wonder if anyone was reading them at all.

That’s when, inspired by another journalist who’d been stonewalled by Apple, I installed the email tracker Streak. It was free, and took about 30 seconds. Then, I sent another email to my press contact. A notification popped up on my screen: My email had been opened almost immediately, inside Cupertino, on an iPhone. Then it was opened again, on an iMac, and again, and again. My messages were not only being read, but widely disseminated. It was maddening, watching the grey little notification box—“Someone just viewed ‘Regarding book interviews’”—pop up over and over and over, without a reply.

So I decided to go straight to the top. If Apple’s PR team was reading my emails, maybe Tim Cook would, too.

I wrote Cook a lengthy email detailing the reasons he should join me for an interview. When I didn’t hear back, I drafted a brief follow-up, enabled Streak, hit send. Hours later, I got the notification: My email had been read. Yet one glaring detail looked off. According to Streak, the email had been read on a Windows Desktop computer.

Maybe it was a fluke. But after a few weeks, I sent another follow up, and the email was read again. On a Windows machine.

That seemed crazy, so I emailed Streak to ask about the accuracy of its service, disclosing that I was a journalist. In the confusing email exchange with Andrew from Support that followed, I was told that Streak is “very accurate,” as it can let you know what time zone or state your lead is in—but only if you’re a salesperson. Andrew stressed that “if you’re a reporter and wanted to track someone's whereabouts, [it’s] not at all accurate.” It quickly became clear that Andrew had the unenviable task of threading a razor-thin needle: maintaining that Streak supplied very precise data while also being a friendly, non-intrusive product. After all, Streak users want the most accurate information possible, but the public might chafe if it knew just how accurate that data was—and considered what it could be used for besides honing sales pitches. This is the paradox that threatens to pop the email tracking bubble as it grows into ubiquity. No wonder Andrew got Orwellian: “Accuracy is entirely subjective,” he insisted, at one point.

Andrew did, however, unequivocally say that if Streak listed the kind of device used—as opposed to listing unknown—then that info was also “very accurate.” Even if it pertained to the CEO of Apple.

by Brian Merchant, Wired |  Read more:
Image: Getty

He Made Masterpieces with Manure

On the acknowledgements page of Traces of Vermeer, Jane Jelley thanks one friend who tracked down pig bladders and another who harvested mussel shells from a freshwater moat. Jelley, a painter, takes her research on the Dutch Golden Age painter Johannes Vermeer (1632–75) out of galleries and archives and into the studio. Her experiments are two parts Professor Branestawm, one part Great British Bake Off. She discovers that she can make yellow ‘lakes’ – pigments produced from dyes of the kind used by Vermeer and his contemporaries to create subtle ‘glazed’ effects – in her kitchen at home. First, you collect some unripe buckthorn berries from a hedgerow or the flowers of the broom shrub. Next, ‘You have to boil up the plants; and then you need some chalk, some alum; some coffee filters; and a large turkey baster.’ She reminds us how fortunate modern artists are to be able to buy their paint in ready-mixed tubes from Winsor & Newton.

Before he laid down even a dot of paint, Vermeer would have weighed, ground, burned, sifted, heated, cooled, kneaded, washed, filtered, dried and oiled his colours. Some pigments – the rare ultramarine blue made from lapis lazuli from Afghanistan, for example – had to be plunged into cold vinegar. Others – such as lead white – needed to be kept in a hut filled with horse manure. The fumes caused the lead to corrode, creating flakes of white carbonate that were scraped off by hand.

Vermeer knew how to soak old leather gloves to extract ‘gluesize’, applied as a coating to artists’ canvas. Or he might have followed the recipe for goat glue in Cennino Cennini’s painters’ manual The Craftsman’s Handbook: boiled clippings of goat muzzles, feet, sinews and skin. This was best made in January or March, in ‘great cold or high winds’, to disperse the goaty smell.

An artist had to be a chemist – and he had to have a strong stomach. He would have known, writes Jelley, ‘the useful qualities of wine, ash, urine, and saliva’. ‘Do not lick your brush or spatter your mouth with paint,’ warned Cennini. Lead white and arsenic yellow were poisonous, goat glue merely unpleasant. The art historian Jan Veth, writing in 1908 about Girl with a Pearl Earring (c 1665–7), fancied that Vermeer had painted with ‘the dust of crushed pearl’. Forensics have since revealed the earthier truth.

by Laura Freeman, Literary Review |  Read more:
Image: Wikipedia

Monday, December 11, 2017


Kimi Werner
[ed. Free diver extraordinaire (and rider of great white sharks).]

Jonas Wood, Scholl Canyon 2, 2017

via:
[ed. Mondays]

What to Make of New Positive NSI-189 Results?

I wanted NSI-189 to be real so badly.

Pharma companies used to love antidepressants. Millions of people are depressed. Millions of people who aren’t depressed think they are. Sell them all a pill per day for their entire lifetime, and you’re looking at a lot of money. So they poured money into antidepressant research, culminating in the ’80s and ’90s with the discovery of selective serotonin reuptake inhibitors (SSRIs) like Prozac. Since then, research has moved into exciting new areas, like “more SSRIs”, “even more SSRIs”, “drugs that claim to be SNRIs but on closer inspection are mostly just SSRIs”, and “drugs that claim to be complicated serotonin modulators but realistically just work as SSRIs”. Some companies still go through the pantomime of inventing new supposedly-not-SSRI drugs, and some psychiatrists still go through the pantomime of pretending to be excited about them, but nobody’s heart is really in it anymore.

How did it come to this? Apparently discovering new antidepressants is really hard. Part of it is that depression has such a high placebo response rate (realistically probably mostly regression to the mean) that it’s hard for even a good medication to separate much from placebo. Another part is that psychopharmacology is just a really difficult field even at the best of times. Pharma companies tried, tried some more, and gave up. All the new no-really-not-SSRIs are the fig leaf to cover their failure. Now people are gradually giving up on even pretending. There are still lots of exciting possibilities coming from the worlds of academia and irresponsible self-experimentation, but the Very Serious People have left the field. This is a disaster, insofar as they’re the only people who can get things through the FDA and into the mass market where anyone besides fringe enthusiasts will use them.

Enter NSI-189. A tiny pharma company called Neuralstem announced that they had a new antidepressant that worked directly on neurogenesis – a totally new mechanism! nothing at all like SSRIs! – and seemed to be getting miraculous results. Lots of people (including me) suspect neurogenesis is pretty fundamental to depression in a way serotonin isn’t, so the narrative really worked – we’ve finally figured out a way to hit the root cause of depression instead of fiddling around with knobs ten steps away from the actual problem. Irresponsible self-experimenters managed to synthesize and try some of it, and reported miraculous stories of treatment-resistant depressions vanishing overnight. Someone had finally done the thing!

There are many theories about what place our world holds in God’s creation. Here’s one with as much evidence as any other: Earth was created as a Hell for bad psychiatrists. For one thing, it would explain why there are so many of them here. For another, it would explain why – after getting all of our hopes so high – NSI-189 totally flopped in FDA trials.

I don’t think the data have been published anywhere (more evidence for the theory!), but we can read off the important parts of the story from Neuralstem’s press release. In Stage 1, they put 44 patients on 40 mg NSI-189 daily, another 44 patients on 80 mg daily, and 132 patients on placebo for six weeks. In Stage 2, they took the people from the placebo group who hadn’t gotten better in Stage 1 and put half of them on NSI-189, leaving the other half on placebo – I think this was a clever trick to get a group of people pre-selected for not responding to placebo and so avoid the problem where everyone does well on placebo and so it’s a washout. But all of this was for nothing. On the primary endpoint – a depression rating instrument called MADRS – the NSI-189 group failed to significantly outperform placebo during either stage.

Neuralstem’s stock fell 61% on news of the study. Financial blog Seeking Alpha advised readers that Neuralstem Is Doomed. Investors tripped over themselves to withdraw support from a corporation that apparently was unable to handle the absolute bread-and-butter most basic job of a pharma company – fudging clinical trial results so that nobody figures out they were negative until half the US population is on their drug.

From last month’s New York Times:
The first thing you feel when a [drug] trial fails is a sense of shame. You’ve let your patients down. You know, of course, that experimental drugs have a poor track record – but even so, this drug had seemed so promising (you cannot erase the image of the cancer cells dying under the microscope). You feel as if you’ve shortchanged the Hippocratic Oath […] 
There’s also a more existential shame. In an era when Big Pharma might have macerated the last drips of wonder out of us, it’s worth reiterating the fact: Medicines are notoriously hard to discover. The cosmos yields human drugs rarely and begrudgingly – and when a promising candidate fails to work, it is as if yet another chemical morsel of the universe has been thrown into the dumpster. The meniscus of disappointment rises inside you: That domain of human biology that the medicine hoped to target may never be breached therapeutically.
And so the rest of us gave a heavy sigh, shed a single tear, and went back to telling ourselves that maybe vortioxetine wasn’t exactly an SSRI, in ways.

II.

But the reason I’m writing about all of this now is that Neuralstem has just put out a new press release saying that actually, good news! NSI-189 works after all! Their stock rose 67%! Investment blogs are writing that Neuralstem Is A Big Winner and boasting about how much Neuralstem stock they were savvy enough to hold on to!

What are these new results? Can we believe them?

I’m still trying to figure out exactly what’s going on; the results themselves were presented at a conference and aren’t directly available. But from what I can gather from the press release, this isn’t a new trial. It’s a set of new secondary endpoints from the first trial, which Neuralstem thinks cast a new light on the results.

What are secondary endpoints? Often during a drug trial, people want to measure whether the drug works in multiple different ways. For depression, these are usually rating scales that ask about depressive symptoms – things like “On a scale of 1 to 5, how sad are you?” or “How many times in the past month have you considered suicide?”. You could give the MADRS, a scale that focuses on emotional symptoms. Or you could give the HAM-D, a scale that focuses more on psychosomatic symptoms. Or since depression makes people think less clearly, you could give them a cognitive battery. Depending on what you want to do, all of these are potentially good choices.

But once you let people start giving a lot of tests, there’s a risk that they’ll just keep giving more and more tests until they find one that gives results they like. Remember, even when a drug does nothing at all, one out of every twenty statistical analyses you run will come up positive at the 0.05 level by pure coincidence. So if you give people ten tests, you’ve got a pretty good chance of getting at least one positive result – at which point, you trumpet that one to the world.
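You can check that arithmetic directly. Here's a minimal sketch (the function name and trial counts are mine, not anything from the Neuralstem study): simulate a drug with zero effect, where each endpoint's p-value is uniform on [0, 1], and count how often at least one of the endpoints crosses the 0.05 line anyway.

```python
import random

def any_false_positive(n_endpoints: int, alpha: float = 0.05,
                       trials: int = 100_000) -> float:
    """Monte Carlo estimate of the chance that at least one of
    n_endpoints independent tests of a null (do-nothing) drug
    comes up 'significant' at the alpha level."""
    hits = 0
    for _ in range(trials):
        # Under the null hypothesis, each p-value is uniform on [0, 1],
        # so it lands below alpha with probability exactly alpha.
        if any(random.random() < alpha for _ in range(n_endpoints)):
            hits += 1
    return hits / trials

# Closed form for comparison: P(at least one) = 1 - (1 - alpha)^n
for n in (1, 5, 10):
    exact = 1 - (1 - 0.05) ** n
    print(f"{n:2d} endpoints: exact {exact:.3f}, "
          f"simulated {any_false_positive(n):.3f}")
```

With ten endpoints the exact figure is 1 − 0.95¹⁰ ≈ 0.40 – a two-in-five chance of a "positive result" from a drug that does nothing.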

Statisticians try to close this loophole by demanding researchers pre-identify a primary endpoint. That is, you have to say beforehand which test you want to count. You can do however many tests you want, but the other ones (“secondary endpoints”) are for your own amusement and edification. The primary endpoint is the one that the magical “p = 0.05 means it works” criterion gets applied to.
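The other standard fix, if you really do want to count several endpoints, is to shrink the per-test threshold so the overall false-positive rate stays at 0.05. A minimal sketch of the simplest such adjustment, the Bonferroni correction, using hypothetical p-values (not figures from the actual trial):

```python
def significant_after_bonferroni(p_values, alpha=0.05):
    """Return indices of tests that stay significant after dividing
    the threshold by the number of tests performed."""
    threshold = alpha / len(p_values)
    return [i for i, p in enumerate(p_values) if p < threshold]

# Five hypothetical secondary-endpoint p-values:
ps = [0.03, 0.20, 0.004, 0.65, 0.048]
# With five tests the adjusted threshold is 0.05 / 5 = 0.01,
# so the nominally 'significant' 0.03 and 0.048 no longer count.
print(significant_after_bonferroni(ps))  # → [2]
```

Bonferroni is conservative (it assumes nothing about correlations between endpoints), which is exactly why a press release built on uncorrected secondary endpoints should raise eyebrows.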

Neuralstem chose the MADRS scale as their primary endpoint and got a null result. This is what they released in July that had everybody so disappointed. The recently-released data are a bunch of secondary endpoints, some of which are positive. This is the new result that has everybody so excited.

You might be asking “Wait, I thought the whole point of having primary versus secondary endpoints was so people wouldn’t do that?” Well…yes. I’m trying to figure out if there’s any angle here besides “Company does thing that you’re not supposed to do because it can always give you positive results, gets positive results, publishes a press release”. I am not an expert here. But I can’t find one. (...)

Except…why did their stock jump 67%? We just got done talking about the efficient market hypothesis and the theory that the stock market is never wrong in a way detectable by ordinary humans.

First of all, maybe that’s wrong. My dad is a doctor, and he swears that he keeps making a lot of money from medical investments. He just sees some new medical product, says “Yeah, that sounds like the sort of thing that will work and become pretty popular”, and buys it. I keep telling him this cannot possibly work, and he keeps coming to me a year later telling me he made a killing and now has a new car. Maybe all financial theory is a total lie, and if you get a lucky feeling when looking at a company’s logo you should invest in them right away and you will always make a fortune.

Or maybe it’s that it’s not investors’ job to answer “Does this drug work?” but rather “Will investing in this stock make me money?”. Neuralstem has mentioned that they’ll be bringing these new results in front of the FDA, presumably in the hopes of getting a Phase III trial. FDA standards seem to have gotten looser lately, and maybe a fig leaf of positive results is all they need to give the go-ahead for a bigger trial anyway – after all, they wouldn’t be approving the drug, just saying more research is appropriate. Then maybe that trial would come out better. Or it would be big enough that they would discover some alternate use (remember, Viagra was originally developed to lower blood pressure, and only got switched to erectile dysfunction after Phase 1 trials). Or maybe Neuralstem will join the 21st century and hire a competent Obfuscation Department.

I don’t know. I’m beyond caring. The sign of a really deep depression is abandoning hope, and I’ve abandoned hope in NSI-189…

by Scott Alexander, Slate Star Codex |  Read more:
Image: via
[ed. See also: NSI-189: A Nootropic Antidepressant That Promotes Neurogenesis]

Why Corrupt Bankers Avoid Jail

Prosecution of white-collar crime is at a twenty-year low.

In the summer of 2012, a subcommittee of the U.S. Senate released a report so brimming with international intrigue that it read like an airport paperback. Senate investigators had spent a year looking into the London-based banking group HSBC, and discovered that it was awash in skulduggery. According to the three-hundred-and-thirty-four-page report, the bank had laundered billions of dollars for Mexican drug cartels, and violated sanctions by covertly doing business with pariah states. HSBC had helped a Saudi bank with links to Al Qaeda transfer money into the United States. Mexico’s Sinaloa cartel, which is responsible for tens of thousands of murders, deposited so much drug money in the bank that the cartel designed special cash boxes to fit HSBC’s teller windows. On a law-enforcement wiretap, one drug lord extolled the bank as “the place to launder money.”

With four thousand offices in seventy countries and some forty million customers, HSBC is a sprawling organization. But, in the judgment of the Senate investigators, all this wrongdoing was too systemic to be a matter of mere negligence. Senator Carl Levin, who headed the investigation, declared, “This is something that people knew was going on at that bank.” Half a dozen HSBC executives were summoned to Capitol Hill for a ritual display of chastisement. Stuart Gulliver, the bank’s C.E.O., said that he was “profoundly sorry.” Another executive, who had been in charge of compliance, announced during his testimony that he would resign. Few observers would have described the banking sector as a hotbed of ethical compunction, but even by the jaundiced standards of the industry HSBC’s transgressions were extreme. Lanny Breuer, a senior official at the Department of Justice, promised that HSBC would be “held accountable.”

What Breuer delivered, however, was the sort of velvet accountability to which large banks have grown accustomed: no criminal charges were filed, and no executives or employees were prosecuted for trafficking in dirty money. Instead, HSBC pledged to clean up its institutional culture, and to pay a fine of nearly two billion dollars: a penalty that sounded hefty but was only the equivalent of four weeks’ profit for the bank. The U.S. criminal-justice system might be famously unyielding in its prosecution of retail drug crimes and terrorism, but a bank that facilitated such activity could get away with a rap on the knuckles. A headline in the Guardian tartly distilled the absurdity: “HSBC ‘Sorry’ for Aiding Mexican Drug Lords, Rogue States and Terrorists.”

In the years since the mortgage crisis of 2008, it has become common to observe that certain financial institutions and other large corporations may be “too big to jail.” The Financial Crisis Inquiry Commission, which investigated the causes of the meltdown, concluded that the mortgage-lending industry was rife with “predatory and fraudulent practices.” In 2011, Ray Brescia, a professor at Albany Law School who had studied foreclosure procedures, told Reuters, “I think it’s difficult to find a fraud of this size . . . in U.S. history.” Yet federal prosecutors filed no criminal indictments against major banks or senior bankers related to the mortgage crisis. Even when the authorities uncovered less esoteric, easier-to-prosecute crimes—such as those committed by HSBC—they routinely declined to press charges.

This regime, in which corporate executives have essentially been granted immunity, is relatively new. After the savings-and-loan crisis of the nineteen-eighties, prosecutors convicted nearly nine hundred people, and the chief executives of several banks went to jail. When Rudy Giuliani was the top federal prosecutor in the Southern District of New York, he liked to march financiers off the trading floor in handcuffs. If the rules applied to mobsters like Fat Tony Salerno, Giuliani once observed, they should apply “to big shots at Goldman Sachs, too.” As recently as 2006, after the implosion of Enron, such titans as Jeffrey Skilling and Kenneth Lay were convicted of conspiracy and fraud.

Something has changed in the past decade, however, and federal prosecutions of white-collar crime are now at a twenty-year low. As Jesse Eisinger, a reporter for ProPublica, explains in a new book, “The Chickenshit Club: Why the Justice Department Fails to Prosecute Executives” (Simon & Schuster), a financial crisis has traditionally been followed by a legal crackdown, because a market contraction reveals all the wishful accounting and outright fraud that were hidden when the going was good. In Warren Buffett’s memorable formulation, “You only find out who is swimming naked when the tide goes out.” After the mortgage crisis, people in Washington and on Wall Street expected prosecutions. Eisinger reels off a list of potential candidates for criminal charges: Countrywide, Washington Mutual, Lehman Brothers, Citigroup, A.I.G., Bank of America, Merrill Lynch, Morgan Stanley. Although fines were paid, and the Financial Crisis Inquiry Commission referred dozens of cases to prosecutors, there were no indictments, no trials, no jail time. As Eisinger writes, “Passing on one investigation is understandable; passing on every single one starts to speak to something else.” (...)

The very conception of the modern corporation is that it limits individual liability. Yet, in the decades after the United Brands case, prosecutors often pursued both errant executives and the companies they worked for. When the investment firm Drexel Burnham Lambert was suspected of engaging in stock manipulation and insider trading, in the nineteen-eighties, prosecutors levelled charges not just against financiers at the firm, including Michael Milken, but also against the firm itself. (Drexel Burnham pleaded guilty, and eventually shut down.) After the immense fraud at Enron was exposed, federal authorities pursued its accounting company, Arthur Andersen, for helping to cook the books. Arthur Andersen executives, desperate to cover their tracks, deleted tens of thousands of e-mails and shredded documents by the ton. In 2002, Arthur Andersen was convicted of obstruction of justice, and lost its accounting license. The corporation, which had tens of thousands of employees, was effectively put out of business.

Eisinger describes the demise of Arthur Andersen as a turning point. Many lawyers, particularly in the well-financed realm of white-collar criminal defense, regarded the case as a flagrant instance of government overreach: the problem with convicting a company was that it could have “collateral consequences” that would be borne by employees, shareholders, and other innocent parties. “The Andersen case ushered in an era of prosecutorial timidity,” Eisinger writes. “Andersen had to die so that all other big corporations might live.”

With plenty of encouragement from high-end lobbyists, a new orthodoxy soon took hold that some corporations were so colossal—and so instrumental to the national economy—that even filing criminal charges against them would be reckless. In 2013, Eric Holder, then the Attorney General, acknowledged that decades of deregulation and mergers had left the U.S. economy heavily consolidated. It was therefore “difficult to prosecute” the major banks, because indictments could “have a negative impact on the national economy, perhaps even the world economy.”

Prosecutors came to rely instead on a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture. From 2002 to 2016, the Department of Justice entered into more than four hundred of these arrangements. Having spent a trillion dollars to bail out the banks in 2008 and 2009, the federal government may have been loath to jeopardize the fortunes of those banks by prosecuting them just a few years later. (...)

Numerous explanations have been offered for the failure of the Obama Justice Department to hold the big banks accountable: corporate lobbying in Washington, appeals-court rulings that tightened the definitions of certain types of corporate crime, the redirecting of investigative resources after 9/11. But Eisinger homes in on a subtler factor: the professional psychology of élite federal prosecutors. “The Chickenshit Club” is about a specific vocational temperament. When James Comey took over as the U.S. Attorney for the Southern District of New York, in 2002, Eisinger tells us, he summoned his young prosecutors for a pep talk. For graduates of top law schools, a job as a federal prosecutor is a brass ring, and the Southern District of New York, which has jurisdiction over Wall Street, is the most selective office of them all. Addressing this ferociously competitive cohort, Comey asked, “Who here has never had an acquittal or a hung jury?” Several go-getters, proud of their unblemished records, raised their hands.

But Comey, with his trademark altar-boy probity, had a surprise for them. “You are members of what we like to call the Chickenshit Club,” he said.

Most people who go to law school are risk-averse types. With their unalloyed drive to excel, the élite young attorneys who ascend to the Southern District have a lifetime of good grades to show for it. Once they become prosecutors, they are invested with extraordinary powers. In a world of limited public resources and unlimited wrongdoing, prosecutors make decisions every day about who should be charged and tried, who should be allowed to plead, and who should be let go. This is the front line of criminal justice, and decisions are made unilaterally, with no review by a judge. Even in the American system of checks and balances, there are few fetters on a prosecutor’s discretion. A perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney. But, as Comey implied, it could also mean that you’re taking only those cases you’re sure you’ll win—the lawyerly equivalent of enrolling in a gut class for the easy A.

You might suppose that the glory of convicting a blue-chip C.E.O. would be irresistible. But taking such a case to trial entails serious risk. In contemporary corporations, the decision-making process is so diffuse that it can be difficult to establish criminal culpability beyond a reasonable doubt. In the United Brands case, Eli Black directly authorized the bribe, but these days the precise author of corporate wrongdoing is seldom so clear. Even after a provision in the Sarbanes-Oxley Act, of 2002, began requiring C.E.O.s and C.F.O.s to certify the accuracy of corporate financial reports, few executives were charged with violating the law, because the companies threw up a thicket of subcertifications to buffer accountability.

As Samuel Buell, who helped prosecute the Enron and Andersen cases and is now a law professor at Duke, points out in his recent book, “Capital Offenses: Business Crime and Punishment in America’s Corporate Age,” an executive’s claim that he believed he was following the rules often poses “a severe, even disabling, obstacle to prosecution.” That is doubly so in instances where the alleged crime is abstruse. Even the professionals who bought and sold the dodgy mortgage-backed instruments that led to the financial crisis often didn’t understand exactly how they worked. How do you explicate such transactions—and prove criminal intent—to a jury?

Even with an airtight case, going to trial is always a gamble. Lose a white-collar criminal trial and you become a symbol of prosecutorial overreach. You might even set back the cause of corporate accountability. Plus, you’ll have a ding on your record. Eisinger quotes one of Lanny Breuer’s deputies in Washington telling a prosecutor, “If you lose this case, Lanny will have egg on his face.” Such fears can deter the most ambitious and scrupulous of young attorneys.

The deferred-prosecution agreement, by contrast, is a sure thing. Companies will happily enter into such an agreement, and even pay an enormous fine, if it means avoiding prosecution. “That rewards laziness,” David Ogden, a Deputy Attorney General in the Obama Administration, tells Eisinger. “The department gets publicity, stats, and big money. But the enormous settlements may or may not reflect that they could actually prove the case.” When companies agree to pay fines for misconduct, the agreements they sign are often conspicuously stinting in details about what they did wrong. Many agreements acknowledge criminal conduct by the corporation but do not name a single executive or officer who was responsible. “The Justice Department argued that the large fines signaled just how tough it had been,” Eisinger writes. “But since these settlements lacked transparency, the public didn’t receive basic information about why the agreement had been reached, how the fine had been determined, what the scale of the wrongdoing was and which cases prosecutors never took up.” These pas de deux between prosecutors and corporate chieftains came to feel “stage-managed, rather than punitive.”

by Patrick Radden Keefe, New Yorker | Read more:
Image: Eiko Ojala