Tuesday, February 20, 2018

The Singular Pursuit of Comrade Bezos

It was explicitly and deliberately a ratchet, designed to effect a one-way passage from scarcity to plenty by way of stepping up output each year, every year, year after year. Nothing else mattered: not profit, not the rate of industrial accidents, not the effect of the factories on the land or the air. The planned economy measured its success in terms of the amount of physical things it produced.
— Francis Spufford,
Red Plenty

But isn’t a business’s goal to turn a profit? Not at Amazon, at least in the traditional sense. Jeff Bezos knows that operating cash flow gives the company the money it needs to invest in all the things that keep it ahead of its competitors, and recover from flops like the Fire Phone. Up and to the right.
— Recode, “Amazon’s Epic 20-Year Run as a Public Company, Explained in Five Charts”

From a financial point of view, Amazon doesn’t behave much like a successful 21st-century company. Amazon has not bought back its own stock since 2012. Amazon has never offered its shareholders a dividend. Unlike its peers Google, Apple, and Facebook, Amazon does not hoard cash. It has only recently started to record small, predictable profits. Instead, whenever it has resources, Amazon invests in capacity, which results in growth at a ridiculous clip. When the company found itself with $13.8 billion lying around, it bought a grocery chain for $13.7 billion. As the Recode story referenced above summarizes in one of the graphs: “It took Amazon 18 years as a public company to catch Walmart in market cap, but only two more years to double it.” More than a profit-seeking corporation, Amazon is behaving like a planned economy.

If there is one story Americans who grew up after the fall of the Berlin Wall know about planned economies, I’d wager it’s the one about Boris Yeltsin in a Texas supermarket.

In 1989, recently elected to the Supreme Soviet, Yeltsin came to America, in part to see Johnson Space Center in Houston. On an unscheduled jaunt, the Soviet delegation visited a local supermarket. Photos from the Houston Chronicle capture the day: Yeltsin, overcome by a display of Jell-O Pudding Pops; Yeltsin inspecting the onions; Yeltsin staring down a full display of shiny produce like a line of enemy soldiers. Planning could never master the countless variables that capitalism calculated using the tireless machine of self-interest. According to the story, the overflowing shelves filled Yeltsin with despair for the Soviet system, turned him into an economic reformer, and spelled the end for state socialism as a global force. We’re taught this lesson in public schools, along with Animal Farm: Planned economies do not work.

It’s almost 30 years later, but if Comrade Yeltsin had visited today’s most-advanced American grocery stores, he might not have felt so bad. Journalist Hayley Peterson summarized her findings in the title of her investigative piece, “‘Seeing Someone Cry at Work Is Becoming Normal’: Employees Say Whole Foods Is Using ‘Scorecards’ to Punish Them.” The scorecard in question measures compliance with the (Amazon subsidiary) Whole Foods OTS, or “on-the-shelf” inventory management. OTS is exhaustive, replacing a previously decentralized system with inch-by-inch centralized standards. Those standards include delivering food from trucks straight to the shelves, skipping the expense of stockrooms. This has resulted in produce displays that couldn’t bring down North Korea. Has Bezos stumbled into the problems with planning?

Although OTS was in play before Amazon purchased Whole Foods last August, stories about enforcement to tears fit with the Bezos ethos and reputation. Amazon is famous for pursuing growth and large-scale efficiencies, even when workers find the experiments torturous and when they don’t make a lot of sense to customers, either. If you receive a tiny item in a giant Amazon box, don’t worry. Your order is just one small piece in an efficiency jigsaw that’s too big and fast for any individual human to comprehend. If we view Amazon as a planned economy rather than just another market player, it all starts to make more sense: We’ll thank Jeff later, when the plan works. And indeed, with our dollars, we have.

In fact, to think of Amazon as a “market player” is a mischaracterization. The world’s biggest store doesn’t use suggested retail pricing; it sets its own. Book authors (to use a personal example) receive a distinctly lower royalty for Amazon sales because the site has the power to demand lower prices from publishers, who in turn pass on the tighter margins to writers. But for consumers, it works! Not only are books significantly cheaper on Amazon, the site also features a giant stock that can be shipped to you within two days, for free with Amazon Prime citizensh…er, membership. All 10 or so bookstores I frequented as a high school and college student have closed, yet our access to books has improved — at least as far as we seem to be able to measure. It’s hard to expect consumers to feel bad enough about that to change our behavior.

Although they attempt to grow in a single direction, planned economies always destroy as well as build. In the 1930s, the Soviet Union compelled the collectivization of kulaks, or prosperous peasants. Small farms were incorporated into a larger collective agricultural system. Depending on who you ask, dekulakization was literal genocide, comparable to the Holocaust, and/or it catapulted what had been a continent-sized expanse of peasants into a modern superpower. Amazon’s decimation of small businesses (bookstores in particular) is a similar sort of collectivization, purging small proprietors or driving them onto Amazon platforms. The process is decentralized and executed by the market rather than the state, but don’t get confused: Whether or not Bezos is banging on his desk, demanding the extermination of independent booksellers — though he probably is — these are top-down decisions to eliminate particular ways of life. (...)

Amazon has succeeded in large part because of the company’s uncommon drive to invest in growth. And today, not only are other companies slow to spend, so are governments. Austerity politics and decades of privatization put Amazon in a place to take over state functions. If localities can’t or won’t invest in jobs, then Bezos can get them to forgo tax dollars (and dignity) to host HQ2. There’s no reason governments couldn’t offer on-demand cloud computing services as a public utility, but instead the feds pay Amazon Web Services to host their sites. And if the government outsources health care for its population to insurers who insist on making profits, well, stay tuned. There’s no near-term natural end to Amazon’s growth, and by next year the company’s annual revenue should surpass the GDP of Vietnam. I don’t see any reason why Amazon won’t start building its own cities in the near future.

America never had to find out whether capitalism could compete with the Soviets plus 21st-century technology. Regardless, the idea that market competition can better set prices than algorithms and planning is now passé. Our economists used to scoff at the Soviets’ market-distorting subsidies; now Uber subsidizes every ride. Compared to the capitalists who are making their money by stripping the copper wiring from the American economy, the Bezos plan is efficient. So, with the exception of small business owners and managers, why wouldn’t we want to turn an increasing amount of our life-world over to Amazon? I have little doubt the company could, from a consumer perspective, improve upon the current public-private mess that is Obamacare, for example. Between the patchwork quilt of public- and private-sector scammers that run America today and “up and to the right,” life in the Amazon with Lex Luthor doesn’t look so bad. At least he has a plan, unlike some people.

From the perspective of the average consumer, it’s hard to beat Amazon. The single-minded focus on efficiency and growth has worked, and delivery convenience is perhaps the one area of American life that has kept up with our past expectations for the future. However, we do not make the passage from cradle to grave as mere average consumers. Take a look at package delivery, for example: Amazon’s latest disruptive announcement is “Shipping with Amazon,” a challenge to the USPS, from which Amazon has been conniving preferential rates. As a government agency bound to serve everyone, the Postal Service has had to accept all sorts of inefficiencies, like free delivery for rural customers or subsidized media distribution to realize freedom of the press. Amazon, on the other hand, is a private company that doesn’t really have to do anything it doesn’t want to do. In aggregate, as average consumers, we should be cheering. Maybe we are. But as members of a national community, I hope we stop to ask if efficiency is all we want from our delivery infrastructure. Lowering costs as far as possible sounds good until you remember that one of those costs is labor. One of those costs is us.

Earlier this month, Amazon was awarded two patents for a wristband system that would track the movement of warehouse employees’ hands in real time. It’s easy to see how this is a gain in efficiency: If the company can optimize employee movements, everything can be done faster and cheaper. It’s also easy to see how, for those workers, this is a significant step down the path into a dystopian hellworld. Amazon is a notoriously brutal, draining place to work, even at the executive levels. The fear used to be that if Amazon could elbow out all its competitors with low prices, it would then jack them up, Martin Shkreli style. That’s not what happened. Instead, Amazon and other monopsonists have used their power to drive wages and the labor share of production down. If you follow the Bezos strategy all the way, it doesn’t end in fully automated luxury communism or even Wall-E. It ends in The Matrix, with workers swaddled in a pod of perfect convenience and perfect exploitation. Central planning in its capitalist form turns people into another cost to be reduced as low as possible.

Just because a plan is efficient doesn’t mean it’s good. Postal Service employees are unionized; they have higher wages, paths for advancement, job stability, negotiated grievance procedures, health benefits, vacation time, etc. Amazon delivery drivers are not and do not. That difference counts as efficiency when we measure by price, and that is, to my mind, a very good argument for not handing the world over to the king of efficiency.

by Malcolm Harris, Medium |  Read more:

Salon to Ad Blockers: Can We Use Your Browser to Mine Cryptocurrency?

Salon.com has a new, cryptocurrency-driven strategy for making money when readers block ads. If you want to read Salon without seeing ads, you can do so—as long as you let the website use your spare computing power to mine some coins.

If you visit Salon with an ad blocker enabled, you might see a pop-up that asks you to disable the ad blocker or "Block ads by allowing Salon to use your unused computing power."

Salon explains what's going on in a new FAQ. "How does Salon make money by using my processing power?" the FAQ says. "We intend to use a small percentage of your spare processing power to contribute to the advancement of technological discovery, evolution, and innovation. For our beta program, we'll start by applying your processing power to help support the evolution and growth of blockchain technology and cryptocurrencies."

While that's a bit vague, a second Salon.com pop-up says that Salon is using Coinhive for "calculations [that] are securely executed in your browser's sandbox." The Coinhive pop-up on Salon.com provides the option to cancel or allow the mining to occur for one browser session. Clicking "more info" brings you to a Coinhive page.

We wrote about Coinhive in October 2017. Coinhive "harnesses the CPUs of millions of PCs to mine the Monero crypto currency. In turn, Coinhive gives participating sites a tiny cut of the relatively small proceeds."

It really does use a lot of CPU power

I enabled the mining on Salon.com today in order to see how much computing power it used. In Chrome's task manager, I got CPU readings of 426.7 and higher for a Salon tab.

The Chrome helper's CPU use shot up to 499 on my 2016 MacBook Pro, a highly unusual total on my computer even for the Chrome browser. That's out of a total of 800%, which accounts for four cores that each run two threads.

The bottom of my laptop started heating up a little, but the computer still worked normally otherwise. With that high Chrome usage, the Mac Activity Monitor said about 24 percent of my CPU power was still idle. After I disabled Salon's cryptocurrency mining, my idle CPU power went back up to a more typical 70 to 80 percent.
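The arithmetic behind those percentages can be sketched in a few lines (a minimal illustration using the figures reported above; the variable names are mine, not from any Chrome or macOS API):

```python
# Figures from the article: a 2016 MacBook Pro with a quad-core Intel
# Skylake processor running two threads per core. Chrome's task manager
# and Activity Monitor both report CPU usage against the logical-core
# total, so "100%" means one fully busy thread, not the whole machine.
cores = 4
threads_per_core = 2
total_capacity = cores * threads_per_core * 100  # 800 "percent" available

# The Chrome helper's reading while Salon's miner was running:
chrome_usage = 499

# Fraction of the whole machine consumed by that one process:
machine_share = chrome_usage / total_capacity
print(f"{machine_share:.0%} of total CPU capacity")
```

This is why a reading like 499% is alarming but not a lockup: it still leaves roughly three idle threads' worth of headroom, matching the ~24 percent idle figure Activity Monitor showed.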

The computer I used for this experiment has a quad-core, Intel Core i7 Skylake processor. People with different computers will obviously get different results. While Salon's mining might not lock your computer up, I still wouldn't want it running in the background, especially if I were away from a power outlet.

Salon: No risk to security

On Salon, readers aren't forced into cryptocurrency mining because of the site's opt-in system. But in other cases, users have been unaware that Coinhive was being used on their systems. Researchers "from security firm Sucuri warned that at least 500 websites running the WordPress content management system alone had been hacked to run the Coinhive mining scripts," we wrote in the October 2017 article.

Cryptojacking continues to be a problem, as we've detailed in several additional articles, including one yesterday.

Users being caught unaware shouldn't happen at Salon, which makes it clear that readers don't have to opt in to the mining and says that users' security isn't compromised.

"This happens only when you are browsing Salon.com," the site's FAQ says. "Nothing is ever installed on your computer, and Salon never has access to your personal information or files."

Salon notes that ads allow the site to make money from readers without requiring them to pay for subscriptions.

by Jon Brodkin, Ars Technica |  Read more:
Images: Jon Brodkin

Monday, February 19, 2018

Chuck Berry

The Politics of Shame

Donald Trump is a bad president. But that’s not why we loathe him.

Indifference to the environment, the human cost of a tattered social safety net, and the risks attendant to reckless nuclear threats are hardly unique aspects of Trump’s presidency: they’re the American way. It’s certainly alarming that Trump has repealed common-sense environmental regulations, threatened social services, and withdrawn from the Iran deal. But those acts, which would also feature in a hypothetical Ted Cruz presidency, don’t explain the scale of the reaction to Trump. They don’t account for the existence of neologisms like “Trumpocalypse,” or tell us why late-night hosts and satirists are constantly inventing new, creative ways to mock POTUS’s weave.

The feature that makes Trump unique, and the focus of a particular kind of outrage and contempt, is not his policy prescriptions or even his several hundred thousand character failings. God knows plenty of presidents have been horrible people. What sets Trump apart is his shamelessness.

For example, Trump is not the first president or popular public figure to be accused of sexual assault—it’s a crowded field these days. But he was the first to adopt a “takes one to know one” defense—using his political opponent’s husband’s accusers as a human shield to deflect personal responsibility.

Instead of following the prescribed political ritual for making amends after being caught in flagrante, namely, a contrite press conference featuring a stiff if loyal wife, Trump chose to go on the offensive, even insisting that the infamous “pussy tape” must have been a fabrication. Trump established the pattern of “doubling down” early on when he refused to walk back his comment that John McCain’s capture and subsequent torture during the Vietnam War disqualified him from being considered a war hero. It seems attempts to shame Trump only provoke more shameful acts which fail to faze him.

Where other presidents have been cagey, Trump is brazen. He did not invent the Southern Strategy, but he was the first to employ it with so little discretion that the term “dog whistling” now feels too subtle. (Remember, the tiki-marchers shouting “Jew[s] will not replace us” contained among them some “very fine people.”) There is no shortage of vain politicians, but while John Edwards felt compelled to apologize for his $400 haircut, Trump flaunts his saffron pompadour and matching face. Nepotism may be as old as the Borgias, but the boldness with which Trump has appointed family members and their agents to positions of authority still manages to stun. And while nuclear brinksmanship was a defining feature of 20th century presidencies, never before has the “leader of the free world” literally bragged about the size of his big red button and attempted to fat-shame the leader of a rival nuclear power.

Even when it appears as if Trump is on the verge of an apology or admission, he quickly lapses back into shamelessness. When Trump was criticized for lamenting violence on “many sides” following Heather Heyer’s murder in Charlottesville, Trump was pressured by advisers into releasing a statement explicitly condemning neo-Nazis. But he soon walked it back, once again blaming “both sides” and the “alt-left” for being “very violent.” (Again, remember which side featured a white supremacist who killed a woman.)

This impudence, this shamelessness, is essentially Trump’s calling card. And those who object to it have often sought to restore the balance by trying even harder to shame him, or, in the alternative, by trying to shame his followers into acting like reasonable human beings. “Shame on all of us for making Donald Trump a Thing,” wrote conservative writer Pascal-Emmanuel Gobry back in 2015. Throngs of protestors chanted “shame, shame, shame” along Trump’s motorcade route after his “fine people” remark. The Guardian’s Jessica Valenti wrote that shaming is both justified and “necessary” because “there are people right now who should be made to feel uncomfortable” because “what they have done is shameful.”

MSNBC host Joy Ann Reid has also sought to shame Trump voters, for example by tweeting: “Last November, 63 million of you voted to pretty much hand this country over to a few uber wealthy families and the religious far right. Well done.” Washington Monthly contributor David Atkins has echoed this sentiment, tweeting: “Good news white working class! Your taxes will go up, your Medicare will be cut and your kid’s student loans will be more expensive. But at least Don Jr can bring back elephant trunks on his tax deductible private jet, so it’s all good.”

“How could he/they!?” is a popular way to start sentences about either Trump or his supporters. The statistic that 53% of white women voted for Trump (how could they??) is a useful tool both for shaming others and the self-flagellation-cum-virtue-signaling characteristic of some white women who “knew better.” Even suggesting that politicians talk to Trump voters is grounds for ridicule. Forget scarlet letters—nothing short of community expulsion will do. They’re “irredeemable” after all. So why bother “reaching out”?

Believe me, I empathize. Trump’s policies hurt people, and the people who voted for him did so willingly. Given the easy-to-anticipate consequences of their votes, Trump voters do seem like bad people who should be ashamed. We’re often encouraged to engage more civilly with “people who disagree with us,” but the divergent value systems reflected by America’s two major political parties cut to the core of who we are. They are not necessarily mere disagreements, but deep moral schisms, which is why commentators like Valenti insist that a high level of outrage is appropriate to the circumstances. If you’re not outraged, you’re not taking seriously enough the harm done to the immigrant families torn apart by ICE. Mere fact-based criticisms of various policy positions feel inadequate, as if they trivialize the moral issues involved. It seems important to add that various beliefs, themselves, are shameful. No wonder, then, that the shared impulse isn’t just to disagree, but to “drag,” destroy, and decimate.

Given what’s at stake, I understand why shaming feels not only appropriate, but compulsory. It’s an inclination I share and sympathize with.

But in practice, I think it’s a mistake.

by Briahna Joy Gray, Current Affairs |  Read more:
Image: Tyler Rubenfeld

Freestyle Skier's Complex Path Offers Olympic Rorschach Test

The words left Liz Swaney's lips without an ounce of irony. No telling curl of the lips. No wink. Nothing. She meant them. All of them.

"I didn't qualify for finals so I'm really disappointed," the 33-year-old Californian said after coming in last in the 24-woman field during Olympic women's halfpipe qualifying on Monday.

She seemed ... surprised.

Even though her score of 31.40 was more than 40 points behind France's Anais Caradeux, whose 72.80 marked the lowest of the 12 skiers to move on to Tuesday's medal round.

Even though Swaney finished in about the same position in each of the dozen events she competed in across the globe over the last four years in the run-up to the Pyeongchang Games.

Even though her two qualifying runs at Phoenix Snow Park featured little more than Swaney riding up the halfpipe wall before turning around in the air and skiing to the other side. It was a sequence she repeated a handful of times before capping her final trip with a pair of "alley oops," basically inward 180-degree turns more fitting for the local slopes than the world's largest sporting event.

Halfpipe is judged on a 100-point scale. Swaney has yet to break 40 in an FIS-sanctioned competition, not because she regularly wipes out trying to throw difficult tricks but because she doesn't even try them.

Yet she's here in South Korea anyway as part of the Hungarian delegation, the latest in a series of quixotic pursuits that range from running for governor of California as a 19-year-old student at Berkeley to trying out for the Oakland Raiders cheerleading team to mounting a push to reach the Olympics as a skeleton racer for Venezuela. She only started skiing eight years ago and only got serious about it after the skeleton thing didn't take.

"I still want to inspire people to get involved with athletics or a new sport or a new challenge at any age in life," she said.

A tale that's hardly new, though Swaney's unusual path offers a Rorschach test of sorts on what the Olympics actually mean.

The games have long trafficked in the soft-focus narrative of plucky dreamers with no shot. Think Eddie "The Eagle" Edwards, the bespectacled British ski jumper; the Jamaican bobsled team or relentlessly shirtless Tongan Olympian Pita Taufatofua , who came in 114th out of 116 skiers in the 15-kilometer cross-country race last week.

Korea entered 19-year-old Kyoungeun Kim in women's aerials earlier in the Games, the first skier from the host country to take on the event at the Olympics. Kim came in dead last while going off the smallest of the five "kicker" ramps used at the Games, the back flip she completed in the second round akin to something a teenager might do off a diving board at the neighborhood pool.

Taufatofua and Edwards embraced the uniqueness of their stories, fully allowing they were simply happy to be at the Olympics and nothing more. Kim represented Korea's first foray into the sport. All three of them competed for the countries they were born in.

Swaney's story is more complicated.

Let's get this out of the way early: she did nothing illegal to get here. She racked up the required FIS points to reach the Olympic standard. She went through the necessary hoops to join Team Hungary, the connection coming from her Hungarian maternal grandfather, who she said would have turned 100 on Tuesday. She's spent more than her fair share of money hopscotching continents chasing a dream she says was hatched watching the 1992 Games.

It was not easy and it was not cheap. Yet she kept at it. Keeping at it is kind of her thing. No matter how you try to frame the questions, the answers come back the same. She swears this isn't a publicity stunt. This is real.

"I'm trying to soak in the Olympic experience but also focusing on the halfpipe here and trying to go higher each time and getting more spins in," said Swaney, who wore bib No. 23 and stars-and-stripes goggles not as some sort of statement but basically because they were the least expensive in the athletes' store.

One problem. Swaney doesn't go very high. She doesn't spin very much.

Canadian Cassie Sharpe packed more twists into the first two tricks of her qualifying-topping run than Swaney did all day. In an event making its second Olympic appearance, one focused on progression and pushing the edge, Swaney's tentative, decidedly grounded trips down the pipe play in stark contrast to everyone else.

The English-language announcer stayed largely quiet during Swaney's second run because, well, there wasn't much to describe. As Lady Gaga blared over the speakers, the crowd watched in silence save for sparse applause at the end. The judges rewarded her for doing back-to-back "alley oops," with the American judge even giving her a 33. It was better than her first run, but it was nowhere near world class.

Swaney is here because she earned her way in.

Still, it leads to the inevitable question: should she be?

by Will Graves, AP |  Read more:
Image: Getty via Yahoo

Sunday, February 18, 2018

Nicole McCormick Santiago, Blue Room

“Fuck You, I Like Guns.”

America, can we talk? Let’s just cut the shit for once and actually talk about what’s going on without blustering and pretending we’re actually doing a good job at adulting as a country right now. We’re not. We’re really screwing this whole society thing up, and we have to do better. We don’t have a choice. People are dying. At this rate, it’s not if your kids, or mine, are involved in a school shooting, it’s when. One of these happens every 60 hours on average in the US. If you think it can’t affect you, you’re wrong. Dead wrong. So let’s talk.

I’ll start. I’m an Army veteran. I like M-4s, which are, for all practical purposes, an AR-15, just with a few extra features that people almost never use anyway. I’d say at least 70% of my formal weapons training is on that exact rifle, with the other 30% being split between various and sundry machine guns and grenade launchers. My experience is pretty representative of soldiers of my era. Most of us are really good with an M-4, and most of us like it at least reasonably well, because it is an objectively good rifle. I was good with an M-4, really good. I earned the Expert badge every time I went to the range, starting in Basic Training. This isn’t uncommon. I can name dozens of other soldiers/veterans I know personally who can say the exact same thing. This rifle is surprisingly easy to use, completely idiot-proof really, has next to no recoil, comes apart and cleans up like a dream, and is light to carry around. I’m probably more accurate with it than I would be with pretty much any other weapon in existence. I like this rifle a lot. I like marksmanship as a sport. When I was in the military, I enjoyed combining these two things as often as they’d let me.

With all that said, enough is enough. My knee jerk reaction is to consider weapons like the AR-15 no big deal because it is my default setting. It’s where my training lies. It is my normal, because I learned how to fire a rifle IN THE ARMY. You know, while I may only have shot plastic targets on the ranges of Texas, Georgia, and Missouri, that’s not what those weapons were designed for, and those targets weren’t shaped like deer. They were shaped like people. Sometimes we even put little hats on them. You learn to take a gut shot, “center mass”, because it’s a bigger target than the head, and also because if you maim the enemy soldier rather than killing him cleanly, more of his buddies will come out and get him, and you can shoot them, too. He’ll die of those injuries, but it’ll take him a while, giving you the chance to pick off as many of his compadres as you can. That’s how my Drill Sergeant explained it anyway. I’m sure there are many schools of thought on it. The fact is, though, when I went through my marksmanship training in the US Army, I was not learning how to be a competition shooter in the Olympics, or a good hunter. I was being taught how to kill people as efficiently as possible, and that was never a secret.

As an avowed pacifist now, it turns my stomach to even type the above words, but can you refute them? I can’t. Every weapon that a US Army soldier uses has the express purpose of killing human beings. That is what they are made for. The choice rifle for years has been some variant of what civilians are sold as an AR-15. Whether it was an M-4 or an M-16 matters little. The function is the same, and so is the purpose. These are not deer rifles. They are not target rifles. They are people killing rifles. Let’s stop pretending they’re not.

With this in mind, is anybody surprised that nearly every mass shooter in recent US history has used an AR-15 to commit their crime? And why wouldn’t they? High capacity magazine, ease of loading and unloading, almost no recoil, really accurate even without a scope, but numerous scopes available for high precision, great from a distance or up close, easy to carry, and readily available. You can buy one at Wal-Mart, or just about any sports store, and since they’re long guns, I don’t believe you have to be any more than 18 years old with a valid ID. This rifle was made for the modern mass shooter, especially the young one. If he could custom design a weapon to suit his sinister purposes, he couldn’t do a better job than Armalite did with this one already.

This rifle is so deadly and so easy to use that no civilian should be able to get their hands on one. We simply don’t need these things in society at large. I always find it interesting that when I was in the Army, and part of my job was to be incredibly proficient with this exact weapon, I never carried one at any point in garrison other than at the range. Our rifles lived in the arms room, cleaned and oiled, ready for the next range day or deployment. We didn’t carry them around just because we liked them. We didn’t bluster on about barracks defense and our second amendment rights. We tucked our rifles away in the arms room until the next time we needed them, just as it had been done since the Army’s inception. The military police protected us from threats in garrison. They had 9 mm Berettas to carry. They were the only soldiers who carry weapons in garrison. We trusted them to protect us, and they delivered. With notably rare exceptions, this system has worked well. There are fewer shootings on Army posts than in society in general, probably because soldiers are actively discouraged from walking around with rifles, despite being impeccably well trained with them. Perchance, we could have the largely untrained civilian population take a page from that book?

I understand that people want to be able to own guns. That’s ok. We just need to really think about how we’re managing this. Yes, we have to manage it, just as we manage car ownership. People have to get a license to operate a car, and if you operate a car without a license, you’re going to get in trouble for that. We manage all things in society that can pose a danger to other people by their misuse. In addition to cars, we manage drugs, alcohol, exotic animals (there are certain zip codes where you can’t own Serval cats, for example), and fireworks, among other things. We restrict what types of businesses can operate in which zones of the city or county. We have a whole system of permitting for just about any activity a person wants to conduct since those activities could affect others, and we realize, as a society, that we need to try to minimize the risk to other people that comes from the chosen activities of those around them in which they have no say. Gun ownership is the one thing our country collectively refuses to manage, and the result is a lot of dead people.

I can’t drive a Formula One car to work. It would be really cool to be able to do that, and I could probably cut my commute time by a lot. Hey, I’m a good driver, a responsible Formula One owner. You shouldn’t be scared to be on the freeway next to me as I zip around you at 140 MPH, leaving your Mazda in a cloud of dust! Why are you scared? Cars don’t kill people. People kill people. Doesn’t this sound like bullshit? It is bullshit, and everybody knows it. Not one person I know would argue non-ironically that Formula One cars on the freeway are a good idea. Yet, these same people will say it’s totally ok to own the firearm equivalent because, in the words of comedian Jim Jefferies, “fuck you, I like guns”.

Yes, yes, I hear you now. We have a second amendment to the constitution, which must be held sacrosanct over all other amendments. Dude. No. The constitution was made to be a malleable document. It’s intentionally vague. We can enact gun control without infringing on the right to bear arms. You can have your deer rifle. You can have your shotgun that you love to shoot clay pigeons with. You can have your target pistol. Get a license. Get a training course. Recertify at a predetermined interval. You do not need a military-grade rifle. You don’t. There’s no excuse.

“But we’re supposed to protect against tyranny! I need the same weapons the military would come at me with!” Dude. You know where I can get an Apache helicopter and a Paladin?! Hook a girl up! Seriously, though, do you really think you’d be able to hold off the government with an individual-level weapon? Because you wouldn’t. One grenade, and you’re toast. Don’t have these illusions of standing up to the government and needing military-style rifles for that purpose. You’re not going to stand up to the government with this thing. They’d take you out in about half a second.

Let’s be honest. You just want a cool toy, and for the vast majority of people, that’s all an AR-15 is.

by Anna, EPSAAS |  Read more:
Image: uncredited
[ed. See also: America is Under Attack and the President Doesn't Care... nor do Republicans if it'll help in the next election.]

The Tyranny of Convenience

Convenience is the most underestimated and least understood force in the world today. As a driver of human decisions, it may not offer the illicit thrill of Freud’s unconscious sexual desires or the mathematical elegance of the economist’s incentives. Convenience is boring. But boring is not the same thing as trivial.

In the developed nations of the 21st century, convenience — that is, more efficient and easier ways of doing personal tasks — has emerged as perhaps the most powerful force shaping our individual lives and our economies. This is particularly true in America, where, despite all the paeans to freedom and individuality, one sometimes wonders whether convenience is in fact the supreme value.

As Evan Williams, a co-founder of Twitter, recently put it, “Convenience decides everything.” Convenience seems to make our decisions for us, trumping what we like to imagine are our true preferences. (I prefer to brew my coffee, but Starbucks instant is so convenient I hardly ever do what I “prefer.”) Easy is better, easiest is best.

Convenience has the ability to make other options unthinkable. Once you have used a washing machine, laundering clothes by hand seems irrational, even if it might be cheaper. After you have experienced streaming television, waiting to see a show at a prescribed hour seems silly, even a little undignified. To resist convenience — not to own a cellphone, not to use Google — has come to require a special kind of dedication that is often taken for eccentricity, if not fanaticism.

For all its influence as a shaper of individual decisions, the greater power of convenience may arise from decisions made in aggregate, where it is doing so much to structure the modern economy. Particularly in tech-related industries, the battle for convenience is the battle for industry dominance.

Americans say they prize competition, a proliferation of choices, the little guy. Yet our taste for convenience begets more convenience, through a combination of economies of scale and the power of habit. The easier it is to use Amazon, the more powerful Amazon becomes — and thus the easier it becomes to use Amazon. Convenience and monopoly seem to be natural bedfellows.

Given the growth of convenience — as an ideal, as a value, as a way of life — it is worth asking what our fixation with it is doing to us and to our country. I don’t want to suggest that convenience is a force for evil. Making things easier isn’t wicked. On the contrary, it often opens up possibilities that once seemed too onerous to contemplate, and it typically makes life less arduous, especially for those most vulnerable to life’s drudgeries.

But we err in presuming convenience is always good, for it has a complex relationship with other ideals that we hold dear. Though understood and promoted as an instrument of liberation, convenience has a dark side. With its promise of smooth, effortless efficiency, it threatens to erase the sort of struggles and challenges that help give meaning to life. Created to free us, it can become a constraint on what we are willing to do, and thus in a subtle way it can enslave us.

It would be perverse to embrace inconvenience as a general rule. But when we let convenience decide everything, we surrender too much.

Convenience as we now know it is a product of the late 19th and early 20th centuries, when labor-saving devices for the home were invented and marketed. Milestones include the invention of the first “convenience foods,” such as canned pork and beans and Quaker Quick Oats; the first electric clothes-washing machines; cleaning products like Old Dutch scouring powder; and other marvels including the electric vacuum cleaner, instant cake mix and the microwave oven.

Convenience was the household version of another late-19th-century idea, industrial efficiency, and its accompanying “scientific management.” It represented the adaptation of the ethos of the factory to domestic life.

However mundane it seems now, convenience, the great liberator of humankind from labor, was a utopian ideal. By saving time and eliminating drudgery, it would create the possibility of leisure. And with leisure would come the possibility of devoting time to learning, hobbies or whatever else might really matter to us. Convenience would make available to the general population the kind of freedom for self-cultivation once available only to the aristocracy. In this way convenience would also be the great leveler.

This idea — convenience as liberation — could be intoxicating. Its headiest depictions are in the science fiction and futurist imaginings of the mid-20th century. From serious magazines like Popular Mechanics and from goofy entertainments like “The Jetsons” we learned that life in the future would be perfectly convenient. Food would be prepared with the push of a button. Moving sidewalks would do away with the annoyance of walking. Clothes would clean themselves or perhaps self-destruct after a day’s wearing. The end of the struggle for existence could at last be contemplated.

The dream of convenience is premised on the nightmare of physical work. But is physical work always a nightmare? Do we really want to be emancipated from all of it? Perhaps our humanity is sometimes expressed in inconvenient actions and time-consuming pursuits. Perhaps this is why, with every advance of convenience, there have always been those who resist it. They resist out of stubbornness, yes (and because they have the luxury to do so), but also because they see a threat to their sense of who they are, to their feeling of control over things that matter to them.

By the late 1960s, the first convenience revolution had begun to sputter. The prospect of total convenience no longer seemed like society’s greatest aspiration. Convenience meant conformity. The counterculture was about people’s need to express themselves, to fulfill their individual potential, to live in harmony with nature rather than constantly seeking to overcome its nuisances. Playing the guitar was not convenient. Neither was growing one’s own vegetables or fixing one’s own motorcycle. But such things were seen to have value nevertheless — or rather, as a result. People were looking for individuality again.

Perhaps it was inevitable, then, that the second wave of convenience technologies — the period we are living in — would co-opt this ideal. It would conveniencize individuality.

You might date the beginning of this period to the advent of the Sony Walkman in 1979. With the Walkman we can see a subtle but fundamental shift in the ideology of convenience. If the first convenience revolution promised to make life and work easier for you, the second promised to make it easier to be you. The new technologies were catalysts of selfhood. They conferred efficiency on self-expression.

Consider the man of the early 1980s, strolling down the street with his Walkman and earphones. He is enclosed in an acoustic environment of his choosing. He is enjoying, out in public, the kind of self-expression he once could experience only in his private den. A new technology is making it easier for him to show who he is, if only to himself. He struts around the world, the star of his own movie.

So alluring is this vision that it has come to dominate our existence. Most of the powerful and important technologies created over the past few decades deliver convenience in the service of personalization and individuality. Think of the VCR, the playlist, the Facebook page, the Instagram account. This kind of convenience is no longer about saving physical labor — many of us don’t do much of that anyway. It is about minimizing the mental resources, the mental exertion, required to choose among the options that express ourselves. Convenience is one-click, one-stop shopping, the seamless experience of “plug and play.” The ideal is personal preference with no effort. (...)

I do not want to deny that making things easier can serve us in important ways, giving us many choices (of restaurants, taxi services, open-source encyclopedias) where we used to have only a few or none. But being a person is only partly about having and exercising choices. It is also about how we face up to situations that are thrust upon us, about overcoming worthy challenges and finishing difficult tasks — the struggles that help make us who we are. What happens to human experience when so many obstacles and impediments and requirements and preparations have been removed?

Today’s cult of convenience fails to acknowledge that difficulty is a constitutive feature of human experience. Convenience is all destination and no journey. But climbing a mountain is different from taking the tram to the top, even if you end up at the same place. We are becoming people who care mainly or only about outcomes. We are at risk of making most of our life experiences a series of trolley rides.

by Tim Wu, NY Times |  Read more:
Image: Hudson Christie
[ed. This reminds me of an earlier post (The Philosophy of the Midlife Crisis) and the concept of "telic" and "atelic" activities: the "distinction between “incomplete” and “complete” activities. Building yourself a house is an incomplete activity, because its end goal—living in the finished house—is not something you can experience while you are building it. Building a house and living in it are fundamentally different things. By contrast, taking a walk in the woods is a complete activity: by walking, you are doing the very thing you wish to do. The first kind of activity is “telic”—that is, directed toward an end, or telos. The second kind is “atelic”: something you do for its own sake."]

Saturday, February 17, 2018

The State of Informed Bewilderment

The question that I’ve been asking myself for a long time is, what kind of framing should we have for the dilemmas posed by the technology we’re living through at the moment? I’m interested in information technology, ranging widely from digital technology and the Internet on one hand to artificial intelligence, both weak and strong, on the other hand. As we live through the changes and the disturbances that this technology brings, we’re in a state of mind that was once admirably characterized by Manuel Castells as "informed bewilderment," which was an expression I liked.

We’re informed because we are intensely curious about what’s going on. We're not short of information about it. We endlessly speculate and investigate it in various ways. Manuel’s point was that we actually don’t understand what it means—that’s what he meant by bewilderment. That’s a very good way of describing where we are. The question I have constantly on my mind is, are there frames that would help us to make sense of this in some way?

One of the frames that I’ve explored for a long time is the idea of trying to take a long view of these things. My feeling is that one of our besetting sins at the moment, in relation for example to digital technology, is what Michael Mann once described as the sociology of the last five minutes. I’m constantly trying to escape from that. I write a newspaper column every week, and I've written a couple of books about this stuff. If you wanted to find a way of describing what I try to do, it is trying to escape from the sociology of the last five minutes.

In relation to the Internet and the changes it has already brought in our society, my feeling is that although we don’t know really where it’s heading because it’s too early in the change, we’ve had one stroke of luck. The stroke of luck was that, as a species, we’ve conducted this experiment once before. We’re living through a transformation of our information environment. This happened once before, and we know quite a lot about it. It was kicked off in 1455 by Johannes Gutenberg and his invention of printing by movable type.

In the centuries that followed, that invention not only transformed humanity’s information environment, it also led to colossal changes in society and the world. You could say that what Gutenberg kicked off was a world in which we were all born. Even now, it’s the world in which most of us were shaped. That’s changing for younger generations, but that’s the case for people like me.

Why is Gutenberg useful? He’s useful because he instills in us a sense of humility. The way I’ve come to explain that is with a thought experiment which I often use in talks and lectures. The thought experiment goes like this:
I want you to imagine that we’re back in Mainz, the small town on the Rhine where Gutenberg's press was established. The date is around 1476 or ’78, and you’re working for the medieval version of Gallup or MORI Pollsters. You’ve got a clipboard in your hand and you’re stopping people and saying, "Excuse me, madam, would you mind if I asked you some questions?" And here’s question four: "On a scale of 1 to 5, where 1 is definitely yes and 5 is definitely no, do you think that the invention of printing by movable type will A) undermine the authority of the Catholic Church, B) trigger and fuel a Protestant Reformation, C) enable the rise of something called modern science, D) enable the creation of entirely undreamed of and unprecedented professions, occupations, industries, and E) change our conception of childhood?"
That’s a thought experiment, and the reason you want to do it is because nobody in Mainz in, say, 1478 had any idea that what Gutenberg had done in his workshop would have these effects, and yet we know now that it had all of those effects and many more. The point of the thought experiment is, as I said, to induce a sense of humility. I chose that day in 1478 because we’re about the same distance into the revolution we’re now living through. And for anybody therefore to claim confidently that they know what it means and where it’s heading, I think that’s foolish. That’s my idea of trying to get some kind of perspective on it. It makes sense to take the long view of the present in which we are enmeshed. (...)

I’m obsessed with the idea of longer views of things. In the area I know, which is information technology, the speed with which stuff appears to change has clearly outdistanced the capacity of our social institutions to adapt. They need longer and they’re not getting it.

A historian will say that’s always been the case, and maybe that’s true. I just don’t know. If you’re a cybernetician looking at this, cybernetics had an idea of a viable system. A viable system is one that can handle the complexity of its environment. For a system to be viable, there are only two strategies. One is to reduce the complexity of the environment that the system has to deal with, and that, broadly speaking, has been the way we’ve managed it in the past.

For example, mass production—the standardization of objects and production processes—was a way of reducing the infinite variety of human tastes. Henry Ford started it with the Model T by saying, "You can have any color as long as it’s black." As manufacturing technology—the business of making physical things—became more and more sophisticated, then the industrial system became quite good at widening the range of choice available, and therefore coping with greater levels of variety.

How many different models does Mercedes make? I don't know. Every time I see a Mercedes car, it’s got a different number on it. I used to think Mercedes made maybe twenty cars. My hunch is that they make probably several hundred varieties of particular cars. The same is true for Volkswagen, etc. Because manufacturing became so efficient, it was able to widen the range of choice.

Fundamentally, mass production was a way of coping by reducing the variety that the system had to deal with. Universities are the same. The way they coped with the infinite range of things that people might want to learn about was essentially to say, “You can do this course or you can do that course. We have a curriculum. We have a set of options. We have majors and minor subjects.” They compress the infinite variety they might otherwise have to deal with into something much smaller.

Most of our institutions, the ones that still govern our societies and indeed our industries, evolved in an era when the variety of their information environment was much smaller than it is now. Because of the Internet and related technologies, our information environment is orders of magnitude more complex than institutions had to deal with even fifty years ago, certainly seventy years ago. And what that means in effect is that in this new environment, a lot of our institutions are probably not viable in the cybernetic sense. They simply can’t manage the complexity they have to deal with now.

The question for society and for everybody else is, what happens then? How will these institutions evolve? Will they evolve at all? One metaphor that I have used for thinking about this is that of ecosystems. In other words, we now live in an information ecosystem. If you’re a scientist who studies natural ecosystems, then you can rank them in terms of complexity.

For example, at one level you could say that we have moved from an information environment that was a simple ecosystem, rather like a desert, to something much closer to a rainforest. It's characterized by much more diversity, by a much higher density of publishers and free agents, and by the interactions between them and the speed with which they evolve and change. Most of our social institutions have not evolved to deal with this metaphorical rainforest, in which case we can expect painful changes in institutions over the next fifty to 100 years as they have to reshape in order to stay viable. Universities are suffering from that already.

by John Naughton, Edge | Read more:
Image: uncredited

Mark Lanegan

Michel Keck