As the Trump administration prepares for an economic conference next week in Bahrain, the first leg of its Middle East peace plan, it is exerting immense pressure on two of America’s closest Arab allies to take part in a process seen as toxic by their own publics. Rather than being advocates for the administration’s undisclosed “ultimate deal,” Jordan and Egypt are reluctant guests at the conference. For their part, Palestinians are also pressing Arab states to boycott the economic workshop, which many Arabs fear will offer investment projects to Palestinians in return for recognizing Israeli sovereignty over Jerusalem and the West Bank – a “selling off” of Palestinian statehood.
Last month, after the British government dropped a proposed definition of Islamophobia, the Muslim Council of Britain (MCB), the largest Islamic organization in the United Kingdom, called for the ruling Conservative Party to be investigated for Islamophobia.
A year-long study just completed and released Monday in London by an independent international tribunal concludes that China is killing prisoners in order to harvest their organs. Most of the victims are detainees from the Falun Gong religious movement.
Get ready for summer in the city, TechCrunch-style. We just released a fresh batch of tickets to the 14th Annual TechCrunch Summer Party. Available on a first-come, first-served basis, tickets to our popular event sell out quickly, and they’ll be gone before you know it. Don’t wait — buy your ticket today.
Join us for TechCrunch’s fabulous summer fete at Park Chalet — San Francisco’s coastal beer garden — where you can enjoy ocean views, refreshing drinks and delicious appetizers. It’s a wonderful way to relax and celebrate the entrepreneurial spirit with more than 1,000 members of the startup community.
It’s also a wonderful way to meet your next investor, co-founder or — who knows? You’ll find startup magic in between the drinks, the games, the food and the fun. Opportunity happens at TechCrunch parties.
Check out the party particulars:
- When: July 25 from 5:30 p.m. – 9:00 p.m.
- Where: Park Chalet in San Francisco
- How much: $95
Come and join the summer fun. Connect with community and opportunity. As always, you’ll have a chance to win great door prizes — like TechCrunch swag, Amazon Echos and tickets to Disrupt San Francisco 2019.
Tickets sell out quickly, so don’t wait. Buy your 14th Annual Summer Party ticket today.
Did you try to buy a ticket and come up empty? We release tickets to the Summer Party on a rolling basis. Sign up here, and we’ll let you know when the next batch goes on sale.
Is your company interested in sponsoring or exhibiting at the TechCrunch 14th Annual Summer Party? Contact our sponsorship sales team by filling out this form.
“As long as she’s working for the state of Alaska, people support her,” says Kathryn Bowerman, taking a break from crocheting in Bethel’s public library. The support from Bethel’s voters is a window into Senator Murkowski’s political viability – and her vulnerability. In fact, Senator Murkowski’s record in Washington as a moderate Republican who at times defies her party – and President Donald Trump – infuriates many Alaska Republicans.
Artificial intelligence is allowing us all to consider surprising new ways to simplify the lives of our customers. As a product developer, your central focus is always on the customer. But new problems can arise when the specific solution under development helps one customer while alienating others.
We tend to think of AI as an incredible dream assistant for our lives and business operations, but that’s not always the case. Designers of new AI services should consider in what ways, and for whom, these services might be annoying, burdensome or problematic, and whether those affected are the direct customer or others intertwined with the customer. When an AI service that makes tasks easier for our customers ends up making things more difficult for others, that outcome can ultimately cause real harm to our brand perception.
Let’s consider one personal example taken from my own use of a service from x.ai that provides AI assistants named Amy and Andrew Ingram. Amy and Andrew help schedule meetings for up to four people. This service solves the very relatable problem of scheduling meetings over email, at least for the person who is trying to do the scheduling.
After all, who doesn’t want a personal assistant to whom you can simply say, “Amy, please find the time next week to meet with Tom, Mary, Anushya and Shiveesh.” In this way, you don’t have to arrange a meeting room, send the email, and go back and forth managing everyone’s replies. My own experience showed that while it was easier for me to use Amy to find a good time to meet with my four colleagues, it soon became a headache for those other four people. They resented me for it after being bombarded by countless emails trying to find some mutually agreeable time and place for everyone involved.
Automotive designers are another group that’s incorporating all kinds of new AI systems to enhance the driving experience. For instance, Tesla recently updated its autopilot software to allow a car to change lanes automatically when it sees fit, presumably when the system interprets that the next lane’s traffic is going faster.
In concept, this idea seems advantageous to the driver, who can make a safe entrance into faster traffic while being relieved of the cognitive burden of changing lanes manually. Furthermore, letting the Tesla system change lanes takes away the urge to play Speed Racer, or the edge of competitiveness one may feel on the highway.
However, drivers in other lanes who are forced to react to the Tesla autopilot may be annoyed if the Tesla jerks, slows down or behaves outside the normal realm of what people expect on the freeway. Moreover, if the autopilot did not recognize that an approaching driver was operating at a high rate of speed when it decided to change lanes, that driver has good reason to be irritated. We can all relate to driving 75 mph in the fast lane, only to have someone suddenly pull in front of us at 70 as if they were clueless that the lane was moving at 75.
On two-lane highways that are not busy, the Tesla software might work reasonably well. However, in my experience driving around the congested freeways of the Bay Area, the system performed horribly whenever it changed lanes in crowded traffic, and I knew it was angering other drivers most of the time. Even without knowing those irate drivers personally, I care enough about driving etiquette to change lanes politely without getting the finger for doing so.
Another example from the internet world involves Google Duplex, a clever feature for Android phone users that allows AI to make restaurant reservations. From the consumer point of view, having an automated system to make a dinner reservation on one’s behalf sounds excellent. It is advantageous to the person making the reservation because, theoretically, it will save the burden of calling when the restaurant is open and the hassle of dealing with busy signals and callbacks.
However, this tool is also potentially problematic for the restaurant worker who answers the phone. Even though the system may introduce itself as artificial, the burden shifts to the restaurant employee to adapt and master a new and more limited interaction to achieve the same goal — making a simple reservation.
On the one hand, Duplex is bringing customers to the restaurant, but on the other hand, the system is narrowing the scope of interaction between the restaurant and its customer. The restaurant may have other tables on different days, or it may be able to squeeze you in if you leave early, but the system might not handle exceptions like this. Even the idea of an AI bot bothering the host who answers the phone doesn’t seem quite right.
As you think about making the lives of your customers easier, consider how the assistance you are dreaming about might be more of a nightmare for everyone else associated with your primary customer. If there is any question about the negative experience of anyone related to your AI product, explore that experience further to determine whether there is another, better way to delight your customer without angering their neighbors.
From a user-experience perspective, developing a customer journey map can be a helpful way to explore the actions, thoughts and emotional experiences of your primary customer or “buyer persona.” Identify the touchpoints in which your system interacts with innocent bystanders who are not your direct customers. For those people unaware of your product, explore their interaction with your buyer persona, specifically their emotional experience.
An aspirational goal should be to delight this adjacent group of people enough that they move toward being prospects and, eventually, become your customers as well. You can also use participant ethnography to analyze the innocent bystander in relation to your product. This is a research method that combines participation with observation of people as they interact with the product and its surrounding processes.
A guiding design inspiration for this research could be, “How can our AI system behave in such a way that everyone who might come into contact with our product is enchanted and wants to know more?”
That’s just human intelligence, and it’s not artificial.
The world just celebrated the 75th anniversary of the D-day landings. But in the lifetimes of those who survived the Second World War, anti-Semitism is again a threat to Jews in Germany.
On Sunday, more than a quarter of Hong Kong’s residents, or about 2 million people, were out on the streets to defend the territory’s much-cherished rule of law. It was the third protest in eight days against a proposed extradition treaty sought by China. Black, the color worn by many of the protesters, was made popular in December when churchgoers in Hong Kong wore it over two Sundays in solidarity with fellow Christians in the mainland suffering a government crackdown on religion.
Nearly 8,000 Amazon employees, many in prestigious engineering and design roles, have recently signed a petition calling on Jeff Bezos and the Amazon Board of Directors to dramatically shift the giant company’s approach to climate change.
By deploying a kind of corporate social disobedience such as speaking out dramatically at shareholders meetings, and by engaging in a variety of community organizing tactics, the “Amazon Employees for Climate Justice” group has quickly become a leading example of a growing trend in the tech world: tech employees banding together to take strong ethical stances in defiance of their powerful employers.
The public actions taken by these employees and groups have been covered widely by the news media. For my TechCrunch series on the ethics of technology, however, I wanted to better understand what participating actively in this campaign has been like for some of the individuals involved.
How are employees in high-pressure jobs balancing their professional roles and responsibilities with being actively, publicly in defiance of their employers on a high-profile issue? How do leaders in these efforts explain the philosophy underlying their ethical stance? And how likely are their ideas to spread throughout Amazon and beyond – perhaps particularly among younger tech workers?
I recently spoke with a handful of the Amazon employees most actively involved in the Employees for Climate Justice campaign, all of whom inspired me, in similar and different ways. Below is the first of two interviews I’ll publish here. This one is with Rajit Iftikhar, a young software engineer from New York who moved to Seattle to work for Amazon after earning his Bachelor of Engineering in Computer Science from Cornell in 2016.
Rajit struck me as a humble and precociously wise young man who could be a role model — though he seems to have little interest in singling himself out that way — for thousands of other software engineers and technologists at Amazon and beyond.
Greg Epstein: Your personal story has been key to your organizing with Amazon Employees for Climate Justice. Can you start by saying a bit about why?
Rajit Iftikhar: A lot of why I care about climate justice is informed by me having parents from another country that is going to be very adversely affected by [climate change]. Countries like Bangladesh are going to suffer some of the worst consequences from climate change, because of where the country’s located, and the fact that it doesn’t have the resources to adapt.
Bangladesh is already feeling the effects of the climate crisis; it is much harder for people to live in the rural areas, [people are] being forced into the cities. Then you have the cyclones that the climate crisis is going to bring, and rising sea levels and flooding.
So, my background [emphasizes, for me] how unjust our emissions are in causing all these problems for people in other countries. And even for communities of color within our country who are going to be disproportionately impacted by the emissions that largely richer people [cause].
I played Pokémon GO this weekend, because I was babysitting my nephew, and I couldn’t help but be reminded what a cultural force it was when it launched three years ago. Hundreds massed near San Francisco’s Ocean Beach every day to hunt. Huge crowds sprinted through Central Park to catch a Vaporeon. Disapproving finger-pointers penned whiny moral panics and sermons about how it encouraged crime and provoked danger.
One thing that was not controversial, though, was the belief that it was a harbinger, the thin edge of the AR wedge, only the first of many crossover games and universes. If you had told anyone then that, three years later, Pokémon GO would remain the only real example of a widely successful consumer AR/VR app, you would have been laughed out of most rooms.
And yet, here we are. Pokémon GO is still a hit (and remains fun!) but was not the vanguard of an AR/VR onslaught. Magic Leap — which by 2016 had already raised $1.4 billion! — remains at best a disappointment. Which is almost too kind a word for Oculus. AR as an industry has, to oversimplify, largely pivoted to business / work / industrial uses, in the hopes an actual market appears there. What happened?
Note that this isn’t unique to augmented / mixed / virtual reality. 2016 was also the year that Meerkat, 2015’s hottest app, died, because livestreaming video, while it has its valid niche, was not the future of communications. It was also the year that chatbots were going to take over the world. You may have noticed that in fact they did not.
Looking back, is it really that surprising that Pokémon GO was a one-off, rather than the first ripple of a massive wave of change? Or that AR/VR have faltered and failed to meet expectations? Or that Meerkat and chatbots did not define how we would communicate in the future?
Of course it’s not. The history of innovation is a history of throwing new things at the wall and seeing if they stick — or, more accurately, throwing them into a crowd and seeing how the crowd reacts. Most bets on the big, household-name tech startups of the last two decades weren’t bets on their technologies but on how people would react to them. This especially applies to this year’s crop of IPOs — Uber, Lyft, Slack, Pinterest — but also to Twitter and Facebook, and even, to a lesser extent, Apple and Amazon. (Though interestingly not so much to Google, beyond the insight “people will use the Internet to search for stuff.”)
Of course, sometimes the crowd ignores the offering flung into its midst. Or it chooses one from an apparently similar array and turns its collective back on the rest. Are we really so surprised by this aspect of human nature?
We shouldn’t be. But to an extent we are — because, at least until 2016, the Valley’s techno-optimism had pervaded the rest of the world as well, journalists and politicians and the like. It was based on two pillars:
- the genuine belief that technology was transforming everything around the world, including politics, culture, and finance, and that these changes were almost invariably net positive
- the surprisingly hard-headed financial analysis of venture capitalism, whose business model consists of being maximally optimistic about 100 different things while knowing that only 10 will actually succeed and 1 will succeed wildly, because in tech that one wild success more than pays for the 90 abject failures
I don’t need to tell you that 1) is, at best, way more complicated than it seemed, and at worst horrifyingly wrong, as the worst aspects of politics / culture / finance as we knew them turned out to be ferociously intransigent and capable of infecting the tech industry right back; meanwhile, the world has wised up to 2), now correctly recognizing VC optimism as a business model rather than a prophecy.
That doesn’t mean technology has lost its potential to be transformative in a positive way. But it means we’ve all grown more skeptical, more judicious, less reflexively optimistic. This is no bad thing. It means, for instance, if and when the next AR/VR hit finally arrives, we should all be better able to distinguish between silly moral panics and truly worrying consequences. At least let’s hope so. Because while the former are very real, so are the latter.