Live from Apple’s media special event

You know Apple’s got something big in the works when it went ahead and launched new iPads, iMacs and AirPods in one week with little fanfare. And while this week’s event was a bit of a surprise, Apple’s clearly been building toward it for a long time.

The company sent out invites for “Show Time” a few weeks back, and since then we’ve been piecing together the clues of what we expect will be announced. A new video streaming service will most likely be the centerpiece of the big event. The company has budgeted at least $1 billion for content for the Netflix competitor. A new news service and even a credit card also appear to be on tap for what looks to be a packed show.

Editors Matthew Panzarino and Brian Heater are on the ground in Cupertino today, set to bring you live updates from the Steve Jobs Theater. Bookmark this space. Things kick off at 10AM PT/1PM ET.

Hackers dropped a secret backdoor in Asus’ update software

Hackers targeted and compromised “hundreds of thousands” of Asus computer owners by pushing a backdoored update software tool from the company’s own servers.

The bombshell findings, first reported by Motherboard, say the hackers digitally signed the Asus Live Update tool with one of the company’s own code-signing certificates before pushing it to Asus’ download servers, which hosted the backdoored tool for months last year. The malicious updates were pushed to Asus computers, which have the software installed by default.

TechCrunch can confirm much of Motherboard’s reporting after we learned of the attack some weeks ago from a source with direct knowledge of the incident.

Kaspersky, which first found the backdoored software, said the malicious update tool could affect over a million users. The backdoor would scan a device for a target’s unique MAC address and pull a malicious payload from a command-and-control server.

Motherboard’s reporting said the backdoor was scanning for some 600 MAC addresses, matching what TechCrunch has learned, and was likely targeted to infect only a small number of victims rather than cause infections on a large scale.
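To make the targeting mechanism concrete, here is a purely illustrative Python sketch of the kind of check the researchers describe: read the machine’s MAC address, compare it against a small hardcoded target list and only then reach out to a command-and-control server. The hash comparison, target value and C2 address below are invented for illustration, not recovered from the actual malware.

```python
import hashlib
import uuid

# Hypothetical target list -- the real backdoor reportedly carried roughly
# 600 hardcoded entries; the value below is made up for illustration.
TARGET_MAC_HASHES = {
    "0f5cb8f4a9d2f9e3c1b7a6d4e2f1c0b9",
}

C2_URL = "https://example.invalid/second-stage"  # placeholder, not the real server


def current_mac() -> str:
    """Return this machine's primary MAC address as 12 lowercase hex digits."""
    return format(uuid.getnode(), "012x")


def is_targeted(mac: str) -> bool:
    """Compare an MD5 of the MAC against the hardcoded list.

    Hashing is an assumption made for this sketch; the reporting only says
    the backdoor scanned for specific MAC addresses.
    """
    return hashlib.md5(mac.encode()).hexdigest() in TARGET_MAC_HASHES


if __name__ == "__main__":
    if is_targeted(current_mac()):
        # On a real infection this is where a second-stage payload would be
        # fetched from the command-and-control server; this sketch only prints.
        print(f"match -- would fetch payload from {C2_URL}")
    else:
        print("no match -- stay dormant")
```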

Symantec confirmed Kaspersky’s findings, describing it to us as a software supply chain attack. “Our findings suggest the trojanized version of the software were sent to ASUS customers between June and October,” spokesperson Jennifer Duffourg told TechCrunch.

The compromised file with Asus’ certificate. (Image: Kaspersky)

It’s believed the hackers gained access to Asus’ own certificates to sign the malware through Asus’ sprawling supply chain, a factory line of developers and vendors from around the world trusted to develop software and provide components for Asus’ computers. These so-called supply chain attacks are particularly difficult to detect because they often involve targeting a company insider or infiltrating the company directly.

One of the backdoored files used a certificate created in mid-2018, but one different from the certificates Asus regularly uses.

According to Motherboard’s report, the certificates are still active and have not been revoked, posing a continued risk to Asus customers.

It’s not known exactly what payload was delivered to victims, however.

The backdoor bears a resemblance to the CCleaner supply chain attack, which similarly used a code-signing certificate to hide its malicious component. Some 2.3 million customers were affected by that backdoor, blamed on hackers who reportedly targeted tech giants.

Asus has not informed customers of the compromise since it was discovered earlier this year.

Motherboard said Kaspersky reported the backdoored software on January 31. Taiwan-based Asus is said to have around a 6 percent share of the computer market, according to Gartner, shipping tens of millions of computers each year.

When reached last week about the claims, Asus spokesperson Gary Key had no immediate answer to several questions we had and referred comment to its headquarters.

Kaspersky’s Sarah Kitsos did not comment on the findings.

Fortnite, copyright and the legal precedent that could still mean trouble for Epic Games

A new US Supreme Court decision is pitting entertainers and video game developers against one another in a high-stakes battle royale.

The decision in Fourth Estate Public Benefit Corp. v. Wall-Street.com LLC raises interesting questions about several lawsuits brought against Epic Games, the publisher of popular multiplayer game Fortnite.

In Fortnite, players may make in-game purchases, allowing player avatars to perform popular dance moves (called emotes), such as the Carlton, the Floss, and the Milly Rock.

Five performers, all represented by the same law firm, recently filed separate lawsuits against Epic Games in the Central District of California, each alleging: (i) the performer created a dance; (ii) the dance is uniquely identified with the performer; (iii) an Epic emote is a copy of the dance; and (iv) Epic’s use of the dance infringes the plaintiff’s copyright in the dance move and the dancer’s right to publicity under California statutory and common law.

In short, the dance creators argue that Epic Games used their copyrightable dance moves in violation of existing law.

The building battle

What do these Fortnite lawsuits in California have to do with the US Supreme Court?  US copyright law says that a copyright owner can’t sue for copyright infringement until “registration of the copyright claim has been made” with the US Copyright Office.  Prior to the recent Supreme Court decision in Fourth Estate, lower federal courts split over what this language means.

Some (including the federal courts in California) concluded that a copyright claimant could sue an alleged infringer upon delivering a completed copyright application to the Copyright Office.  Other lower federal courts held that the suit could not be brought until the Copyright Office issued a registration, meaning that the Office viewed the work to be copyrightable.

Because the Copyright Office now takes over seven months to process a copyright application and issue a registration, claimants often chose to sue in California federal courts, which had adopted the quicker “application approach.”  This was the route chosen by the plaintiffs in all five Fortnite cases.

Down (but not out)

On March 4, 2019, in Fourth Estate, the Supreme Court ruled that California federal courts and others following the application approach were wrong, and that a plaintiff cannot sue for copyright infringement unless the Copyright Office has issued a copyright registration.

This had an immediate impact on the Fortnite lawsuits because the Copyright Office had not yet registered any of the dances and, indeed, had found two of the plaintiffs’ dances uncopyrightable.  Recognizing their vulnerability, plaintiffs preemptively withdrew these lawsuits, announcing they would refile the complaints once the Copyright Office issued registrations.

Epic question #1: are the emote dances copyrightable?

The central question is whether the dances used in Fortnite emotes are copyrightable material  protected under US law. If not, then Epic Games’ use of the dances is not copyright infringement, and in-game sales of the particular dances may continue unfettered.

Dance moves fall into a gray area in copyright law.  Copyright law does protect “choreographic works,” but the Copyright Office says that “social dance steps and simple routines” are not protected. What’s the difference between the two? The Copyright Office says that choreography commonly involves “the composition and arrangement of a related series of dance movements and patterns organized into a coherent whole” and “a story, theme, or abstract composition conveyed through movement.”  Dances that don’t meet this standard can’t be copyrighted, even if they are “novel and distinctive.”

So are the Fortnite plaintiffs’ dances “choreographic works” in the eyes of the Copyright Office?  Herein lies a clash of cultures. The performer-plaintiffs undoubtedly feel they have created something not just unique, but a work entitled to protection for which they are owed damages.  But the buttoned-down Copyright Office may not agree.

The Copyright Office has already denied Alfonso Ribeiro a copyright registration for the Carlton, a widely recognized dance popularized by Ribeiro during his days as Carlton Banks on the show The Fresh Prince of Bel-Air. The Office stated that the Carlton was “a simple routine made up of three dance steps” and “is not registrable as a choreographic work.”

The plaintiffs’ lawyer in the Epic Games cases has disclosed that 2 Milly’s application for copyright in the Milly Rock was also rejected, but that a long “variant” of Backpack Kid’s Floss dance was accepted for registration.  The Copyright Office’s view on the other two plaintiffs’ dances has not yet been reported.

If a registration is denied

Denial of a copyright registration is not necessarily a dead end for these lawsuits.  The Copyright Act allows a plaintiff who has been refused a copyright registration by the Copyright Office to still sue a potentially offending party for copyright infringement.  However, the Copyright Office can then join the lawsuit by asserting that the plaintiff’s work is not entitled to copyright protection.

Historically, the federal courts have usually followed the Copyright Office’s view that a work is uncopyrightable.  If the other Fortnite plaintiffs are denied registration, as Ribeiro and 2 Milly were, they will all face an uphill fight on their copyright claims.

Other issues to overcome

Even if the plaintiffs’ copyright claims survive, they face other problems, including originality, which is a requirement of copyright. If their dances are composed of moves contained in dances previously created by others, the plaintiffs may fail to convince the court that their dances are sufficiently original to warrant their own copyright. For example, Ribeiro has stated in interviews that moves by Eddie Murphy, Courteney Cox and Bruce Springsteen inspired him when he created the Carlton.

Ownership of the dance can also be at issue if the dance was created in the course of employment (such as while working as an actor on a television show), as the law may hold that the employer owns the copyright.

Epic question #2: the right to publicity

The plaintiffs’ right to publicity arguments could go further than their copyright infringement claims. The right to publicity claims were based on the assertion that the plaintiffs’ dances are uniquely associated with them and that Epic Games digitally copied the plaintiffs performing the dances, then created code that allows avatars to identically perform the dances. Some side-by-side comparisons of the original dance performances and the Epic emote versions (speed adjusted) look strikingly similar for the few seconds the emote lasts. According to the plaintiffs, this use misappropriated their “identity.”

Their assertion is not as far-fetched as it may seem, given the broad reading courts in California have given to the state’s common law and statutory publicity law.  For example, the Ninth Circuit has previously ruled that an ad featuring a robot with a wig that turned letters on a board wrongfully took Vanna White’s identity, and that animatronic robots sitting at airport bars vaguely resembling “Norm” and “Cliff,” characters from the popular TV show Cheers, misappropriated the identities of the actors who played the roles, George Wendt and John Ratzenberger.

It remains an open question whether the courts will be willing to take another step and find that a game avatar with no physical resemblance to a performer misappropriates the performer’s publicity rights just because the avatar does a dance popularly associated with the performer.

Once the Copyright Office announces its decisions on the outstanding copyright applications, the Fortnite plaintiffs may choose to refile their cases, and this question could eventually be decided.

Scalyr launches PowerQueries for advanced log management

Log management service Scalyr today announced the beta launch of PowerQueries, its new tools for letting its users create advanced search operations as they manage their log files and troubleshoot potential issues. The new service allows users to perform complex actions to group, transform, filter and sort their large data sets, as well as to create table lookups and joins. The company promises that these queries will happen just as fast as Scalyr’s standard queries and that getting started with these more advanced queries is pretty straightforward.
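Scalyr hasn’t published the query syntax in this announcement, but the operations it lists — grouping, filtering, sorting, table lookups and joins — map onto familiar data-wrangling steps. Here is a rough Python sketch of those same steps over a handful of parsed log records; it is only a conceptual illustration, not Scalyr’s actual PowerQueries language, and the field names and records are made up.

```python
from collections import Counter

# A few parsed web-server log records (stand-ins for lines a log service has ingested).
logs = [
    {"endpoint": "/checkout", "status": 502, "latency_ms": 1340},
    {"endpoint": "/search",   "status": 200, "latency_ms": 95},
    {"endpoint": "/checkout", "status": 500, "latency_ms": 1810},
    {"endpoint": "/login",    "status": 503, "latency_ms": 700},
]

# Lookup table to join against: which (made-up) team owns which endpoint.
owners = {"/checkout": "payments", "/login": "identity", "/search": "discovery"}

# Filter: keep only server errors.
errors = [r for r in logs if r["status"] >= 500]

# Group + aggregate: count errors per endpoint.
per_endpoint = Counter(r["endpoint"] for r in errors)

# Join + sort: attach the owning team and sort by error count, descending.
report = sorted(
    ((endpoint, count, owners.get(endpoint, "unknown"))
     for endpoint, count in per_endpoint.items()),
    key=lambda row: row[1],
    reverse=True,
)

for endpoint, count, team in report:
    print(f"{endpoint}: {count} errors (owner: {team})")
```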

Scalyr founder and chairman Steve Newman argues that the company’s competitors may offer similar tools, but that “their query languages are too complex, hard-to-learn and hard-to-use.” He also stressed that Scalyr made a conscious decision not to use machine learning to power this and its other services for helping admins and developers prioritize issues, and instead focused on its query language and on making it easier for users to manage their logs that way.

“So we thought about how we could leverage our strengths — real-time performance, ease-of-use and scalability — to provide similar but better functionality,” he said in today’s announcement. “As a result, we came up with a set of simple but powerful queries that address advanced use cases while improving the user experience dramatically. Like the rest of our solution, our PowerQueries are fast, easy-to-learn and easy-to-use.”

Current Scalyr customers cover a wide range of verticals. They include the likes of NBC Universal, Barracuda Networks, Spiceworks, Johns Hopkins University, Giphy, OkCupid and Flexport. Currently, Scalyr has over 300 paying customers. As Newman stressed, more than 4,500 employees from these customers regularly use the service, which he attributes to the fact that it’s relatively easy to use, thanks to Scalyr’s focus on usability.

The company raised its last funding round — a $20 million Series A round — back in 2017. As Scalyr’s newly minted CEO Christine Heckart told me, though, the company is currently seeing rapid growth and has quickly added headcount in recent months to capitalize on this opportunity. Given that, I wouldn’t be surprised if we saw Scalyr raise another round in the not-so-distant future, especially considering that the log management market itself is growing rapidly (and has changed quite a bit since Scalyr launched back in 2011) as more companies start digital transformation projects, which often allow them to replace some of their legacy IT tools with more modern systems.


Telegram adds ‘delete everywhere’ nuclear option — killing chat history

Telegram has added a feature that lets a user delete messages in one-to-one and/or group private chats, after the fact, and not only from their own inbox.

The new ‘nuclear option’ delete feature allows a user to selectively delete their own messages and/or messages sent by any/all others in the chat. They don’t even have to have composed the original message or begun the thread to do so. They can just decide it’s time.

Let that sink in.

All it now takes is a few taps to wipe all trace of a historical communication — from both your own inbox and the inbox(es) of whoever else you were chatting with (assuming they’re running the latest version of Telegram’s app).

Just over a year ago Facebook’s founder Mark Zuckerberg was criticized for silently and selectively testing a similar feature by deleting messages he’d sent from his interlocutors’ inboxes — leaving absurdly one-sided conversations. The episode was dubbed yet another Facebook breach of user trust.

Facebook later rolled out a much diluted Unsend feature — giving all users the ability to recall a message they’d sent but only within the first 10 minutes.

Telegram has gone much, much further. This is a perpetual, universal unsend of anything in a private chat.

The “delete any message in both ends in any private chat, anytime” feature has been added in an update to version 5.5 of Telegram — which the messaging app bills as offering “more privacy”, among a slate of other updates including search enhancements and more granular controls.

To delete a message from both ends, a user taps on the message, selects ‘delete’ and is then offered a choice between ‘delete for [the name of the other person in the chat]’ (or for ‘everyone’ in a group) and ‘delete for me’. Selecting the former deletes the message everywhere, while the latter just removes it from your own inbox.
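For developers, that same fork is exposed in third-party Telegram client libraries as a flag on the delete call. Below is a minimal sketch using the open-source Telethon library for Python; the revoke parameter is our reading of Telethon’s delete_messages API and worth double-checking against its docs, and the credentials, chat and message IDs are placeholders.

```python
# pip install telethon  -- third-party Telegram client library
import asyncio

from telethon import TelegramClient

API_ID = 12345                 # placeholder credentials from my.telegram.org
API_HASH = "0123abcd"          # placeholder
CHAT = "@some_private_chat"    # placeholder peer
MESSAGE_IDS = [101, 102]       # placeholder IDs of the messages to remove


async def purge(delete_for_everyone: bool) -> None:
    async with TelegramClient("session", API_ID, API_HASH) as client:
        # revoke=True roughly maps to 'delete for everyone';
        # revoke=False roughly maps to 'delete for me'.
        await client.delete_messages(CHAT, MESSAGE_IDS, revoke=delete_for_everyone)


if __name__ == "__main__":
    asyncio.run(purge(delete_for_everyone=True))
```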

Explaining the rationale for adding such a nuclear option via a post to his public Telegram channel yesterday, founder Pavel Durov argues the feature is necessary because of the risk of old messages being taken out of context — suggesting the problem is getting worse as the volume of private data stored by chat partners continues to grow exponentially.

“Over the last 10-20 years, each of us exchanged millions of messages with thousands of people. Most of those communication logs are stored somewhere in other people’s inboxes, outside of our reach. Relationships start and end, but messaging histories with ex-friends and ex-colleagues remain available forever,” he writes.

“An old message you already forgot about can be taken out of context and used against you decades later. A hasty text you sent to a girlfriend in school can come haunt you in 2030 when you decide to run for mayor.”

Durov goes on to claim that the new wholesale delete gives users “complete control” over messages, regardless of who sent them.

However, that’s not really what it does. More accurately, it removes control from everyone in any private chat and opens the door to the most paranoid, the lowest common denominator, and/or a sort of general entropy/anarchy, allowing anyone in any private thread to edit or even completely nuke the chat history whenever they wish.

The feature could allow for self-serving, selectively silent and/or malicious edits intended to gaslight or screw with others, such as by making them look mad or bad. (A quick screengrab later and a ‘post-truth’ version of a chat thread is ready for sharing elsewhere, where it could be passed off as a genuine conversation even though it’s manipulated and therefore fake.)

Or else the motivation for editing chat history could be a genuine concern over privacy, such as to be able to remove sensitive or intimate stuff — say after a relationship breaks down.

Or just for kicks/the lolz between friends.

Either way, whoever deletes first seizes control of the chat history — taking control away from everyone else in the process. RIP consent. This is possible because Telegram’s implementation of the super delete feature covers all messages, not just your own, and literally removes all trace of the deleted comms.

So unlike rival messaging app WhatsApp, which also lets users delete a message for everyone in a chat after the fact of sending it (though in that case the delete everywhere feature is strictly limited to messages a person sent themselves), there is no notification automatically baked into the chat history to record that a message was deleted.

There’s no record, period. The ‘record’ is purged. There’s no sign at all there was ever a message in the first place.

We tested this — and, well, wow.

It’s hard to think of a good reason not to create, at the very least, a record that a message was deleted, which would offer a check on misuse.

But Telegram has not offered anything. Anyone can secretly and silently purge the private record.

Again, wow.

There’s also no way for a user to recall a deleted message after deleting it (even the person who hit the delete button). At face value it appears to be gone for good. (A security audit would be required to determine whether a copy lingers anywhere on Telegram’s servers for standard chats; only its ‘secret chats’ feature uses end-to-end encryption which it claims “leave no trace on our servers”.)

In our tests on iOS we also found that no notification is sent when a message is deleted from a Telegram private chat, so other people in an old convo might simply never notice changes have been made, or not until long after. After all, human memory is far from perfect, and old chat threads are exactly the sort of fast-flowing communication medium where it’s really easy to forget the exact details of what was said.

Durov makes that point himself in defence of enabling the feature, arguing in favor of it so that silly stuff you once said can’t be dredged back up to haunt you.

But it cuts both ways. (The other way being the ability for the sender of an abusive message to delete it and pretend it never existed, for example, or for a flasher to send and subsequently delete dick pics.)

The feature is so powerful there’s clearly massive potential for abuse. Whether that’s by criminals using Telegram to sell drugs or traffic other stuff illegally, and hitting the delete everywhere button to cover their tracks and purge any record of their nefarious activity; or by coercive/abusive individuals seeking to screw with a former friend or partner.

The best way to think of Telegram now is that all private communications in the app are essentially ephemeral.

Anyone you’ve ever chatted to could decide to delete everything you said (or they said) and go ahead without your knowledge let alone your consent.

The lack of any notification that a message has been deleted will certainly open Telegram to accusations it’s being irresponsible by offering such a nuclear delete option with zero guard rails. (And, indeed, there’s no shortage of angry comments on its tweet announcing the feature.)

Though the company is no stranger to controversy and has structured its business intentionally to minimize the risk of it being subject to any kind of regulatory and/or state control, with servers spread opaquely all over the world, and a nomadic development operation which sees its coders regularly switch the country they’re working out of for months at a time.

Durov himself acknowledges there is a risk of misuse of the feature in his channel post, where he writes: “We know some people may get concerned about the potential misuse of this feature or about the permanence of their chat histories. We thought carefully through those issues, but we think the benefit of having control over your own digital footprint should be paramount.”

Again, though, that’s a one-sided interpretation of what’s actually being enabled here. Because the feature inherently removes control from anyone it’s applied to. So it only offers ‘control’ to the person who first thinks to exercise it. Which is in itself a form of massive power asymmetry.

For historical chats the person who deletes first might be someone with something bad to hide. Or it might be the most paranoid person with the best threat awareness and personal privacy hygiene.

But suggesting the feature universally hands control to everyone simply isn’t true.

It’s an argument in line with a libertarian way of thinking that lauds the individual as having agency — and therefore seeks to empower the person who exercises it. (And Durov is a long time advocate for libertarianism so the design choice meshes with his personal philosophy.)

On a practical level, the presence of such a nuclear delete on Telegram’s platform arguably means the only sensible option for users who don’t want to abandon the platform is to proactively delete all private chats on a regular and rolling basis — to minimize the risk of potential future misuse and/or manipulation of their chat history. (Albeit, what doing that will do to your friendships is a whole other question.)

Users may also wish to backup their own chats because they can no longer rely on Telegram to do that for them.

While, at the other end of the spectrum — for those who want to be really sure they totally nuke all trace of a message — there are a couple of practical pitfalls that could throw a spanner in the works.

In our tests we found Telegram’s implementation did not delete push notifications. So with recently sent and deleted messages it was still possible to view the content of a deleted message via a persisting push notification even after the message itself had been deleted within the app.

Though of course, for historical chats — which is where this feature is being aimed; aka rewriting chat history — there’s not likely to be any push notifications still floating around months or even years later to cause a headache.

The other major issue is that the feature is unlikely to function properly on earlier versions of Telegram. So if you go ahead and ‘delete everywhere’, there’s no way to try again if a message was not successfully purged because someone in the chat was still running an older version of the app.

Plus of course if anyone has screengrabbed your chats already there’s nothing you can do about that.

In terms of wider impact, the nuclear delete might also have the effect of encouraging more screengrabbing (or other backups) — as users hedge against future message manipulation and/or purging. Or to make sure they have a record of abuse.

Which would just create more copies of your private messages in places you can’t control at all, and where they could potentially leak if the person creating the backups doesn’t secure them properly. So the whole thing risks being counterproductive to privacy and security, really.

Durov claims he’s comfortable with the contents of his own Telegram inbox, writing on his channel that “there’s not much I would want to delete for both sides” — while simultaneously claiming that “for the first time in 23 years of private messaging, I feel truly free and in control”.

The truth is the sensation of control he’s feeling is fleeting and relative.

In another test we performed we were able to delete private messages from Durov’s own inbox, including missives we’d sent to him in a private chat and one he’d sent us. (At least, in so far as we could tell — not having access to Telegram servers to confirm. But the delete option was certainly offered and content (both ours and his) disappeared from our end after we hit the relevant purge button.)

Only Durov could confirm for sure that the messages have gone from his end too. And most probably he’d have trouble doing so as it would require incredible memory for minor detail.

But the point is if the deletion functioned as Telegram claims it does, purging equally at both ends, then Durov was not in control at all because we reached right into his inbox and selectively rubbed some stuff out. He got no say at all.

That’s a funny kind of agency and a funny kind of control.

One thing certainly remains in Telegram users’ control: The ability to choose your friends — and choose who you talk to privately.

Turns out you need to exercise that power very wisely.

Otherwise, well, other encrypted messaging apps are available.