The flight back from San Francisco to Europe was rather tedious. I used the time to write a summary of the third day of the Usenix Enigma 2017 conference to complete my previous coverage of day 1 and day 2.
Matt Jones from WhatsApp was the first to talk on the third and final day of the Usenix Enigma conference. He presented WhatsApp's work on spam reduction. One would think that end-to-end encryption, which hides the message content from the messaging provider, would make things much worse spam-wise. But fret not: Matt demonstrated that WhatsApp in fact reduced spam by 75% with a project that ran in parallel to the encryption project. The inherent cost of creating a WhatsApp account (the phone number linked to the account) tilted the balance in their favor. Strict monitoring of accounts and continuously adapted classifiers let them identify spamming accounts extremely fast, before they could do much harm, which made the spamming business unprofitable. Beyond the interesting topic of his talk, it has to be noted that Matt is a most gifted speaker who left a lasting impression on me.
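To make the idea of content-free spam detection a bit more concrete, here is a purely hypothetical sketch of behavioural account scoring. The features, weights and threshold are invented for illustration and have nothing to do with WhatsApp's actual system, which Matt did not describe at this level of detail.

```python
from dataclasses import dataclass

# Hypothetical behavioural signals for an account; message content is
# never inspected, in keeping with end-to-end encryption.
@dataclass
class AccountActivity:
    account_age_hours: float
    messages_sent: int
    unique_recipients: int
    replies_received: int

def spam_score(a: AccountActivity) -> float:
    """Score between 0 and 1; higher means more spammer-like."""
    # Young accounts that blast many distinct recipients and get almost
    # no replies look far more like spammers than like normal users.
    volume = a.messages_sent / max(a.account_age_hours, 1.0)
    fanout = a.unique_recipients / max(a.messages_sent, 1)
    silence = 1.0 - min(a.replies_received / max(a.unique_recipients, 1), 1.0)
    return 0.4 * min(volume / 50.0, 1.0) + 0.3 * fanout + 0.3 * silence

# A two-hour-old account that messaged 380 strangers and got 3 replies.
suspicious = AccountActivity(account_age_hours=2, messages_sent=400,
                             unique_recipients=380, replies_received=3)
if spam_score(suspicious) > 0.8:  # invented threshold
    print("flag account for review")
```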
I was curious to learn what Damian Menscher, head of the Google DDoS prevention team, would be talking about, since the program announced a presentation about a DDoS honeypot. Who would set up a site in order to attract a DDoS attack if not Google? As it happened, the honeypot turned out to be the KrebsOnSecurity website, which had been attacked by the first appearance of the Mirai botnet, drawing over 620 Gbit/s of traffic. Brian Krebs had been forced to leave the umbrella of his former free DDoS shield during the attack. He sought refuge with Google's Project Shield, which aims to protect news sites and journalists from DDoS attacks. After brief internal discussions, Damian's team welcomed Brian and set out to move his site behind the Shield. But the attack was ongoing (so control over the website on the domain, which was necessary in order to follow due process, could not actually be proven), the DNS record of KrebsOnSecurity was deep-frozen, the hosting provider was off for the weekend, and it must have been a very hectic time for Damian's team. Great war stories ensued, and the general impression was that when DDoS attacks shift towards the application layer, Google is very well prepared to protect its network. After the talk, I took the microphone and asked whether he saw a future where everybody seeking protection from application-layer DDoS attacks would be forced to hand over their encryption keys to one of two or three global DDoS defense networks. He confirmed that this was the future he saw.
Unlike everybody else at the conference, Lorrie Faith Cranor reads privacy notices. She has read, studied and classified a great many of them. In fact, despite their incomprehensible legalese, they are important, and once a company writes something down in such a document, it becomes binding. She talked about notable examples, presented data on when people actually read summaries of privacy notices (depending on the moment they are displayed), and demonstrated ways to depict privacy notices in tabular form or as logos, leading to better user awareness of the information they were giving away.
Franzi Roesner from the University of Washington returned to the persistent problems journalists (and lawyers) have with online security and privacy. She works closely with journalists and lawyers to learn how they work and how they interact with their sources (and clients). Journalists want to make sure a source faces only a minimal barrier when submitting information. Telling a source to install a complicated messaging app or to learn PGP is out of the question. Franzi set out to research a solution to this problem: her team designed and tested an email add-on that handles encryption keys in a very easy way via keybase.io, and it was warmly welcomed by the journalists who tested the prototype. One went as far as to state that PGP was usually a rite of passage, and her tool simplified the process so much that it felt almost a bit too easy. My impression was that Franzi had gone from identifying a problem with a systematic gap analysis to solving that problem in an exemplary way.
Sunny Consolvo from Google's Privacy and User Experience team talked about the security practices of people exposed to abuse by an intimate partner. Roughly a quarter of US women and about one in ten men have this experience during their lifetime, and on average it takes seven attempts to escape the situation. Sunny performed a qualitative study with abuse victims. She met a group of people who adopted two-factor authentication very, very fast, as it allowed them to protect their contacts and friends from the abusers. Other practices used by the victims included cleaning browsing histories and caches, creating new online accounts for social networks, and so on. The presentation left the audience stunned, for all this work on privacy and security was clearly helping people, and I got the feeling people were actively brainstorming during lunch about what else could be done to improve the situation of these poor souls.
Andrea Little Limbago is the chief social scientist at Endgame. Isn't it impressive that a tech company employs a chief social scientist these days? As an Arts and Humanities scholar, I welcome that development with all my heart. Andrea talked about formal and informal norms being applied to the internet on a global scale. She pointed out where the laws defining these norms need to be rewritten (namely in the US) and areas where US legislation could help shape better norms on a global level.
Tudor Dumitras and his team at the University of Maryland no longer think it feasible to have humans read the vast research literature on Android malware. So they developed a tool called FeatureSmith that applies AI capable of semantic analysis to a body of several hundred research papers. The tool interprets the research and derives classifiers from the papers, which they then put to the test against a state-of-the-art malware detector with manually created classifiers. FeatureSmith performed equally well in their tests, which seemed to impress the general audience.
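To give a flavor of what such a pipeline might look like, here is a deliberately simplified sketch. The real FeatureSmith performs proper semantic analysis of the papers; the keyword-based scoring below, with an invented mini-corpus, only illustrates the general shape of the idea: mine behaviours that co-occur with malware-related terms and rank them as candidate detection features.

```python
import re
from collections import Counter

# Toy corpus standing in for sentences extracted from research papers.
corpus = [
    "The malware requests the SEND_SMS permission to send premium messages.",
    "Benign applications rarely read the device IMEI at install time.",
    "The trojan loads code dynamically via DexClassLoader after installation.",
    "Malware samples frequently abuse the READ_CONTACTS permission.",
]

MALWARE_TERMS = {"malware", "trojan", "spyware"}
# Hypothetical patterns for behaviours we consider candidate features.
BEHAVIOUR_PATTERNS = [
    r"[A-Z_]{4,}",                      # permission-like tokens, e.g. SEND_SMS
    r"DexClassLoader|IMEI|reflection",  # known API/identifier names
]

scores = Counter()
for sentence in corpus:
    words = set(w.lower().strip(".,") for w in sentence.split())
    mentions_malware = bool(words & MALWARE_TERMS)
    for pattern in BEHAVIOUR_PATTERNS:
        for match in re.findall(pattern, sentence):
            # Reward behaviours that appear in malware-related sentences,
            # penalize those that only show up in benign contexts.
            scores[match] += 2 if mentions_malware else -1

print("candidate features:", scores.most_common())
```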
Professor David Evans from the University of Virginia demonstrated the fragility of machine learning systems, which can easily be fooled by adversaries feeding slightly manipulated data into the self-learning process they rely on. He demonstrated some typical evasion methods and proposed ideas to counter the known evasion attempts.
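A minimal, self-contained illustration of this kind of evasion attack, in the spirit of the fast gradient sign method, is sketched below against a toy linear classifier. The data and model are synthetic and not taken from David's talk; the sketch only shows how a small, targeted perturbation can push a confidently "malicious" sample towards a benign verdict.

```python
import numpy as np

# Toy illustration of an evasion attack on a linear classifier. Data and
# model are synthetic; this is only a sketch of the general idea.
rng = np.random.default_rng(0)

# Synthetic "benign vs. malicious" samples with 20 numeric features.
X = rng.normal(size=(200, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic regression model with plain gradient descent.
w = np.zeros(20)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / len(y)

# Pick the sample the model is most confident is malicious (class 1).
x = X[np.argmax(X @ w)]
print("original score: ", sigmoid(x @ w))

# Evasion: nudge every feature a small step in the direction that lowers
# the malicious score -- the sign of the gradient of the logit w.r.t. x,
# which for a linear model is simply the weight vector w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
print("perturbed score:", sigmoid(x_adv @ w))
```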
John Launchbury, director of the Information Innovation Office (I2O) at DARPA, started the last block of talks, which focused on developments and initiatives within government bodies. John presented a DARPA perspective on the state of cyber security. He had barely started his talk when tweets popped up noting the stark contrast in the use of the word "Cyber" between the public and the private sector. But that did not really matter, for his work at DARPA gave him a most impressive insight into bleeding-edge technology. John also talked about the DARPA Cyber Grand Challenge, where one AI system found a hitherto unknown weakness in a competitor's system and abused it with an exploit, and a third system observing the attack developed a patch for the weakness in order to protect itself. And all of this happened within minutes. Breathtaking.
The UK National Cyber Security Centre was represented by technical director Ian Levy. He described his new central organisation, which aims to be the go-to point for anything cyber security in the UK. This is obviously a centralized setup and seems to mirror the new National Cyber Strategy of the UK, a document he recommended as reading with a lot of interesting policies. I know the Swiss National Cyber Strategy a bit, and where we focus on resilience through federal bodies protecting themselves, the UK strategy seems to go in the opposite direction. On the other hand, it is clearly visible that the NCSC is very active in the education field, and its posters and guides, public theme days and open online question rounds make a very good impression. When Ian went on to name various other new initiatives, he also touched on the idea of a DNS blacklist that would be opt-in for British ISPs. This did not go down well with the audience, who clearly saw it as an additional tool to help the UK censor the internet. Ian Levy refused to see any problem with the way the existing national porn filter was implemented by the ISPs, and the discussion grew very heated. It was the only unfriendly discussion in the whole three days, and it finally had to be cut short.
Lisa Wiswell from the Defense Digital Service fought giants and she won. She introduced the first federal bug bounty programs in the US in the form of the Hack the Pentagon and Hack the Army campaigns. Her talk reflected on that significant achievement and shed some light on the experience. It all started with the need to overcome the huge internal resistance against the idea of allowing hackers to attack the Department of Defense. The Department of Defense had instructed several red teams to go over the services before they were subjected to the bug bounty program. She made it clear they had protected the sites in question to the best of their ability. But when they opened the bounty, they received a report of the first newly discovered weakness within five minutes. In the end, they had between 100 and 150 unique vulnerabilities across both programs, 90% of them missed by their own red teams. They estimate that they saved over a million dollars in penetration testing, with much better coverage. Working with the bounty hunters went great: she confirmed that the rules were respected and nobody destroyed anything. Instead she saw a keen interest in helping to improve the security of the Pentagon and the Army. Lisa also laid out several plans for the future. When I read that they are aiming to make Open Source the default, I was so excited that I forgot to write down the rest of Lisa's plans.
Before this fine conference came to an end, it was Tim Booher's turn to explain the lessons learnt at the DARPA Cyber Grand Challenge (CGC). The B-52s will fly for another century and they need to be secured. Securing them, the other weapon systems of the US military, and the billions of lines of code involved by manual patching is out of the question. That's why the CGC set out to push the automated discovery and patching of weaknesses to its limits and, as explained above, a fair deal beyond the hitherto assumed limits. Tim's goal is now to move these prototypes into production to really help secure these weapon systems. His talk showed this goal to be quite feasible.
Christian Folini