On Friday, June 10th, the first edition of AD&D took place, organized by my colleague Merve Sahin. It was not about ‘Advanced Dungeons & Dragons’, but the content was just as cool.
(Cyber-)defense is about collecting and analyzing information to detect and remediate attacks. It can be seen as ‘passive’: the information reflects attacks which have already taken place in your system, so all you can do is react to them.
Active defense is about outsmarting your adversary to make attacks more difficult to carry out. Deception is one way to implement active defense: by spreading decoys, you can collect information even before an attack succeeds. And by tricking adversaries into attacking the wrong target, there is nothing left to remediate.
The 2022 program was really insightful. The attendance was small – not surprising, as ‘deception’ remains a relatively new and multi-disciplinary topic – but really dedicated. I would not be surprised if next year’s edition attracts more people through word of mouth alone.
It started with a talk from Kimberly Ferguson-Walter on the psychological impact of letting attackers know deception is being used. The counter-intuitive discovery is that not only does deception still work when attackers are aware of its presence, but announcing that presence severely impairs attackers’ ability to progress further. My favorite tidbit was the characterization of the red team population from a psychological standpoint. Red teamers are not malicious hackers, but they are probably the closest population we can safely observe. It is interesting to note, for example, that red teamers are less spontaneous but also less indecisive than the general population; such traits may prove useful in building effective deception strategies.
The next talk was given by Shreyas Srinivasa. His team built a network of honeypots in the wake of the Log4j attack wave and compared their results with those observed by the Honeynet Project. The conclusion was that all honeypots were attacked in a relatively homogeneous way. This suggests that, at least at that time, most Log4j attacks were performed by automated scripts rather than humans, but also that machines looking like Active Directory servers are still aggressively sought after by these same scripts. I look forward to the next step Srinivasa wants to research: a classifier able to tell automated attacks apart from human-driven ones by computing a ‘novelty’ score for each detected attack.
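To make the intuition concrete, here is a back-of-the-envelope sketch (my own, not Srinivasa’s actual method) of what a ‘novelty’ score could look like: payloads replayed verbatim many times are likely automated, while a never-seen payload is more likely a human probing by hand.

```python
# Toy 'novelty' score sketch (an assumption of mine, not the talk's method):
# score 1.0 for a payload never seen before, decaying toward 0 the more
# often that exact payload is replayed across the honeypot network.

from collections import Counter

def novelty_score(payload: str, seen: Counter) -> float:
    """Return a score in [0, 1] for `payload`, then record the sighting."""
    count = seen[payload]       # how many times we saw this exact payload
    seen[payload] += 1          # record this sighting for future scoring
    return 1.0 / (1 + count)

seen = Counter()
print(novelty_score("GET /${jndi:ldap://x}/", seen))  # 1.0 (first sighting)
print(novelty_score("GET /${jndi:ldap://x}/", seen))  # 0.5 (replayed)
```

A real classifier would of course need fuzzier matching than exact string equality, but even this crude counter already separates mass scans from one-off probes.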
Then, Savvas Zannettou told us about state-sponsored trolls. His team accessed Twitter and Reddit data about user accounts tagged by these platforms as trolls. With this data, they could identify which agenda such groups were pushing: it appeared clearly that Russian trolls were pushing pro-Trump content, while Iranian trolls were pushing anti-Trump content. Maybe in the future we will have the means to expose trolls’ agendas in real time, helping to expose their manipulation? The second take-away is that Twitter trolls tend to regularly change their persona and goals, at which point they simply delete their old content. The interesting point is that when this happens, they retain most of their followers. It might be nice to get a notification when such a pattern occurs, to give users a chance to stop following the account?
After a break where we talked about what to do and where to eat in Genoa, Italy – the city where the workshop was held – we went back for the remaining four sessions.
The first talk was given by Tolga Unlu. He conducted a survey about how to make applications attack-aware, using existing development frameworks as a starting point. The results were… mixed. Unlu identified that each framework has its strengths and weaknesses, and that using any of them to start building an attack-aware application will require extra work. This brought back a question I have been pondering for a while: how much deception should be part of the application itself versus added ‘around’ it? Our work currently focuses on the latter, as it is easier to deploy and maintain, but are there deception strategies that could only be implemented in the application code itself? Let us know your opinion in the comments!
The next talk, from Rodolfo Valentim, was about a clever idea called ‘sound squatting’. It starts from the assumption that voice assistants will increasingly be used to browse the Internet. An adversary might then register a website such as whatsup.com to attract traffic from users wishing to visit whatsapp.com. His team built a system which transforms words into phonemes, then reconstructs words from phonemes. The trick is that they add some noise to both transformations, enabling the creation of domain names that sound like the original one, or close enough to capture requests the voice assistant may mis-recognize. Valentim found that some sound-squatting websites already exist, not as blatant phishing websites but clearly to steal traffic. More worrisome, many ‘dormant’ domains were found as well: these would be the perfect staging ground for a phishing campaign – create a deceptive copy, launch the campaign, profit, then remove the copy to remain hidden.
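The pipeline above can be sketched in a few lines. This is only a toy approximation under my own assumptions: a hand-written table of acoustically confusable grapheme clusters stands in for the noisy grapheme-to-phoneme-to-grapheme round trip, where the real system presumably uses a proper phonetic model.

```python
# Toy sound-squatting candidate generator (a sketch, not Valentim's system).
# Assumption: this small hand-written table of grapheme clusters that map
# to (nearly) identical English phonemes approximates the 'noisy' round
# trip through phoneme space described in the talk.

HOMOPHONE_SUBS = {
    "ph": ["f"],
    "f": ["ph"],
    "ck": ["k", "c"],
    "c": ["k"],
    "k": ["c"],
    "s": ["z"],
    "z": ["s"],
    "oo": ["u"],
    "u": ["oo"],
}

def sound_squat_candidates(domain: str) -> set:
    """Generate domain names that sound like `domain` by swapping
    acoustically confusable grapheme clusters, one swap at a time."""
    name, dot, tld = domain.partition(".")
    candidates = set()
    for pattern, replacements in HOMOPHONE_SUBS.items():
        start = name.find(pattern)
        while start != -1:            # apply the swap at every position
            for repl in replacements:
                squat = name[:start] + repl + name[start + len(pattern):]
                if squat != name:
                    candidates.add(squat + dot + tld)
            start = name.find(pattern, start + 1)
    return candidates

print(sorted(sound_squat_candidates("physio.com")))
# ['fysio.com', 'phyzio.com']
```

A defender could run such a generator against their own domains and pre-register (or at least monitor) the resulting candidates before an adversary does.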
Palvi Aggarwal told us next about how to implement a masking strategy, that is, how best to disguise network attributes to hide the real state of a network from adversaries. I took two elements away from this presentation. First, about how humans attack: the assumption that attackers try to maximize gain seems reasonable, but what is observed in practice is that humans tend to avoid risk (they preferentially attack systems that may yield a lower reward but offer a higher guarantee of success) rather than behave rationally (that is, focus on the systems where the gain/risk ratio is highest). Second, the authors built a model able to learn and behave like a human attacker. This opens the way to generalizing experimentation!
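The gap between the two behaviors is easy to illustrate. In this minimal sketch (the target names and numbers are mine, not from the talk), a ‘rational’ attacker maximizes expected gain, while the risk-averse behavior observed in the study prefers the surest success, even at a lower reward.

```python
# Toy illustration of rational vs risk-averse target selection.
# Assumption: targets and (probability, reward) values are invented
# for the example, not taken from Aggarwal's study.

targets = {
    # name: (probability of success, reward if successful)
    "hardened-db":  (0.2, 120),  # high reward, risky
    "legacy-print": (0.9, 15),   # low reward, near-certain success
    "file-share":   (0.5, 40),
}

def rational_choice(targets):
    """Pick the target maximizing expected gain: p * reward."""
    return max(targets, key=lambda t: targets[t][0] * targets[t][1])

def risk_averse_choice(targets):
    """Pick the target with the highest probability of success."""
    return max(targets, key=lambda t: targets[t][0])

print(rational_choice(targets))     # 'hardened-db'  (expected gain 24)
print(risk_averse_choice(targets))  # 'legacy-print' (90% success)
```

For a defender designing a masking strategy, this matters: if real attackers are risk-averse, decoys advertising an easy win may be more attractive than decoys advertising a valuable one.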
The last talk, from Robert Gutzwiller, was (to me) fascinating: while we have studied for years how to make applications as usable as possible, the idea here is to turn that on its head and instead make hacking as unusable a process as possible. This concept of ‘oppositional human factors’ builds on the dual-process theory of the human mind: the fast brain, which we use to make quick decisions, and the slow brain, which we engage when we need to solve problems. While the fast brain is, well, fast, it relies on heuristics which may or may not be good. And the slow brain, while often accurate, is slow, and may fall victim to reasoning errors known as fallacies. One idea is to force shifts from fast to slow brain to make attackers slow down, and vice versa to encourage attackers to take reckless (wrong) decisions. One example was an experiment where honeypots were put into a network where machines were named after employees. Attackers assumed that machines whose names were present in the Active Directory were the real ones. Too bad for them, the defenders had added a few fake employees to the directory…
Overall, AD&D was a great event. It was humbling to meet and talk to the speakers and participants, and we agreed to meet again and grow the community. Deception as a means of defense remains a new topic which draws as much from information security as from psychology and social sciences. The journey is not over, and there will be much to explore and enjoy on the way.