Improving the automated analysis of protocols

Graphic illustrating the work of CISPA researcher Alexander Dax
(c) Lea Mosbach / CISPA

Alexander Dax received two Distinguished Paper Awards at this year’s USENIX Security conference for research papers he co-authored. One of the two awards, which are highly coveted in the research community, honors his paper “Hash gone bad: Automated discovery of protocol attacks that exploit hash function weaknesses”. In it, the Saarland native shows that automated security analyses of Internet protocols are often inaccurate because they rest on false assumptions, in this case perfect hash functions. He explains what hash functions are, what they are used for, and how he intends to use his research to improve the automated analysis of protocols.

To ensure that data can be sent back and forth securely on the Internet, various protocols are used. They regulate who can send what to whom, when, and in what form. One of the best-known Internet protocols in continuous use is TLS, short for Transport Layer Security. TLS primarily regulates how communication between web applications is encrypted. Browsers such as Google Chrome and Mozilla Firefox, for example, communicate with a web server every time a website is visited. To ensure that attackers cannot infiltrate this communication, a secure connection must first be established before the actual exchange begins. This guarantees that the communication partners are who they claim to be and that no third party can intervene. Once this has been established, cryptographic keys can be exchanged securely, enabling confidential communication. But how can this be done securely?
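From a programmer’s perspective, establishing such a secure connection takes only a few lines. The following minimal sketch (an illustration for this article, not part of Dax’s work) opens a TLS connection using Python’s standard library; the host name is a placeholder. Certificate checking and key negotiation happen inside the handshake.

```python
# Minimal sketch: establishing a TLS connection with Python's standard
# library. The host name is a placeholder.
import socket
import ssl

HOST = "example.com"  # placeholder host

# A default context verifies the server's certificate chain and host
# name, i.e. that the communication partner is who it claims to be.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    # The TLS handshake runs here, before any application data flows.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print(tls_sock.version())  # e.g. 'TLSv1.3'
        print(tls_sock.cipher())   # negotiated cipher suite
```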

Hash functions as a security guarantee

“Almost every security protocol uses hash functions,” explains Dax. A hash function produces a check value, a kind of digital fingerprint, that can be used to verify whether data has been manipulated on its way from A to B. “These functions take any value, of any size, and turn it into a smaller value with a fixed size,” Dax explains. That alone doesn’t do much; the functions must also have certain properties. “These include the fact that specific data contents, such as passwords, must always result in the same value when calculated with the same hash function. Conversely, however, it must not be possible to infer the data content from the hash value.” Another important property of hash functions is that different source data must not be converted to the same hash value. “We talk about collisions when that happens,” Dax says.

And this is where theory and practice get in each other’s way. “In reality, there are no perfect hash functions. It’s always just a matter of time before collisions occur. In addition, the state of technology has changed. For old hash functions, it is now possible, given enough computing power, to try out different values until the original input for a given hash value is found. This is called a brute-force attack,” says Dax.
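These properties are easy to observe in code. The sketch below (again an illustration, not taken from the paper) uses SHA-256 from Python’s standard library to show determinism and manipulation detection, and then runs a toy brute-force search against a deliberately truncated hash value; real hash values are far longer, which is exactly what makes such attacks costly.

```python
# Illustration only: the hash-function properties described above,
# shown with SHA-256 from Python's standard library.
import hashlib

def h(data: bytes) -> str:
    """A fixed-size 'digital fingerprint' of arbitrary-size input."""
    return hashlib.sha256(data).hexdigest()

# Determinism: the same input always yields the same hash value.
assert h(b"my password") == h(b"my password")

# Any change to the input yields a completely different value, which
# is how a hash detects manipulation of data in transit.
print(h(b"my password"))
print(h(b"my password!"))

# Toy brute-force attack: the target is truncated to 2 bytes (4 hex
# characters), so trying candidates quickly finds a matching input.
target = h(b"secret42")[:4]
for i in range(1_000_000):
    guess = f"secret{i}".encode()
    if h(guess)[:4] == target:
        print("found input with matching truncated hash:", guess)
        break
```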

Networks must be future-proof

Such attacks are very costly against modern hash functions, according to Dax, so they have not been a common problem so far. “However, technology is evolving very quickly, and we need to make sure our networks are future-proof as well.” And that’s where Dax’s research around tools for automated protocol security analysis comes in. “It’s not enough to say that a protocol is secure. We also need to be able to prove it formally. That means we need precise mathematical definitions of how the protocol behaves, which then allow us to calculate how secure it is.” These verification procedures are enormously laborious, which is why they have been automated. “There are tools for this, such as the Tamarin prover or ProVerif, that can do the work for us. The problem is that so far, these tools usually work with model representations of hash functions that are perfect in this idealized form. But in practice, we know they often just aren’t.”
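What “perfect in this idealized form” means can be made concrete: in the symbolic models these tools build on, a hash is traditionally a free function symbol, so two hash terms are equal exactly when their inputs are. A rough Python analogue of that idealization (the tools themselves use their own specification languages) looks like this:

```python
# Rough analogue (not Tamarin's or ProVerif's actual input language)
# of how symbolic tools traditionally model a hash: as a free function
# symbol. A term h(x) is never "computed"; two hash terms are equal
# exactly when their arguments are equal, so collisions and preimages
# are impossible by construction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Hash:
    """Symbolic term h(x): it just carries its argument."""
    arg: str

def h(x: str) -> Hash:
    return Hash(x)

assert h("a") == h("a")   # determinism
assert h("a") != h("b")   # a perfect hash can never collide
```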

Too perfect isn’t any good either

Acknowledging these imperfections is the first step to improving the tools. And it has another benefit: “We have modeled different variants of weak hash functions and built them into the Tamarin prover and ProVerif. In doing so, we also want to find out how much the different weaknesses in hash functions affect the overall security of a protocol.” Formal security proofs of protocols are not merely an academic pastime; they have long since arrived at the world’s major tech companies as well. “Many large companies, such as Google, employ cryptographers to check how secure the protocols they use are. This is very costly to do manually, and even automated security analysis currently requires a lot of effort to check. We want to make the tools better so that in the future this requires significantly less manpower and effort, and automated checking can guarantee real protocol security.”
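The modeling idea can be sketched as follows; the names and the toy “analysis” here are illustrative assumptions, not the paper’s actual Tamarin or ProVerif encodings. Each weakened hash model grants the adversary one additional capability, and the analysis is re-run to see which protocol guarantees still hold:

```python
# Hypothetical sketch of the modeling idea, with illustrative names
# rather than the paper's actual Tamarin/ProVerif encodings: each
# weakened hash model grants the adversary one extra capability, and
# a stand-in "analysis" checks which guarantees survive.

class HashModel:
    collisions = False        # adversary can craft x != y with h(x) == h(y)
    length_extension = False  # adversary can extend h(m) to h(m || suffix)

class PerfectHash(HashModel):
    """The classical idealization: no weaknesses at all."""

class CollisionWeakHash(HashModel):
    collisions = True

class LengthExtensionHash(HashModel):
    length_extension = True

def hash_then_sign_is_safe(model: HashModel) -> bool:
    # Stand-in for the automated analysis: a hash-then-sign scheme
    # breaks as soon as the adversary can produce collisions.
    return not model.collisions

for model in (PerfectHash(), CollisionWeakHash(), LengthExtensionHash()):
    verdict = "safe" if hash_then_sign_is_safe(model) else "attack found"
    print(type(model).__name__, "->", verdict)
```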

At the source

Dax works in the group of CISPA Faculty Prof. Cas Cremers and thus sits right at the source of research questions related to automated protocol analysis. Cremers and colleagues developed the aforementioned Tamarin prover several years ago; it is used by companies such as Mozilla and Amazon. “My research is part of a larger project to improve automated security analysis. I’ve been working on it for years. That my research on hash functions has now resulted in an award-winning paper is great,” Dax says. He is by now something of a CISPA veteran. “I’ve been involved since 2016: first as a student assistant (Hiwi) with Michael Backes, then in Robert Künnemann’s group, and now as a PhD student with Cas. Somehow, I’ve grown along with CISPA.”

The paper was written in collaboration with CISPA Faculty Cas Cremers, Vincent Cheval and Charlie Jacomme from Inria Paris, Lucca Hirschi from LORIA and Inria, and Steve Kremer from Université de Lorraine (LORIA and Inria Nancy – Grand Est).

Original publication:

https://publications.cispa.saarland/3862/

https://www.cispa.de/

Media Contact

Annabelle Theobald, Corporate Communications
CISPA Helmholtz Center for Information Security
