Developing Benchmarks for Web Application Security

A review of “Defining a Set of Common Benchmarks for Web Application Security” (Livshits, CS Department, Stanford University)

Claudia Zhu
2 min read · Mar 6, 2019

The Background

An increasing number of web application vulnerabilities such as SQL injection (where attackers use application code to gain access to database content), cross-site scripting (XSS) (where attackers target client-side JavaScript), and denial of service (DoS) (where attackers overwhelm a website so that it denies service to legitimate users) account for 60% of digital security exploits. In other words, attackers generally gain access to or attack services through their web applications, which implies a growing need both to protect against these attacks and, before that, to detect the underlying vulnerabilities.
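To make the SQL injection case concrete, here is a minimal sketch in Java (the class and method names are illustrative, not taken from the paper) of how concatenating user input directly into a query string lets an attacker rewrite the query's logic:

```java
// Hypothetical sketch of how a SQL injection arises when untrusted input
// is concatenated directly into SQL text.
public class SqlInjectionDemo {
    // Unsafe: attacker-controlled input becomes part of the SQL statement.
    static String unsafeQuery(String userName) {
        return "SELECT * FROM users WHERE name = '" + userName + "'";
    }

    public static void main(String[] args) {
        // A benign input produces the expected query.
        System.out.println(unsafeQuery("alice"));
        // A malicious input changes the query's meaning: the WHERE clause
        // now matches every row, leaking the whole table.
        System.out.println(unsafeQuery("' OR '1'='1"));
        // The standard defense is a parameterized query, e.g. with JDBC:
        //   PreparedStatement ps = conn.prepareStatement(
        //       "SELECT * FROM users WHERE name = ?");
        //   ps.setString(1, userName);
    }
}
```

A static analysis tool of the kind the paper benchmarks would flag `unsafeQuery` because untrusted input reaches a database query without sanitization.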

While a few tools for testing the security and vulnerabilities of web applications are commercially available, there is no standard way to verify whether these tools' claims are true. This is where Livshits's paper comes in. In it, Livshits proposes a set of “universally accepted benchmarks” that can be used as a “validation test bed for security tools.”

The Terms

  • Real-life programs: Real-life programs are just what they sound like: programs that exist in real life and, more importantly, have some sort of client base. In other words, real people use these programs and web applications. While this raises the stakes of providing good security for them, it also gives researchers and security companies extremely practical data and models.
  • Artificial benchmarks: Artificial benchmarks are benchmarks that are created arbitrarily and then designated as a standard for evaluating the success of the aforementioned tools. The opposite would be organically created benchmarks, i.e. those derived from identifying real exploits and vulnerabilities rather than constructed first and justified afterward.

The Benchmarks

Livshits’s group has developed Stanford SecuriBench, a suite of benchmarks built from eight real-life, Web-based Java J2EE applications (J2EE is a platform for developing, building, and deploying Web-based enterprise applications). You can find high-level descriptions of them here. Note that all of these benchmarks are open source, as Livshits hopes to foster “collaboration between researchers” :).
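To show the kind of flaw a tool would hunt for in these applications, here is a minimal sketch (my own illustrative code, not from SecuriBench) of a reflected XSS bug: a request parameter, the taint “source,” flows unescaped into an HTML response, the “sink,” along with the usual fix:

```java
// Illustrative sketch of a reflected XSS flaw and its repair.
public class XssDemo {
    // Unsafe: the name is echoed into the page verbatim, so a value like
    // "<script>alert(1)</script>" executes in the victim's browser.
    static String renderGreeting(String name) {
        return "<html><body>Hello, " + name + "</body></html>";
    }

    // Minimal fix: escape HTML metacharacters before echoing user input.
    // (Order matters: '&' must be replaced first.)
    static String escapeHtml(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        String attack = "<script>alert(1)</script>";
        System.out.println(renderGreeting(attack));             // vulnerable output
        System.out.println(renderGreeting(escapeHtml(attack))); // sanitized output
    }
}
```

Because both SQL injection and XSS follow this same source-to-sink pattern, taint-tracking tools can be evaluated uniformly across the suite.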

The Challenge

The greatest challenge in creating benchmarks is to derive them from real-life programs rather than from artificial ones. Livshits argues strongly that real-life programs are more desirable than artificial benchmarks, and for good reason.

Suppose we created a standard, as Livshits aims to do, from artificial benchmarks. There would be no way to actually validate the success or validity of those benchmarks. So we must instead generate benchmarks from real-life programs containing unintentional bugs and exploits. This is the true challenge of creating such benchmarks, and it remains an open problem in the field.
