This is our second post on hunting data races in real-world applications using RV-Predict. As promised in our previous post, we are now going after the big-name projects! This time our test subject is Apache Tomcat, arguably the most popular Java application server around today.
Preparation
In this experiment, we will be targeting Tomcat 8.x, which is currently the latest stable version. First of all, let's clone the GitHub repository at https://github.com/apache/tomcat80. Tomcat 8 uses the Ant build system; you need Ant version 1.9.5 or later to build the source code. More detailed build instructions can be found at http://tomcat.apache.org/tomcat-8.0-doc/BUILDING.txt. The documentation is a bit long, but the workflow is pretty standard, so there shouldn't be any surprises. For the sake of our experiment, section 7 is the most relevant part: it describes the JUnit tests that come with the distribution and how to run them. In order to integrate RV-Predict into the build cycle, we need to modify the configuration file build.xml so that the RV-Predict agent is started along with the tests. But don't worry, the change is as simple as a one-liner: just add

<jvmarg value="-javaagent:${rvPath}/rv-predict.jar=--base-log-dir log" />

at around line 1433, inside the "junit" tag. Here, ${rvPath} is the installation path of RV-Predict on your machine, and the option --base-log-dir log tells RV-Predict to store its logs and results under the directory log. Of course, more options are available to tune the behavior of RV-Predict, but that's basically all we need to get things up and running. Please consult RV-Predict's documentation for further information.
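For concreteness, here is a rough sketch of how the relevant portion of build.xml might look after the change. Only the jvmarg line is the actual modification; the junit attributes and the surrounding elements below are abridged placeholders and will differ in the real file. You can define rvPath as an Ant property (as sketched here) or simply substitute the literal installation path.

<!-- sketch only: everything except the jvmarg line is abridged/illustrative -->
<property name="rvPath" value="/opt/rv-predict" />

<junit printsummary="yes" fork="yes" dir="${basedir}">
    <!-- the one-line change: attach the RV-Predict agent to each forked test JVM -->
    <jvmarg value="-javaagent:${rvPath}/rv-predict.jar=--base-log-dir log" />
    <!-- the existing classpath, formatter, and batchtest elements stay untouched -->
    ...
</junit>

Note that a jvmarg only takes effect when the junit task forks a new JVM, which is how the Tomcat test suite runs its tests; each test therefore runs under the agent and produces its own RV-Predict log directory.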
Running JUnit Tests
Now we can simply run ant test to start the tests. It's going to take a while to run the entire test suite, so you probably want to work on other things in the meantime. A normal run of the entire test suite, without the modification above (i.e., without RV-Predict), takes about 50 minutes to finish on my machine. With RV-Predict in effect, it increases to around 260 minutes, so the overhead in this case is roughly 5x. Not that bad, huh? :) Once the tests are finished, the results are stored under the directory log, as we instructed. For each JUnit test, there is a directory holding that test's result. To facilitate the process of gathering and examining the race reports, we have written a Python script, aggregate-reports.py, that puts together the race reports of all the JUnit tests into one file and filters out the duplicates. The script is included at the end of this post. On my Linux machine, the typical usage of the script is to run the following command under the log directory:

find -iname "result.txt" -exec cat "{}" \; | python aggregate-reports.py >races.txt 2>info.txt

Now all the race reports are written to the races.txt file, ready to be examined.
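To give you an idea of what to expect, an entry in races.txt looks roughly like the following. This is the report shape the script below parses and re-emits; the class names, file names, and line numbers here are made-up placeholders, not actual Tomcat findings.

Data race on field org.example.Foo.counter: {{{
    Concurrent write in thread T21 (locks held: {})
 ----> at org.example.Foo.increment(Foo.java:42)
        at org.example.Worker.run(Worker.java:17)
        T21 is created by T1
        at org.example.Foo.start(Foo.java:30)

    Concurrent read in thread T1 (locks held: {})
 ----> at org.example.Foo.get(Foo.java:55)
        T1 is the main thread
}}}

Each report names the shared field (or array element), shows the two conflicting accesses with their stack traces and the locks held at each access, and tells you how each involved thread was created.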
Findings
At the time the experiment was performed, Tomcat 8.0.26 was the latest version. RV-Predict successfully found nearly 40 distinct data races in a few runs, and they were reported to the developers using Bugzilla. The Tomcat developers reacted promptly and fixed them in version 8.0.27. The changelog of version 8.0.27 can be found here. Notice that many of the races detected by RV-Predict are marked as "rare" by the developers in the changelog. Usually these are races that happen in places where the developers didn't expect memory to be shared between threads, which is partly why data races are so hard to find manually in any complex project. Fortunately, we now have RV-Predict to help with this task!
Summary
As a closing remark, it is worth mentioning that more and more large projects now use static analysis tools to scan their code bases continuously. For example, Tomcat periodically scans its source code using the Coverity Scan service, which is based on the successful Coverity static analysis tool capable of detecting many programming errors, including data races. While it would be interesting to compare the results of Coverity Scan with RV-Predict and discuss static analysis vs. dynamic analysis in detecting data races, the Coverity Scan user agreement unfortunately forbids us from disclosing or discussing any of its results. However, you may sign up for an account and check it out yourself at https://scan.coverity.com/projects/apache-tomcat if you want. Anyway, that's all for today's data race hunting journal. See you next time!
#!/usr/bin/env python2
#
# RV-Predict script for aggregating race reports
#
# Python version: 2.7
#
# Prerequisite packages: enum34
#
# Sample usage:
#   find -iname "result.txt" -exec cat "{}" \; | python aggregate-reports.py >races.txt 2>info.txt

from __future__ import print_function

import re
import sys
from enum import Enum
from collections import deque, OrderedDict


def info(*objs):
    print("INFO: ", objs, file=sys.stderr)


class State(Enum):
    initial = 1
    leg_initial = 2
    leg_stack_initial = 3
    leg_stack = 4
    leg_thread = 5
    leg_thread_location = 6
    leg_final = 7
    final = 8


class BasicThreadInfo:
    def __init__(self, name):
        self.name = name


class MainThreadInfo(BasicThreadInfo):
    def __str__(self):
        return "        " + self.name + " is the main thread\n"


class CreatedThreadInfo(BasicThreadInfo):
    def __init__(self, parent, location, name):
        BasicThreadInfo.__init__(self, name)
        self.parent = parent
        self.location = location

    def __str__(self):
        return "        {name} is created by {parent}\n" \
               "        {location}\n".format(name = self.name, parent = self.parent, location = self.location)


class Race:
    def __str__(self):
        return "Data race on {locSig}: {{{{{{\n" \
               "{left}\n" \
               "{right}\n" \
               "}}}}}}".format(locSig = self.locSig, left = self.left, right = self.right)


class LocSig:
    def __init__(self, locSig):
        self.locSig = locSig


class Field(LocSig):
    def __str__(self):
        return "field {locSig}".format(locSig = self.locSig)


class Array(LocSig):
    def __str__(self):
        return "array element {locSig}".format(locSig = self.locSig)


class Access:
    def __str__(self):
        return "    Concurrent {type} in thread {thread} (locks held: {{{locks}}})\n" \
               " ----> {locations}\n" \
               "{thread_info}".format(type = self.type,
                                      thread = self.thread,
                                      locks = self.locks,
                                      locations = "\n        ".join([str(x) for x in self.stack]),
                                      thread_info = self.thread_info)


class Location:
    def __init__(self, class_name, file_name, file_line):
        self.class_name = class_name
        self.file_name = file_name
        self.file_line = file_line

    def __str__(self):
        return "at {cname}({fname}:{fline})".format(cname = self.class_name,
                                                    fname = self.file_name,
                                                    fline = self.file_line)


class LockLocation(Location):
    def __init__(self, lock_name, class_name, file_name, file_line):
        Location.__init__(self, class_name, file_name, file_line)
        self.lock_name = lock_name

    def __str__(self):
        return "- locked " + self.lock_name + " " + Location.__str__(self)


state = State.initial

field_race_start_re = re.compile("^Data race on field ([^:]*): [{][{][{]")
array_race_start_re = re.compile("^Data race on array element ([^:]*): [{][{][{]")
race_op_re = re.compile("\s*Concurrent ([^ ]*) in thread ([^ ]*) [(]locks held: [{]([^}]*)[}][)]")
location = "at ([^(]*)[(]([^:]*):([^)]*)[)]"
first_stack_re = re.compile(" ----> " + location)
next_stack_re = re.compile("\s*" + location)
lock_stack_re = re.compile("\s*- locked ([^ ]*) " + location)
thread_create_re = re.compile("\s*([^ ]*) is created by ([^\s]*)")
thread_main_re = re.compile("\s*([^ ]*) is the main thread")
race_end_re = re.compile("[}][}][}]")

races = {}


def finalize_leg():
    global state
    if race.left is None:
        race.left = access
        state = State.leg_final
    else:
        race.right = access
        races[race_key] = race
        state = State.final


for line in sys.stdin:
    info(str(state) + "\t" + line)
    if state == State.initial:
        match = field_race_start_re.match(line)
        if match:
            race = Race()
            race.locSig = Field(match.group(1))
        else:
            match = array_race_start_re.match(line)
            if match:
                race = Race()
                race.locSig = Array(match.group(1))
        if match:
            race_key = match.group(1)
            race.left = None
            state = State.leg_initial
        else:
            info("Skipping line '{}'".format(line))
        continue
    if state == State.leg_initial:
        match = race_op_re.match(line)
        assert match, line
        access = Access()
        access.type = match.group(1)
        access.thread = match.group(2)
        access.locks = match.group(3)
        state = State.leg_stack_initial
        continue
    if state == State.leg_stack_initial:
        match = first_stack_re.match(line)
        assert match, line
        access.stack = deque()
        access.stack.append(Location(*match.groups()))
        race_key += ":" + match.group(3)
        state = State.leg_stack
        continue
    if state == State.leg_stack:
        match = next_stack_re.match(line)
        if match:
            access.stack.append(Location(*match.groups()))
            continue
        match = lock_stack_re.match(line)
        if match:
            access.stack.append(LockLocation(*match.groups()))
            continue
        state = State.leg_thread
    if state == State.leg_thread:
        match = thread_create_re.match(line)
        if match:
            assert access.thread == match.group(1)
            parent_thread = match.group(2)
            state = State.leg_thread_location
        else:
            match = thread_main_re.match(line)
            assert match, line
            assert access.thread == match.group(1)
            access.thread_info = MainThreadInfo(access.thread)
            finalize_leg()
        continue
    if state == State.leg_thread_location:
        match = next_stack_re.match(line)
        assert match, line
        creation_location = Location(*match.groups())
        access.thread_info = CreatedThreadInfo(parent_thread, creation_location, access.thread)
        finalize_leg()
        continue
    if state == State.leg_final:
        assert line.isspace(), line
        state = State.leg_initial
        continue
    if state == State.final:
        assert race_end_re.match(line), line
        state = State.initial

oraces = OrderedDict(sorted(races.items()))
for race in oraces.itervalues():
    print(race, "\n")