Benchmarks for Event-driven SJ Sessions

Section 5 of the paper presented preliminary performance results comparing a simple multithreaded SJ implementation of an SMTP server against an event-driven version implemented using the new SJSelector API. This online appendix presents the extended micro and macro benchmarks and the full details omitted from the above paper, with the results obtained for the latest version of SJ. The full micro and macro benchmark source code and scripts, along with the raw results from each benchmark, can be found below.
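
For readers unfamiliar with the event-driven style, the sketch below shows the general selector-based server loop that such an implementation follows. It is written against the standard java.nio Selector API purely for illustration; it is not the SJSelector API or the actual benchmark code, and the port, buffer size and echo behaviour are assumptions.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// Illustrative event-driven server loop using plain java.nio (not the SJSelector API):
// one thread multiplexes all client connections instead of one thread per client.
public class EventLoopSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8888)); // hypothetical port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);         // e.g. a 1 KB message
        while (true) {
            selector.select();                               // block until a channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                    // new client: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {               // client sent a message: echo it back
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) { key.cancel(); client.close(); continue; }
                    buf.flip();
                    client.write(buf);                       // a production server would also handle partial writes
                }
            }
        }
    }
}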


Benchmark Environment

To simulate large numbers of concurrently active clients, all of the benchmarks were executed in the following cluster environment: each node is a Sun Fire x4100 with two 64-bit Opteron 275 processors at 2 GHz and 8 GB of RAM, running 64-bit Mandrakelinux 10.2 (kernel 2.6.11) and connected via gigabit Ethernet. Latency between nodes was measured to be 0.5 ms on average (ping, 64 bytes). The benchmark applications were compiled and executed using the Sun Java SE compiler and runtime, version 1.6.0. Each experiment features a single Server and, where needed, a single Timer Client, each running by itself on a separate node (Servers are bound to a single core); the Load Clients are distributed evenly across the remaining nodes of the cluster. For each parameter configuration in each experiment, a fixed number of warm-up runs was discarded before results were recorded; the figures presented below are mean values.
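
As a small illustration of the aggregation step just described (discarding warm-up runs and reporting means), the following sketch uses placeholder values only; the actual processing of the benchmark output is done by the awk parsers included in the downloads below.

import java.util.Arrays;

// Minimal sketch of the aggregation assumed above: discard a fixed number of
// warm-up runs, then report the mean of the remaining measurements.
// The values and warm-up count below are placeholders, not benchmark data.
public class WarmupMeanSketch {
    static double meanAfterWarmup(double[] runs, int warmup) {
        double[] kept = Arrays.copyOfRange(runs, warmup, runs.length);
        double sum = 0.0;
        for (double v : kept) sum += v;
        return sum / kept.length;
    }

    public static void main(String[] args) {
        double[] runs = {5.2, 3.1, 2.0, 2.1, 1.9, 2.0}; // placeholder timings (ms)
        System.out.println(meanAfterWarmup(runs, 2));   // discard the first 2 warm-up runs
    }
}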


We give the complete results for the following micro and macro benchmarks: Micro Benchmark 1 (response time) and Macro Benchmark 1 (throughput).

The following abbreviations are used for each benchmark application version: JT (multithreaded Java), JE (event-driven Java), ST (multithreaded SJ) and SE (event-driven SJ).


Micro Benchmark 1. Response time performance of the Java and SJ multithreaded and event-driven microbenchmark Servers.

In this benchmark, all Load Clients engage in non-terminating, i.e. repeatedly looping, microbenchmark sessions with the Server, sending fixed messages of size (a) 100 bytes and (b) 1 KB. The time taken to complete a session with the Server is measured by a Timer Client whilst the Server is under load from 10, 100, 300, 500, 700 and 900 Load Clients. 100 measurements, i.e. 100 Timer Client sessions, were taken for each Server instance, and the entire experiment was repeated 10 times.
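
The sketch below shows the general shape of such a timer measurement, using a plain TCP socket and a fixed-size payload in place of an actual SJ Timer Client session; the host name, port, payload size and echo-style exchange are illustrative assumptions rather than the benchmark configuration.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Illustrative Timer-Client-style loop: repeatedly open a session with the Server,
// exchange a fixed-size message, and record the time taken to complete the session.
// Host, port, payload size and exchange pattern are assumptions for the sketch.
public class TimerClientSketch {
    public static void main(String[] args) throws Exception {
        final int MEASUREMENTS = 100;          // 100 Timer Client sessions per Server instance
        final byte[] payload = new byte[100];  // 100-byte message case; 1024 for the 1 KB case
        long[] times = new long[MEASUREMENTS];

        for (int i = 0; i < MEASUREMENTS; i++) {
            long start = System.nanoTime();
            try (Socket s = new Socket("server-host", 8888)) {   // hypothetical address
                OutputStream out = s.getOutputStream();
                InputStream in = s.getInputStream();
                out.write(payload);
                out.flush();
                byte[] reply = new byte[payload.length];
                int read = 0;                                    // read the full echoed reply
                while (read < reply.length) {
                    int n = in.read(reply, read, reply.length - read);
                    if (n == -1) break;
                    read += n;
                }
            }
            times[i] = (System.nanoTime() - start) / 1_000_000;  // record millis per session
        }
        for (long t : times) System.out.println(t);
    }
}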

The raw results from this benchmark are given below.

Micro 1-1. Mean response time and response time standard deviation (millis) of the JT vs. JE vs. ST vs. SE microbenchmark Servers for message size 100 Bytes.
Server Response Time (mean).

Server Response Time (standard deviation).

Micro 1-2. Mean response time and response time standard deviation (millis) of the JT vs. JE vs. ST vs. SE microbenchmark Servers for message size 1 KB.

Server Response Time (mean).

Server Response Time (standard deviation).


The complete results from Micro Benchmark 1 can be downloaded from here: zip. This includes the raw data (.txt) output by the benchmark applications, a parser for the raw results (an awk script), and the parsed results (.csv).

Back to the top (for the benchmark environment description) or here for the raw data files for this benchmark.


Macro Benchmark 1. Throughput performance of the multithreaded SJ SMTP Server against the event-driven SJ SMTP Server.

In this benchmark, all Load Clients engage in non-terminating, i.e. repeatedly looping, SMTP sessions with the Server, sending fixed messages of size 1 KB. No Timer Clients are involved. The throughput at the Server was measured over a series of 100 "windows" of length 15 millis, under load from 10, 100, 300, 500, 700 and 900 Load Clients. The entire experiment was repeated 10 times.
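
The sketch below illustrates this kind of windowed throughput measurement. The window length and number of windows follow the description above; the simulated worker thread stands in for the SJ Server's own message handling, which is not shown and is an assumption of the sketch.

import java.util.concurrent.atomic.AtomicLong;

// Illustrative windowed throughput measurement: count messages handled in a series
// of fixed-length windows and convert each count to a messages-per-second figure.
public class ThroughputWindowSketch {
    static final AtomicLong handled = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        // Simulated load: in the real benchmark, the Server would increment the
        // counter each time an SMTP message is fully processed.
        Thread worker = new Thread(() -> {
            while (true) { handled.incrementAndGet(); }
        });
        worker.setDaemon(true);
        worker.start();

        final int WINDOWS = 100;
        final long WINDOW_MILLIS = 15;
        for (int w = 0; w < WINDOWS; w++) {
            long before = handled.get();
            Thread.sleep(WINDOW_MILLIS);                      // one measurement window
            long count = handled.get() - before;
            double perSecond = count * (1000.0 / WINDOW_MILLIS);
            System.out.println("window " + w + ": " + perSecond + " messages/s");
        }
    }
}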

The raw results from this benchmark are given below.

Macro 1-1. Mean throughput (messages handled per second) of the multithreaded SJ SMTP Server against the event-driven SJ SMTP Server.
Throughput (mean).

Macro 1-2. Standard deviation in the throughput performance (messages handled per second) of the multithreaded and event-driven SJ SMTP Servers.
Throughput (standard deviation).

The complete results from Macro Benchmark 1 can be downloaded from here: tar.gz. This includes the raw data (.txt) output by the benchmark applications, a parser for the raw results (an awk script), and the parsed results (.csv).

Back to the top (for the benchmark environment description) or here for the raw data files for this benchmark.


Benchmark Source Code

The complete benchmark source code and execution scripts are available for download. The benchmark scripts are written in Python. The macro benchmarks presented in this online appendix use the same SJ SMTP server implementations as described in the above paper.


Back to the top or to the main page.