Runtime verification is the research field concerned with monitoring applications at runtime. In this tutorial you will learn how to use TeSSLa's C code transformation tools to instrument your C code, so that you can obtain events from the running application, and how to verify such an event stream with TeSSLa.
We assume that you are already familiar with TeSSLa, e.g. from reading the TeSSLa tutorial.
Several steps are required in order to verify your C program at runtime with TeSSLa: the C code is instrumented according to the annotations in the specification, compiled, executed, and the resulting trace is monitored.
The TeSSLa standard library contains several annotations that can be used to annotate input streams. These annotations are then used to instrument the C code of your application under test.
As a first example let's consider the following very basic C program:
void foo() {
  int x = 42;
}

int main() {
  for (int i = 0; i < 5; i++) {
    foo();
  }
  return 0;
}
We can now declare calls of the function foo as events of interest by annotating an input stream of a TeSSLa specification as follows:

@InstFunctionCall("foo")
in foo: Events[Unit]

With this annotation we get an event on the stream foo every time the function foo is called.
In the same manner one can declare events for other observations, e.g. function returns. Those events need not be of type Events[Unit], but can also carry data: for function calls you can specify which argument is of interest, and for function returns you can get the return value of the function.
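For example, for a hypothetical function bar, the first argument (index 0) and the return value could be captured as follows; the stream names firstArg and result are arbitrary:

```
@InstFunctionCallArg("bar", 0)
in firstArg: Events[Int]

@InstFunctionReturnValue("bar")
in result: Events[Int]
```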
You can find a complete list of supported annotations and how to use them in the documentation of the standard library.
You can follow along with the examples in this tutorial either in the web IDE or on your local computer using the TeSSLa tools and your own C compiler.
You've already learned how to use the web IDE in the TeSSLa tutorial. In the upper left editor in the web IDE you can switch to C code in order to enter a C program instead of a fixed input trace. If you now click Run the C program will be instrumented and executed and the generated events will be fed to the TeSSLa specification entered in the upper right. Errors during that process will appear in the lower left and the final output will be in the lower right. You can visualize the output using the TeSSLa Visualizer in the lower right by switching from the tab TeSSLa Output to TeSSLa Visualization.
If you want to follow along on your local computer you need to perform the necessary steps manually. Assume you have a C file main.c with the minimal program shown above and a corresponding TeSSLa specification spec.tessla:
@InstFunctionCall("foo")
in foo: Events[Unit]
out foo
def num := count(foo)
out num
Now you can use the following commands in order to instrument, compile, run and monitor the program:
Instrument the C source code using the observation annotations defined in the TeSSLa specification:
java -jar tessla.jar instrumenter spec.tessla main.c /usr/lib/gcc/x86_64-linux-gnu/9/include/
After the TeSSLa specification and the C file we specify additional include paths for the C compiler. These are used to generate a proper compilation database for LLVM, which performs the instrumentation. What you need to specify here depends highly on your local setup and the headers used in your C file.
Compile the instrumented C code which was generated in the last step:
gcc main.c.instrumented.c -llogging -pthread -ldl -o main
This compilation uses the logging library, which is available on the TeSSLa download page and is assumed to be installed on your system.
Execute the compiled program:
./main
This will generate the file trace.log containing the generated trace.
Monitor the trace:
java -jar tessla.jar interpreter --base-time 1ns spec.tessla trace.log
The steps above perform offline monitoring. One can also do online monitoring by performing the last two steps in parallel: before starting the program under test, create a named FIFO trace.log and start the TeSSLa interpreter reading from that FIFO. Then start the program as shown above.
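These steps can be sketched as follows. To keep the sketch self-contained, cat stands in for the TeSSLa interpreter; with the real tools you would run the interpreter command shown above instead:

```shell
# Online monitoring via a named FIFO (sketch).
# 'cat' stands in for 'java -jar tessla.jar interpreter ...' so that this
# sketch runs without the TeSSLa tools installed.
mkfifo trace.log

# Start the monitor first; it blocks until the program writes to the FIFO.
cat trace.log > monitor.out &
MONITOR=$!

# Run the instrumented program; it writes its trace into the FIFO.
printf 'example event\n' > trace.log   # stand-in for ./main

wait "$MONITOR"
cat monitor.out
rm trace.log monitor.out
```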
For the sake of this tutorial all the necessary tools have been made available as a Docker image. This image includes a small shell script which performs all the steps described above. Please note, that this script is only intended for this tutorial and some first experiments. If you want to include TeSSLa into your build process then please follow the manual steps shown above.
Using the Docker image you can use the following command to perform all the steps and immediately see the monitoring output:
docker run -v $(pwd):/wd -w /wd --rm registry.isp.uni-luebeck.de/tessla/tessla-docker:2.0.0 rv spec.tessla main.c
The effect of the parameters is as follows:
| Parameter | Description |
|---|---|
| `docker run` | Create a new docker container from a given image and start that container. |
| `-v $(pwd):/wd` | Mount your current directory as `/wd` into the container. |
| `-w /wd` | Set the directory `/wd` as working directory of the container. (Similar to doing `cd /wd` in the container.) |
| `--rm` | Delete the container when the started process terminates. Otherwise terminated docker containers will be kept. |
| `registry.isp.uni-luebeck.de/tessla/tessla-docker:2.0.0` | Name of the docker image used to create a new container. If a docker image is not available locally it will be loaded from the remote repository, which is the first part of the image's name. The part after the colon is the tag name, used to indicate the TeSSLa version you want to use. |
| `rv` | A small shell script inside the container which does all the work. You can learn more about the script's parameters by executing `rv --help` in the container. |
| `spec.tessla` | The TeSSLa specification. |
| `main.c` | One or more C programs which will be instrumented, compiled and executed. |
Instead of directly executing the rv script inside the container you can also start the container and attach an interactive terminal to a shell running in the container:
docker run -v $(pwd):/wd -w /wd --rm -ti registry.isp.uni-luebeck.de/tessla/tessla-docker:2.0.0
Note the additional flags -t and -i:
| Flag | Long Version | Description |
|---|---|---|
| `-t` | `--tty` | Allocate a pseudo-TTY |
| `-i` | `--interactive` | Keep STDIN open even if not attached |
Inside that shell you can now either run the rv script as described above or perform the necessary steps manually.
As a more realistic example, consider a program whose runtime behavior we want to measure:

#include <stdlib.h>
#include <unistd.h>

void compute() {
  int duration = 40000;
  duration += (rand() % 10) * 1000;
  usleep(duration);
}

int main() {
  for (int i = 0; i < 10; i++) {
    compute();
  }
}
The following specification measures the runtime of compute, i.e. the time between each call and the corresponding return, as well as the maximum and the average runtime:

@InstFunctionCall("compute")
in call: Events[Unit]

@InstFunctionReturn("compute")
in ret: Events[Unit]

def duration := runtime(call, ret)
out duration
out maximum(duration) as max
out average(duration) as avg
We can also verify computed values. The following program prints some sums; note that the last call overflows a 32-bit int:

#include <stdio.h>
#include <unistd.h>

int add(int a, int b) {
  return a + b;
}

int main() {
  printf("%i\n", add(2, 3));
  printf("%i\n", add(17, 4));
  printf("%i\n", add(2000000000, 1000000000));
}
The following specification recomputes the sum in TeSSLa and compares it with the value actually returned:

@InstFunctionCallArg("add", 0)
in a: Events[Int]

@InstFunctionCallArg("add", 1)
in b: Events[Int]

@InstFunctionReturnValue("add")
in r: Events[Int]

def should = last(a + b, r)
def ok = r == should

out a
out b
out r
out should
out ok
Instrumentation also works for multi-threaded programs:

#include <pthread.h>

void foo() {}

void *task(void *arg) {
  foo();
  foo();
  foo();
  return NULL;
}

int main() {
  pthread_t t1, t2;
  pthread_create(&t1, NULL, &task, NULL);
  pthread_create(&t2, NULL, &task, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  return 0;
}
Using the @ThreadId annotation we can additionally observe which thread produced an event:

@InstFunctionCall("foo")
in foo: Events[Unit]

@ThreadId
in tid: Events[Int]

out foo
out tid

An event on tid is only generated when some other event is generated. Corresponding events always carry the exact same timestamp, similar to events indicating function calls and events carrying the argument values of those calls.
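Since a tid event always carries the same timestamp as the event it accompanies, the two streams can be combined directly. As a sketch, using the on operator from the TeSSLa standard library (which samples its second argument at events of the first, as in the lock analysis below), the thread id of each call of foo could be obtained like this:

```
@InstFunctionCall("foo")
in foo: Events[Unit]

@ThreadId
in tid: Events[Int]

def fooThread = on(foo, tid)
out fooThread
```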
As a final example, consider two threads accessing shared memory, where each cell is protected by its own mutex:

#include <pthread.h>
#include <unistd.h>

int shared_memory[4] = {0};
pthread_mutex_t locks[4] = {
  PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
  PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

void use(int index) {
  shared_memory[index]++;
}

void *task1(void *arg) {
  for (int i = 0; i < 4; i++) {
    pthread_mutex_lock(&locks[i]);
    use(i);
    pthread_mutex_unlock(&locks[i]);
  }
  return NULL;
}

void *task2(void *arg) {
  for (int i = 3; i >= 0; i--) {
    pthread_mutex_lock(&locks[i]);
    use(i);
    pthread_mutex_unlock(&locks[i]);
  }
  return NULL;
}

int main() {
  pthread_t t1, t2;
  pthread_create(&t1, NULL, &task1, NULL);
  pthread_create(&t2, NULL, &task2, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  return 0;
}
The following specification implements a lockset analysis, tracking which locks each thread holds and which locks were consistently held during all accesses to each resource:

@InstFunctionCallArg("pthread_mutex_lock", 0)
in lock: Events[Int]

@InstFunctionCallArg("pthread_mutex_unlock", 0)
in release: Events[Int]

@InstFunctionCallArg("use", 0)
in access: Events[Int]

@ThreadId
in tid: Events[Int]

def locksOfThread = {
  def oldMap = last(map, tid)
  def oldLocks = Map.getOrElse(oldMap, tid, Set.empty[Int])
  def map: Events[Map[Int, Set[Int]]] = merge3(
    on(lock, Map.add(oldMap, tid, Set.add(oldLocks, lock))),
    on(release, Map.add(oldMap, tid, Set.remove(oldLocks, release))),
    Map.empty[Int, Set[Int]])
  map
}

def locksForResource = {
  def old = last(map, access)
  def currentLocks = Map.get(locksOfThread, tid)
  def map: Events[Map[Int, Set[Int]]] = merge(
    on(access, Map.add(old, access, Set.intersection(
      currentLocks,
      Map.getOrElse(old, access, currentLocks)))),
    Map.empty[Int, Set[Int]])
  map
}

def error = unitIf(Set.size(Map.get(locksForResource, access)) == 0)
out error

The stream locksOfThread maps every thread to the set of locks it currently holds; locksForResource maps every memory cell to the intersection of the locksets of all accesses to that cell so far. If that intersection becomes empty, no single lock consistently protects the cell and an error event is generated. For the program above each access to shared_memory[i] is protected by locks[i], so no error should be reported.