Mar 13, 2014
 

PROWESS Midterm Review Meeting

8 May 2014 – Borås

To be of value to the European software industry, the work of the PROWESS
project needs not only to be grounded in state-of-the-art research, but also to
be informed by industrial practice. This will allow the techniques and tools
built by the project to be applicable to realistic problems in the provision of
open web services, and to be integrated into the work flow of the practising
software developer.

The PROWESS project runs a series of industrial pilot studies, and we invite
academics and industrialists with an interest in the work of the project to this
meeting. This will allow project members to get early, informed, disinterested
feedback on the project by presenting it to the attendees, as well as allowing
members of the project to hear about related developments from external
attendees.

Location

The meeting will be held in Borås at the Pulsen Konferens meeting centre. The
address is:

Pulsen Konferens
Kyrkängsgatan 8
Borås, Sweden
 

Directions to the conference centre are available on the Pulsen Konferens website. It is also possible to take
the bus from Landvetter Airport to Borås: “Swebus Landvetter – Borås”

Note that tickets can only be bought online or over the phone with a Scandinavian credit card. Tickets can be bought with other cards at Swebus offices, at Pressbyrån, and at www.sj.se

Programme

Morning

- 09:00-10:00 : Overview of the project – John Derrick

- 10:00-10:20 : Case study: a web service for administering the VoDKATV platform – Miguel A. Francisco

VoDKATV is an IPTV Cloud Middleware Architecture that integrates several
service provider subsystems to provide customers with an advanced multi-screen
media experience. VoDKATV is composed of several components, using web
services for the integration of many of them. This makes VoDKATV a good case
study to use the new tools developed in the PROWESS project.

- 10:30-11:30 : Property-based testing for web services – an introduction – John Hughes

This talk will introduce property-based testing for stateful systems, such as
web services, based on modelling the service state abstractly, then modelling
each operation via a pre-condition, a post-condition, and an abstract state
transition. Examples will be drawn from the VoDKATV web service.
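
To make the modelling approach concrete, here is a minimal, hand-written sketch of such a state machine model in Erlang QuickCheck (eqc_statem) style. The vodkatv_client module, the add_channel/remove_channel operations and the channel-set state are hypothetical stand-ins for illustration, not the actual VoDKATV API.

    %% A minimal eqc_statem sketch: the model state is an abstract set of
    %% channel ids, and each operation has a pre-condition, a post-condition
    %% and an abstract state transition.
    -module(service_eqc).
    -include_lib("eqc/include/eqc.hrl").
    -include_lib("eqc/include/eqc_statem.hrl").
    -compile(export_all).

    %% Abstract model of the service state: the ids of existing channels.
    initial_state() -> [].

    %% Generate the next symbolic command from the current model state.
    command(Channels) ->
        oneof([{call, vodkatv_client, add_channel, [nat()]}] ++
              [{call, vodkatv_client, remove_channel, [elements(Channels)]}
               || Channels /= []]).

    %% Pre-condition: only remove channels the model says exist.
    precondition(Channels, {call, _, remove_channel, [Id]}) ->
        lists:member(Id, Channels);
    precondition(_Channels, _Call) ->
        true.

    %% Abstract state transition for each operation.
    next_state(Channels, _Res, {call, _, add_channel, [Id]}) ->
        lists:usort([Id | Channels]);
    next_state(Channels, _Res, {call, _, remove_channel, [Id]}) ->
        lists:delete(Id, Channels).

    %% Post-condition: the service's answer must agree with the model.
    postcondition(_Channels, {call, _, add_channel, [_Id]}, Res) ->
        Res == ok;
    postcondition(_Channels, {call, _, remove_channel, [_Id]}, Res) ->
        Res == ok.

    %% The property: random command sequences keep model and service in sync.
    prop_service() ->
        ?FORALL(Cmds, commands(?MODULE),
                begin
                    {_History, _State, Result} = run_commands(?MODULE, Cmds),
                    Result == ok
                end).

Running eqc:quickcheck(prop_service()) would then exercise the service with random command sequences and shrink any failing sequence to a minimal counterexample.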

Afternoon

The afternoon is reserved for demonstrations that last 15 minutes each.

- Inference of state machines from QuickCheck traces – Kirill Bogdanov

A QuickCheck state machine can be seen as a low-level model of a system under
test. Work based on the Statechum tool makes it possible to infer a
higher-level state machine from QuickCheck traces. This demo will use the
VoDKATV web service to show how information can be extracted from traces and
then synthesized into a state machine.

- Automating Property-based Testing of Evolving Web Services – Huiqing Li and Laura Castro

We demonstrate a set of tools that automate many aspects of property-based
testing of web services. From a WSDL description, we can generate initial
runnable test code, generators for random test data, as well as a set of
modules to provide the appropriate infrastructure for property-based testing
in this domain.

From WSDL descriptions of two versions of a web service we are able to infer
the difference between the two, and to generate refactorings that help testers
to evolve their code in sync with the web service itself.
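
As a flavour of what the generated test data might look like, the following hand-written QuickCheck sketch builds random payloads for a hypothetical findUser operation; the operation, its fields and the ws_client stub are invented for the example rather than produced by the tools being demonstrated.

    %% A sketch of the kind of random data generator one would derive from a
    %% WSDL description of a findUser operation (hypothetical schema).
    -module(wsdl_gen_sketch).
    -include_lib("eqc/include/eqc.hrl").
    -compile(export_all).

    %% Generator for a request whose schema declares a userId (xsd:int)
    %% and a userName (xsd:string).
    find_user_request() ->
        ?LET({Id, Name},
             {nat(), non_empty(list(choose($a, $z)))},
             [{userId, Id}, {userName, Name}]).

    %% Property skeleton exercising the operation with generated requests;
    %% ws_client:call_find_user/1 stands in for a generated client stub.
    prop_find_user_accepts_valid_requests() ->
        ?FORALL(Req, find_user_request(),
                case ws_client:call_find_user(Req) of
                    {ok, _Response}  -> true;
                    {error, _Reason} -> false
                end).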

- Fault injection – Benjamin Vedder

Web services can be complex software products that need to implement a lot of
fault handling code. In normal test scenarios most of this fault handling code
is not triggered, since it only comes into play when something goes seriously
wrong outside the software under test. To know whether the fault handling code
works, one needs to trigger these fault handling mechanisms by using fault
injection. Fault injection techniques are well known in the area of safety
analysis, and in this project we learn from low-level fault injection in C to
find out how fault injection and QuickCheck can be used together. In this demo
we show an example of fault injection in C code to demonstrate how QuickCheck
and fault injection go hand in hand.
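
As a rough Erlang analogue of the C example, the sketch below injects a fault with the meck mocking library so that a hypothetical backend module always times out, forcing the fault handling path of an equally hypothetical API module to be exercised.

    %% Fault injection sketch: make vodkatv_db:lookup/1 fail so that the
    %% fault handling code in vodkatv_api (both modules hypothetical) runs.
    -module(fault_injection_sketch).
    -export([timeout_is_handled_gracefully/0]).

    timeout_is_handled_gracefully() ->
        ok = meck:new(vodkatv_db, [non_strict]),
        %% Inject the fault: every lookup now fails with a timeout.
        meck:expect(vodkatv_db, lookup, fun(_Key) -> {error, timeout} end),
        try
            %% The service should degrade gracefully rather than crash.
            {error, service_unavailable} = vodkatv_api:get_channel(42),
            ok
        after
            meck:unload(vodkatv_db)
        end.

A QuickCheck property can wrap such a check and vary which fault is injected and when, which is where fault injection and random testing meet.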

- More-bugs – how not to find the same bug over and over again – Ulf Norell

A problem that random testing suffers from, and that does not come up when you
have a fixed test suite, is that an easy-to-find bug will keep being found
instead of testing moving on to other bugs. There are various ways to deal with
this, but they all involve manual work: you can fix the bug, you can model the
bug, or you can avoid generating the bug during testing. More-bugs is an
automated tool for the latter approach. The key idea is to keep previously
found bugs around and avoid generating test cases which are instances of one of
these bugs. The hard part is to figure out what it means for a test case to be
an instance of a found bug, and to decide this efficiently. We have done this
for QuickCheck state machines and applied it successfully to industrial
automotive and telecom applications.
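
A simplified sketch of the check the approach needs: treat a freshly generated test case as an instance of a known bug if a previously minimised failing command sequence occurs inside it as a subsequence. The real tool's notion of an instance is more refined; the module below only illustrates the idea.

    %% Sketch: is a new command sequence an instance of a known bug?
    -module(more_bugs_sketch).
    -export([instance_of_known_bug/2]).

    %% True if NewCmds contains any known minimised failing sequence
    %% as a subsequence.
    instance_of_known_bug(NewCmds, KnownBugs) ->
        lists:any(fun(Bug) -> subsequence(Bug, NewCmds) end, KnownBugs).

    %% subsequence(Small, Big): Small appears in Big, in order, possibly
    %% with other commands interleaved.
    subsequence([], _Big) ->
        true;
    subsequence(_Small, []) ->
        false;
    subsequence([C | Small], [C | Big]) ->
        subsequence(Small, Big);
    subsequence(Small, [_ | Big]) ->
        subsequence(Small, Big).

A generator can then discard or regenerate any candidate sequence for which this check succeeds, so the testing effort moves on to previously unseen failures.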

- A Property-based Load Testing Framework – Diana Corbacho and Clara Benac Earle

In this demo, we will show how property-based testing (PBT) can be applied to
load testing web services in the cloud. The demo consists of three parts:

1. An introduction to Megaload, the cloud-based load testing tool developed in
this project.
2. An illustration of the integration between Megaload and PBT by means of an
example (the VoDKATV set-top box interface).
3. A live demo in a cloud environment displaying the PBT findings and Megaload
testing features.

- Smother: Extended code coverage metrics for Erlang – Ramsay Taylor

Smother is a tool developed by the University of Sheffield to provide MC/DC
analysis of Erlang programs. This demonstration will use a small component of
the VoDKATV system, provided by Interoud, together with a test suite to
demonstrate how Smother not only provides an assessment of the coverage of a
test suite, but can also be used to explore the coverage in more detail and
to identify particular program paths that should be covered with new tests.
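
To see why MC/DC is stricter than plain decision coverage, consider the small hypothetical Erlang function below, whose guard contains two conditions; the comments list the test vectors MC/DC requires so that each condition is shown to independently affect the outcome.

    -module(mcdc_sketch).
    -export([can_stream/2]).

    %% Hypothetical rule: a subscriber may stream only if the account is
    %% active and fewer than three devices are already in use.
    can_stream(Active, DevicesInUse) when Active andalso DevicesInUse < 3 ->
        allow;
    can_stream(_Active, _DevicesInUse) ->
        deny.

    %% Decision coverage is satisfied by one allow and one deny case, but
    %% MC/DC needs each condition to flip the decision on its own:
    %%   can_stream(true, 1)  -> allow   (both conditions true)
    %%   can_stream(false, 1) -> deny    (only Active flipped)
    %%   can_stream(true, 5)  -> deny    (only the device limit flipped)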

- Automatic complexity analysis – Nick Smallbone

Computational complexity (e.g. O(n^2), O(n log n)) concisely summarises the
performance of an algorithm. I will demonstrate a tool that tries to infer
the complexity of an algorithm by using testing. Complexity testing
complements load testing: its goal is to find performance bugs early, by
testing small parts of the system on a small scale, and to expose problems
that may only appear under specific usage patterns.
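
The underlying observation can be sketched in a few lines of Erlang: time a function on a range of input sizes and look at how the measurements grow. Here lists:sort/1 and the chosen sizes are illustrative only; the actual tool fits candidate complexity functions to such measurements automatically.

    %% Measure how running time grows with input size.
    -module(complexity_sketch).
    -export([measure/0]).

    measure() ->
        Sizes = [1000, 2000, 4000, 8000, 16000],
        [begin
             Input = [rand:uniform(1000000) || _ <- lists:seq(1, N)],
             {Micros, _Sorted} = timer:tc(lists, sort, [Input]),
             io:format("n = ~p  time = ~p us~n", [N, Micros]),
             {N, Micros}
         end || N <- Sizes].

Doubling the size and checking whether the time roughly doubles, slightly more than doubles, or quadruples distinguishes linear, n log n and quadratic growth.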

The workshop will end at 17:00.

Contact

For questions about the midterm review meeting you can contact:

Alex Gerdes, alex.gerdes@quviq.com
