Smoked Parrot

This is part of a series of articles I started with Quality Assurance and Automated Testing in Open Source Software.

Parrot is a virtual machine for dynamic languages such as PHP, Perl 5, Python, Ruby, Scheme, and Tcl, with its main focus on Perl 6.

As one day it is going to replace the engine of Perl 5, it must run on the 50 or so platforms where Perl 5 currently runs, and it should also be tested on those platforms. Therefore it should be easy to set up a smoke test environment and to report the test results.

Development

Parrot is written in C.

For version control the developers use Subversion. You can find instructions on how to get the source code in the download area of their web site. Links to other development- and testing-related pages can be found under resources.

Testing

At the time of this writing there were 7380 unit tests written for Parrot and 3118 unit tests for the language implementations. As they don't have an automatic way to report test coverage, the team provides a document describing the level of coverage for each subsystem on the Parrot Testing Status page.

Similar to the standard in Perl, tests can be found in the t/ directory in the source code.
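
As is customary in Perl projects, the whole suite can be run with make test once the build is done; assuming the usual t/harness setup, a single test file can also be run on its own (the file name here is only an example):

make test
perl t/harness t/op/basic.t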

Tests are written on several levels and in several languages: there are tests written in C, PIR (Parrot Intermediate Representation), PASM (Parrot Assembly), and Perl 5.

A guide on how to write tests for Parrot can be found in the documentation of Parrot.
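
To give a flavor of such a test, here is a minimal sketch of a Perl 5 level test using the Parrot::Test module; the PASM snippet and the test description are made up for illustration:

use strict;
use warnings;
use lib 'lib';                 # find Parrot::Test when run from the source tree
use Parrot::Test tests => 1;

# compile and run the PASM snippet, then compare its output to the expected text
pasm_output_is( <<'CODE', <<'OUTPUT', 'print a constant string' );
    print "hello\n"
    end
CODE
hello
OUTPUT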

In addition, tests for the various language implementations, that is, for the compilers from language X to Parrot Assembly, are usually written in the respective language and can usually be found in their own directories under languages/.

The output of all the tests follows TAP, the Test Anything Protocol used in Perl 5 and in various other languages.
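
A TAP stream is plain text: a plan line followed by one ok or not ok line per assertion, optionally with SKIP and TODO directives. A made-up example:

1..3
ok 1 - addition opcode works
ok 2 - string repetition # SKIP not relevant on this platform
not ok 3 - incremental GC # TODO not implemented yet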

Smoke testing

It is extremely easy to get involved in smoke testing Parrot. You will only need a C compiler and a recent version of Perl 5 installed, along with [dist://Test::TAP::HTMLMatrix] and its prerequisites from CPAN.
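
The module and its prerequisites can be installed with the usual CPAN shell one-liner:

perl -MCPAN -e 'install Test::TAP::HTMLMatrix'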

Results of the Parrot smoke tests can be found following the above link. The reports are automatically posted there when you run make smoke.

The exact instructions to run and submit smoke test reports are these:

perl Configure.pl
make
make smoke

Obviously one has to check out the latest version from SVN first:

svn co http://svn.perl.org/parrot/trunk parrot

and then update to the latest version every time before running the tests. My scheduled job looks like this:

cd /home/gabor/work/parrot
svn up
make clean
perl Configure.pl --cc=cc --cxx=CC --link=cc --ld=cc
make
make smoke
make languages-smoke
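
One way to schedule such a job is to put the above commands in a shell script and let cron invoke it regularly; a crontab entry along these lines does the trick (the script path is only an example):

0 2 * * * /home/gabor/bin/parrot_smoke.sh >> /home/gabor/parrot_smoke.log 2>&1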

The implementation of the smoke client and server can be found in tools/util/ in the files smokeserv-server.pl and smokeserv-client.pl; the accompanying documentation is in smokeserv-README.pod.

On the smoke report page one can see a table divided by platform. Within each platform you can see one row for each report. Each row is a summary of the results of one test run.

A sample row looks like this: Parrot 0.4.14 r20749 20 Aug 2007 20:03 Mon languages 11.40 min 99.23 % ok 3118: 3094, 24, 123, 29, 1 >> >>

The columns are:

  • Parrot version number (the latest release before the test run) (0.4.14)
  • revision number (from the Subversion revision count) (r20749)
  • Date when the test was executed (20 Aug 2007 20:03 Mon)
  • Flags used (languages)
  • Time the tests took (11.40 min)
  • Percentage of successful tests (99.23 % ok)
  • Total number of tests in this test run (3118)
  • Success (3094), failed (24), TODO (123), skipped (29), unexpected success (1) - the first two add up to the 3118 total
  • The remaining two >> signs lead us to the more detailed reports

Potential flags can be looked up in the Parrot source code by typing perldoc Configure.pl. Specifically, languages means that we are testing some of the language implementations, hence the total number of tests is only 3118.

In the details, the meanings of Success and Failed are obvious.
TODO marks tests that were written before the feature they exercise was implemented, so we expect those tests to fail.
Unexpected success means that a test marked as TODO (that is, expected to fail) succeeded. This can happen in various cases, for example when someone implements a feature but forgets to update the test.
Tests marked as skipped are either platform dependent (and the test run happens on a platform where they are not relevant) or have some missing prerequisites.
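
These categories come straight from Perl's testing tools; in Test::More a TODO test and a skipped test look roughly like this (the reasons and assertions are invented):

use strict;
use warnings;
use Test::More tests => 2;

# a test that is expected to fail until the feature is implemented
TODO: {
    local $TODO = 'feature not implemented yet';
    ok( 0, 'expected failure for now' );
}

# a test that only makes sense on some platforms
SKIP: {
    skip 'only relevant on Linux', 1 if $^O ne 'linux';
    ok( 1, 'platform specific check' );
}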

In the Parrot test report you can see filenames on the left side. The test suite is organized into several files.

Each file contains one or more unit tests (or assertions, or ok calls). Each rectangle next to the file name represents one such unit test. The size of the rectangles does not matter - they differ only to fit the screen. What is interesting is the number of rectangles. On the one hand, in scheme/t/io/basict.t (4th row) you can see 2 rectangles, meaning there are two assertions in the file; on the other hand, scheme/t/arith/logic.t has many small rectangles, meaning it has many (I counted 42) units.

The colors mean:

  • Green: ok
  • Dark green: TODO tests that failed as expected
  • Light green: skipped tests
  • Red: failure
  • Yellow: TODO tests that were unexpectedly successful

Placing the mouse over one of the rectangles, you'll see a tool-tip showing the exact name of the assertion, which usually explains what that test does. In the case of TODO and skipped tests it usually also says why the test is marked TODO or why it was skipped. (Obviously this only works on the real report and not on this image.)

On the right side of each row you can see a small summary of that file: the percentage of its tests that were successful.

Wish list

What I am missing from this is a report where I could see, for each assertion, on which platforms it was successful and on which it failed. I would also like to see some statistics on how the number of tests and the success/failure rate changed throughout the development, as well as an aggregated report from all the platforms.