The contents of the zipped file are:

README.txt
==========
This file.

bmvc_abstract.pdf
=================
One-page abstract of the paper.

bmvc_supplementary.pdf
======================
PDF file presenting extended results of the paper. Due to lack of space in the main paper, the sensitivity analysis of performance with respect to key parameters, along with some sample frames, is discussed here. The file is roughly 3.4 MB in size.

video1k.avi
===========
This video shows the results of the proposed algorithm. Because of space limitations, it covers a sequence of roughly one minute. The video was created with FFmpeg version SVN-r24910 and is encoded at 25 frames/second and 3000 kbps. To play it, type "ffplay video1k.avi" in a Linux terminal. Both "ffmpeg" and "ffplay" can be obtained at http://ffmpeg.org/ .

The APIDIS dataset consists of 7 cameras distributed around the basketball court. However, no single camera has a full view of the court. Therefore, images from two cameras (camera 1 and camera 6) have been stitched together to generate a virtual view that covers the whole field. As a result, some artifacts may be visible in the centre of the court.

At any time instant, the tracks are drawn in different colors. For visualization, a track segment of up to 50 frames is overlaid. Each player is annotated with the output of the identification module: a digit in a certain color (for example, a player wearing a yellow jersey with number 9 is labeled "9" in yellow). Note that when a player does not have a confirmed ID, its probability distribution is displayed instead. This is typically the case for the referees and spectators, who lack distinctive appearance features. The ground truth (GT) is available every second (i.e., every 25 frames). Whenever an identification is wrong, a red rectangle appears; green rectangles indicate correct recognition. Misses are easy to distinguish, as they are generally accompanied by blue/yellow text near the target.
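The overlay logic described above (50-frame track segments, per-second GT checks, green/red marks) can be sketched as follows. This is a minimal, hypothetical illustration: the class and names (TrackOverlay, TRAIL_LEN, GT_PERIOD) are illustrative, not the actual implementation used to render the video.

```python
from collections import defaultdict, deque

# Illustrative constants taken from the description above (not from real code):
TRAIL_LEN = 50   # draw a track segment of up to 50 frames
GT_PERIOD = 25   # ground truth is available every second, i.e. every 25 frames

class TrackOverlay:
    """Hypothetical sketch of the per-frame annotation bookkeeping."""

    def __init__(self):
        # Each track keeps only its most recent TRAIL_LEN positions.
        self.trails = defaultdict(lambda: deque(maxlen=TRAIL_LEN))

    def update(self, frame_idx, detections, gt=None):
        """detections: {track_id: (x, y, predicted_jersey_number)};
        gt: {track_id: true_jersey_number}, valid on GT frames only.
        Returns {track_id: "green" | "red"} for tracks checked this frame."""
        marks = {}
        for tid, (x, y, pred) in detections.items():
            self.trails[tid].append((x, y))
            # Compare against ground truth only on frames where GT exists.
            if gt is not None and frame_idx % GT_PERIOD == 0 and tid in gt:
                marks[tid] = "green" if pred == gt[tid] else "red"
        return marks

# Usage: on a GT frame, a matching ID would be marked green.
overlay = TrackOverlay()
marks = overlay.update(0, {1: (10, 20, 9)}, gt={1: 9})
```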
