SkePU

Autotunable Multi-Backend Skeleton Programming Framework for Multicore CPU and Multi-GPU Systems


Skeleton programming is an approach where an application is written with the help of "skeletons". A skeleton is a pre-defined, generic component, such as map, reduce, scan, farm, or pipeline, that implements a specific, commonly recurring pattern of computation and data dependence, and that can be customized with (sequential) user-defined code parameters.
Skeletons provide a high degree of abstraction and portability with a quasi-sequential programming interface, as their implementations encapsulate all low-level and platform-specific details such as parallelization, synchronization, communication, memory management, accelerator usage and other optimizations.

SkePU is an open-source skeleton programming framework for multicore CPUs and multi-GPU systems.
It is a C++ template library with six data-parallel and one task-parallel skeletons, two generic container types, and support for execution on multi-GPU systems with both CUDA and OpenCL.


Overview

SkePU comes in two different distributions:
(1) as a stand-alone version, which includes an off-line autotuning framework that prepares for context-aware dynamic selection of the expected fastest implementation variant at each skeleton call, and
(2) as a version integrated with the StarPU runtime system, which provides support for hybrid CPU+GPU execution, performance-aware dynamic scheduling, and load balancing.


Source Code and Documentation

Current Release

  1. Standalone SkePU: Version 1.1.1 (last release: 16/05/2014, latest patch: 26/09/2014):
    Download source-code.
    Major new features:

    • adaptive off-line tuning mechanism for context-aware implementation selection at skeleton calls
      (described in our APPT-2013 paper and Chapter 3.4 of Usman Dastgeer's PhD thesis) and
    • new memory management mechanism for Vector and Matrix containers
      (described in our HLPP-2014 paper and Chapter 4 of Usman Dastgeer's PhD thesis).
    • Patch v1.1.1 (26/09/2014): tests for Smart Matrices updated for CUDA 6.0. A missing function (isMatrixOnDevice) has been restored. The rest of the distribution is the same as of 16/05/2014.
    See also the html documentation generated by doxygen.

  2. SkePU with StarPU integration: Version 0.8 (last updated: 06/11/2012):
    Download source-code.
    It contains seven data-parallel skeletons and one task-parallel (farm) skeleton for vector and matrix operands, with multi-GPU and hybrid execution support. It contains several enhancements compared to the previous release; see the 'CHANGES' file for more information.
    See also the html documentation generated by doxygen.
    Tested with StarPU 1.0.4, CUDA/nvcc 4.2, and GCC 4.7.1.

Previous Releases

  1. Version 0.6: (2010)
    The first public release of SkePU.
    Download source-code.
    It contains seven data-parallel skeletons for vector operands with multi-GPU support.
    See also the html documentation generated by doxygen.
  2. Version 0.7: (2011)
    Download source-code. It contains seven data-parallel skeletons for vector and (partial) matrix operands with efficient multi-GPU support on CUDA using a single host thread. See html documentation generated by doxygen.
  3. Version 1.0: (2012)
    Download source-code.
    It contains seven data-parallel skeletons for vector, dense and sparse matrix operands with efficient multi-GPU support on CUDA and OpenCL using a single host thread. Contains several enhancements in comparison to the previous release. See the 'CHANGES' file for more information.
    See also the html documentation generated by doxygen.
  4. Version 0.7 with StarPU integration: (2011)
    Download source-code.
    It contains seven data-parallel skeletons and one task-parallel (farm) skeleton for vector and matrix operands, with multi-GPU and hybrid execution support.
    See also the html documentation generated by doxygen.

Publications

Publications from 2010 to 2014, including the APPT-2013 and HLPP-2014 papers cited above and Usman Dastgeer's PhD thesis.


A code example

SkePU is a C++ template library targeting GPU-based systems that provides a higher level of abstraction to the application programmer. SkePU code is concise, elegant, and efficient. The following example computes the dot product of two input vectors using the MapReduce skeleton available in the SkePU library. In the code, a MapReduce skeleton instance (dotProduct) is created that maps pairs of input elements with mult_f and then reduces the results with plus_f, thereby instantiating a dot product function. Behind the scenes, the computation can run on a sequential CPU, on multicore CPUs, or on one or several GPUs, depending on the execution configuration and the platform used.

#include <iostream>

#include "skepu/vector.h"
#include "skepu/mapreduce.h"

// User functions are defined with SkePU's macro language, from which
// the framework generates variants for all back-ends (CPU, OpenMP,
// CUDA, OpenCL).
BINARY_FUNC(plus_f, double, a, b,
    return a+b;
)

BINARY_FUNC(mult_f, double, a, b,
    return a*b;
)

int main()
{
    // MapReduce skeleton instance: maps element pairs with mult_f,
    // then reduces the results with plus_f.
    skepu::MapReduce<mult_f, plus_f> dotProduct(new mult_f, new plus_f);

    skepu::Vector<double> v1(500, 4);  // 500 elements, each initialized to 4
    skepu::Vector<double> v2(500, 2);  // 500 elements, each initialized to 2

    double r = dotProduct(v1,v2);
    
    std::cout <<"Result: " << r << "\n";
    
    return 0;
}
// Output
// Result: 4000
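
Other skeletons follow the same pattern. As a further illustration, here is a global sum computed with the Reduce skeleton, written against the same v1.x interface as the example above (a minimal sketch; the header name skepu/reduce.h and the exact call signature should be checked against the doxygen documentation).

#include <iostream>

#include "skepu/vector.h"
#include "skepu/reduce.h"

// Same user function as in the dot product example.
BINARY_FUNC(plus_f, double, a, b,
    return a+b;
)

int main()
{
    // Reduce skeleton instance: folds a vector into a scalar with plus_f.
    skepu::Reduce<plus_f> globalSum(new plus_f);

    skepu::Vector<double> v(1000, 1);  // 1000 elements, all set to 1

    double r = globalSum(v);

    std::cout << "Sum: " << r << "\n";

    return 0;
}
// Output
// Sum: 1000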

Applications

Several test applications have been developed using the SkePU skeletons, including the Coulombic potential grid computation, N-body simulation, and SPH (smoothed particle hydrodynamics) discussed below.

Their source code is included in the SkePU distribution (v1.1).

Some Features

This section lists some results of tests conducted with several applications, highlighting different performance-related aspects. For a comprehensive description and evaluation of SkePU's features and implementation, see e.g. Chapters 3 and 4 of Usman Dastgeer's PhD thesis.


Multi-GPU Execution

SkePU has supported multi-GPU execution with both CUDA and OpenCL since its early versions.
With CUDA versions before 4.0, multi-GPU execution was rather inefficient due to the overhead of managing one host thread per GPU. From CUDA 4.0 on, all GPUs in the system can be used concurrently from a single host (CPU) thread, which makes multi-GPU execution with CUDA a viable option for many applications; SkePU's CUDA multi-GPU support was changed accordingly to use a single host thread. The diagram below compares SkePU execution on one and two GPUs using CUDA 4.0.

Multi-GPU execution: SkePU execution on one vs. two GPUs using CUDA 4.0
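
The enabling mechanism is plain CUDA: from version 4.0 on, one host thread can drive all GPUs by switching the current device and issuing asynchronous work to each. The sketch below uses only standard CUDA runtime API calls to illustrate this; it is not SkePU-internal code, scatter_work is a made-up name, and error handling, remainder handling, and deallocation are omitted.

// One host thread drives all GPUs (CUDA >= 4.0): select each device
// with cudaSetDevice and issue asynchronous copies/kernels to it.
#include <cuda_runtime.h>
#include <cstddef>

void scatter_work(const double* host, std::size_t n)
{
    int numDevices = 0;
    cudaGetDeviceCount(&numDevices);
    std::size_t chunk = n / numDevices;

    for (int d = 0; d < numDevices; ++d) {
        cudaSetDevice(d);                      // make GPU d current
        double* dev = nullptr;
        cudaMalloc(&dev, chunk * sizeof(double));
        // Asynchronous: the loop does not block, so all GPUs can
        // receive their partition and start computing concurrently.
        cudaMemcpyAsync(dev, host + d * chunk, chunk * sizeof(double),
                        cudaMemcpyHostToDevice);
        // ... launch the kernel for partition d here ...
    }
    for (int d = 0; d < numDevices; ++d) {
        cudaSetDevice(d);                      // wait for each device
        cudaDeviceSynchronize();
    }
}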

Auto-tuned context-aware back-end selection for skeleton calls

The stand-alone version of SkePU applies an off-line (deployment-time) machine-learning technique to predict at runtime the fastest back-end for a skeleton call, depending on the call context (in particular, operand sizes). This includes the choice among CPU implementations (sequential or with multiple OpenMP threads) and GPU implementations (CUDA or OpenCL, with one or several GPUs). Even the number of GPU threads and thread blocks can be tuned. See the documentation and publications for further details.
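
Conceptually, the result of the off-line tuning phase is a per-skeleton dispatch table that maps operand-size ranges to the back-end measured fastest for each range and that is consulted at every call. The sketch below illustrates this idea in plain C++; it does not show SkePU's actual tuner interface, and the enum values and size thresholds are invented for illustration.

// Conceptual illustration only (not SkePU's actual data structures):
// off-line tuning produces, per skeleton instance, a table mapping
// operand-size ranges to the empirically fastest back-end.
#include <cstddef>
#include <map>

enum class Backend { CPU_Seq, OpenMP, CUDA, OpenCL_Multi };

struct DispatchTable {
    // upper size bound -> back-end measured fastest up to that bound
    std::map<std::size_t, Backend> entries;

    Backend select(std::size_t problemSize) const {
        auto it = entries.lower_bound(problemSize);
        return it != entries.end() ? it->second : Backend::OpenCL_Multi;
    }
};

// Example table as the tuner might emit it: small calls stay on the
// CPU, larger ones go to the GPU back-ends.
DispatchTable makeExampleTable() {
    return { { { 1000,    Backend::CPU_Seq },
               { 50000,   Backend::OpenMP },
               { 1000000, Backend::CUDA } } };
}

With such a table, a call on 500-element operands would dispatch to the sequential CPU variant, while a call on a million elements would go to a GPU back-end.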

The StarPU-integrated version of SkePU instead delegates the back-end selection to the StarPU runtime system's built-in dynamic performance modeling and tuning mechanism.


Hybrid Execution

When using SkePU with StarPU support, SkePU can split the work of one or more skeleton executions into multiple tasks and distribute them efficiently over multiple computing devices (CPUs and GPUs). The diagram below shows the execution of a Coulombic potential grid application on a hybrid platform (CPUs and GPU) for different matrix sizes.

Hybrid execution: Coulombic potential grid execution on a hybrid platform (CPUs and GPU) for different matrix sizes
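
The idea behind hybrid execution can be pictured as partitioning the operand of one skeleton call across the available devices. The sketch below shows a fixed CPU/GPU split in plain C++ for illustration only; in the real system, StarPU chooses the split and schedules the resulting tasks dynamically, and cpuPart/gpuPart are hypothetical placeholders for the generated back-end code.

// Conceptual sketch of hybrid partitioning (not StarPU code): one
// skeleton call is split into a GPU part and a CPU part that run
// concurrently; here the split ratio is fixed instead of scheduled.
#include <cstddef>
#include <thread>

void hybrid_map(double* data, std::size_t n, double gpuShare,
                void (*cpuPart)(double*, std::size_t),
                void (*gpuPart)(double*, std::size_t))
{
    std::size_t nGpu = static_cast<std::size_t>(n * gpuShare);

    // The GPU partition is processed asynchronously while the CPU
    // cores work on the remainder of the operand.
    std::thread gpuWorker(gpuPart, data, nGpu);
    cpuPart(data + nGpu, n - nGpu);
    gpuWorker.join();
}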

Smart Containers

The diagram below shows a breakdown of the execution time of different SkePU single-skeleton computations coded in OpenCL into PCIe memory-transfer and kernel-computation times. It highlights the overhead of transferring data to (red) and from (green) the GPU in relation to the kernel's computational work (blue).

To eliminate unnecessary data transfers across multiple subsequent skeleton calls, SkePU provides "smart containers" for passing operands in generic STL-like data structures such as Vector and Matrix. A smart container internally caches recently accessed elements in the various device memories and reuses them in subsequent calls on the same device where applicable, optimizing operand communication at run time. In particular, it implements a "lazy memory copying" technique: written elements are transferred back from device memory only if the CPU accesses them after the call.

The smart-container concept and implementation were revised in SkePU v1.1 (2014) compared to earlier versions. The Vector and Matrix smart containers in SkePU v1.1 internally implement a variant of the MSI coherence protocol, providing sequential consistency (so device copies no longer need to be explicitly flush()ed in multi-GPU scenarios). For multi-GPU usage, they also support direct GPU-GPU transfer of coherence messages where available, and they use lazy deallocation of device copies to reduce memory-management overhead.
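
The following sketch illustrates the principle of lazy memory copying with validity flags in plain C++. It is a conceptual model only, assuming a single device copy; SkePU's actual Vector and Matrix containers track multiple device copies and full MSI coherence states.

// Conceptual sketch (not SkePU's container internals): the operand
// tracks where its valid copy lives; transfers happen only when the
// other side actually touches the data.
#include <cstddef>
#include <vector>

class SmartVectorSketch {
    std::vector<double> host_;
    double* device_ = nullptr;   // single device copy for simplicity
    bool hostValid_ = true;
    bool deviceValid_ = false;
public:
    explicit SmartVectorSketch(std::size_t n) : host_(n) {}

    // Called by a skeleton back-end before launching a GPU kernel:
    // upload only if the device copy is stale.
    double* device_ptr() {
        if (!deviceValid_) { upload(); deviceValid_ = true; }
        hostValid_ = false;      // assume the kernel writes the data
        return device_;
    }

    // Element access from CPU code: only now is the result copied
    // back from the device ("lazy memory copying").
    double& operator[](std::size_t i) {
        if (!hostValid_) { download(); hostValid_ = true; }
        deviceValid_ = false;    // the CPU may write
        return host_[i];
    }
private:
    void upload()   { /* cudaMemcpy / clEnqueueWriteBuffer here */ }
    void download() { /* cudaMemcpy / clEnqueueReadBuffer here */ }
};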

Smart containers can yield a significant performance gain over "normal" containers, especially for iterative computations such as N-body simulation or SPH, where we observed speedups of up to three orders of magnitude over naive operand transfer before and after each kernel invocation. Some speedup results for a system with 2 GPUs, averaged over many runs with different problem sizes, are shown below. For details, see the documentation and publications.

Overhead of memory transfer, in comparison to computations

Performance Comparison

The diagram below shows the performance of the dotProduct example computed with different back-ends, highlighting the performance differences between them. It also includes a comparison with CUBLAS; in this case, the Multi_OpenCL MapReduce implementation of SkePU outperforms all the others.

Dot product using the SkePU MapReduce skeleton, in comparison to a CUBLAS implementation

For more results, see the publications section.


Software License

SkePU is licensed under the GNU General Public License as published by the Free Software Foundation (version 3 or later). For more information, please see the license file included in the downloadable source code.


Ongoing work

SkePU is a work in progress. Future work includes support for more skeletons and containers, e.g. for sparse-matrix operations, for further task-parallel skeletons, and for other types of target architectures. For instance, there exists an experimental prototype with MPI back-ends that allows running SkePU programs on multiple nodes of an MPI cluster without modifying the source code (it is not included in the public distributions above).

If you would like to contribute, please let us know.


Contact

To report bugs, please email "<firstname> DOT <lastname> AT liu DOT se".

Acknowledgments

This work was partly funded by the EU FP7 projects PEPPHER and EXCESS, and by SeRC project OpCoReS.