SkePU - Frequently Asked Questions (FAQs)
What is SkePU?
SkePU is a skeleton programming framework for modern GPU-based systems. It is a research project hosted at Linköping University, Sweden. For more information about skeletons and SkePU, please visit the SkePU homepage.
What does SkePU stand for?
"Ske" stands for SKEletons and "PU" stands for Processing Unit. The motivation behind the name is that SkePU aims to provide skeletons for the multiple kinds of processing units (CPUs, GPUs) present in modern GPU-based systems.
How do you pronounce "SkePU"?
We usually say "Ski P.U.". Another fairly common pronunciation is "Ski Po".
How do you install SkePU?
You don't need to install it. SkePU is a C++ template include library: you use it by including the appropriate header files for the skeletons and containers that your program uses (see the examples folder for code samples), and you compile your program with normal C++/CUDA compilers. We mainly use:
- the GNU C++ compiler (g++) for compiling SkePU programs for sequential C++, OpenMP and OpenCL execution.
- the NVIDIA CUDA compiler (nvcc) for compiling SkePU programs for CUDA execution. One can of course also enable sequential C++ and OpenMP execution with nvcc (e.g., by passing '-Xcompiler -fopenmp' or something similar).
How do you compile SkePU programs?
Compilation is straightforward in most cases. The following are some possible compilation commands for enabling certain kinds of skeleton implementations:
Note: The compilation commands on your machine may look different depending on compiler versions and/or installation paths.
- Sequential C++ (Default backend, always available):
- g++ <filename>.cpp -I<path-to-skepu-include-folder>
- nvcc <filename>[.cu | .cpp] -I<path-to-skepu-include-folder>
- OpenMP:
- g++ <filename>.cpp -I<path-to-skepu-include-folder> -DSKEPU_OPENMP -fopenmp
- nvcc <filename>[.cu | .cpp] -I<path-to-skepu-include-folder> -DSKEPU_OPENMP -Xcompiler -fopenmp
- OpenCL:
- g++ <filename>.cpp -I<path-to-skepu-include-folder> -DSKEPU_OPENCL -I<path-to-opencl-include-folder> -lOpenCL
- Both OpenCL and OpenMP:
- g++ <filename>.cpp -I<path-to-skepu-include-folder> -DSKEPU_OPENMP -fopenmp -DSKEPU_OPENCL -I<path-to-opencl-include-folder> -lOpenCL
- CUDA:
- nvcc <filename>.cu -I<path-to-skepu-include-folder> -DSKEPU_CUDA
- Both CUDA and OpenMP:
- nvcc <filename>.cu -I<path-to-skepu-include-folder> -DSKEPU_CUDA -DSKEPU_OPENMP -Xcompiler -fopenmp
Why two different file extensions (".cpp" and ".cu") in SkePU programs?
There is a technical reason for that. Since SkePU is an include library, using the CUDA skeleton implementations (backend) includes CUDA code in your program. CUDA code is compiled with the nvcc compiler, which requires the ".cu" extension for any file containing CUDA code, since it internally invokes different compilers for CUDA and C++ code.
Can we use both CUDA and OpenCL skeleton implementations at the same time?
No! You cannot do that, as the nvcc compiler will complain about any OpenCL code it finds when compiling a file containing CUDA code.
How do you control the number of GPUs used by SkePU?
Use the "SKEPU_NUMGPU" flag. As with other flags, define it above any SkePU include headers. For example, "#define SKEPU_NUMGPU 2" asks SkePU to use 2 GPUs (if available). Setting it to "0" makes SkePU try to find all available GPUs and use them for skeleton calls. The default is "1".
How can I control OpenMP threads?
By default, SkePU will try to use as many OpenMP threads as possible. If you want a specific number of OpenMP threads, define the "SKEPU_OPENMP_THREADS" flag before any SkePU include headers. For example:
"#define SKEPU_OPENMP_THREADS 8"
How can I control CUDA/OpenCL threads?
If for some reason you need to control how many CUDA/OpenCL threads are used, you can do so by specifying the "SKEPU_MAX_GPU_BLOCKS" flag above any SkePU include headers. By default, SkePU tries to use the maximum number of threads possible. In some cases (e.g., when a user function uses too many registers) you may want to use fewer threads; this flag can help in those situations.
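Putting the flags together, here is a hedged sketch of flag placement. The flag names are the ones from this FAQ; the header names and the chosen values are placeholders, so substitute the headers your program actually uses:

```cpp
// All tuning flags must appear above any SkePU include headers.
#define SKEPU_NUMGPU 2            // use up to 2 GPUs (0 = use all available)
#define SKEPU_OPENMP_THREADS 8    // limit the OpenMP backend to 8 threads
#define SKEPU_MAX_GPU_BLOCKS 256  // example cap on GPU blocks, to reduce
                                  // thread usage (value chosen arbitrarily)

#include "skepu/vector.h"  // placeholder header names
#include "skepu/map.h"
```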