Posts tagged: ffi

Strange benchmarking results for FFI bindings

It looks like I am getting pretty good at getting hit by Haskell bugs. My previous post described behaviour that turned out to be a bug in GHC (thanks to Joachim Breitner for pointing this out). Now I have found problems with benchmarking FFI bindings using the method described a month ago.

I work on a project in which the same algorithm is implemented using different data structures – one implementation is done in C, another using the Vector library and yet another using Repa. Everything is benchmarked with Criterion, and the C implementation is the fastest one (look at the first value after mean – this is the mean time of running the function):

benchmarking DWT/C1
mean: 87.26403 us, lb 86.50825 us, ub 90.05830 us, ci 0.950
std dev: 6.501161 us, lb 1.597160 us, ub 14.81257 us, ci 0.950
 
benchmarking DWT/Vector1
mean: 209.4814 us, lb 208.8169 us, ub 210.5628 us, ci 0.950
std dev: 4.270757 us, lb 2.978532 us, ub 6.790762 us, ci 0.950

This algorithm uses a simpler lattice function that is applied a couple of times. I wrote benchmarks that measure the time needed by a single invocation of lattice:

benchmarking C1/Lattice Seq
mean: 58.36111 us, lb 58.14981 us, ub 58.65387 us, ci 0.950
std dev: 1.260742 us, lb 978.6512 ns, ub 1.617153 us, ci 0.950
 
benchmarking Vector1/Lattice Seq
mean: 34.97816 us, lb 34.87454 us, ub 35.14377 us, ci 0.950
std dev: 661.5554 ns, lb 455.7412 ns, ub 1.013466 us, ci 0.950

Hey, what’s this!? The Vector implementation is suddenly faster than C? That’s not possible, given that the DWT in C is faster than the DWT using Vector. After some investigation it turned out that the first C benchmark runs correctly, while subsequent benchmarks of C functions take a performance hit. I managed to create a simple program that demonstrates the problem in as few lines as possible. I implemented a copy function in C that takes an array and copies it to another array. Here’s copy.c:

#include <stdlib.h>
#include "copy.h"
 
double* c_copy( double* inArr, int arrLen ) {
  double* outArr = malloc( arrLen * sizeof( double ) );
 
  for ( int i = 0; i < arrLen; i++ ) {
    outArr[ i ] = inArr[ i ];
  }
 
  return outArr;
}

and copy.h:

#ifndef _COPY_H_
#define _COPY_H_
 
double* c_copy( double*, int );
 
#endif

I wrote a simple binding for that function and benchmarked it multiple times in a row:

module Main where
 
import Criterion.Main
import Data.Vector.Storable hiding (copy)
import Control.Monad (liftM)
import Foreign hiding (unsafePerformIO)
import Foreign.C
import System.IO.Unsafe (unsafePerformIO)
 
foreign import ccall unsafe "copy.h"
  c_copy :: Ptr CDouble -> CInt -> IO (Ptr CDouble)
 
signal :: Vector Double
signal = fromList [1.0 .. 16384.0]
 
copy :: Vector Double -> Vector Double
copy sig = unsafePerformIO $ do
    let (fpSig, _, lenSig) = unsafeToForeignPtr sig
    pLattice <- liftM castPtr $ withForeignPtr fpSig $ \ptrSig ->
                c_copy (castPtr ptrSig) (fromIntegral lenSig)
    fpLattice <- newForeignPtr finalizerFree pLattice
    return $ unsafeFromForeignPtr0 fpLattice lenSig
 
 
main :: IO ()
main = defaultMain [
         bgroup "FFI" [
           bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         , bench "C binding" $ whnf copy signal
         ]
       ]

Compiling and running this benchmark with:

$ ghc -O2 -Wall -optc -std=c99 ffi_crit.hs copy.c
$ ./ffi_crit -g

gave me these results:

benchmarking FFI/C binding
mean: 17.44777 us, lb 16.82549 us, ub 19.84387 us, ci 0.950
std dev: 5.627304 us, lb 968.1911 ns, ub 13.18222 us, ci 0.950
 
benchmarking FFI/C binding
mean: 45.46269 us, lb 45.17545 us, ub 46.01435 us, ci 0.950
std dev: 1.950915 us, lb 1.169448 us, ub 3.201935 us, ci 0.950
 
benchmarking FFI/C binding
mean: 45.79727 us, lb 45.55681 us, ub 46.26911 us, ci 0.950
std dev: 1.669191 us, lb 1.029116 us, ub 3.098384 us, ci 0.950

The first run takes about 17μs, later runs take about 45μs. I found this result repeatable across different runs, although in about 10-20% of runs all benchmarks – including the first one – took about 45μs. I obtained these results on GHC 7.4.1, openSUSE 64-bit Linux with a 2.6.37 kernel and an Intel Core i7 M 620 CPU. I posted this on Haskell-cafe and #haskell. Surprisingly, nobody could replicate the result! I was confused, so I gave it a try on my second machine: Debian Squeeze, 64-bit, GHC 7.4.2, 2.6.32 kernel, Intel Core 2 Duo T8300 CPU. At first the problem did not appear:

benchmarking FFI/C binding
mean: 107.3837 us, lb 107.2013 us, ub 107.5862 us, ci 0.950
std dev: 983.6046 ns, lb 822.6750 ns, ub 1.292724 us, ci 0.950
 
benchmarking FFI/C binding
mean: 108.1152 us, lb 107.9457 us, ub 108.3052 us, ci 0.950
std dev: 916.2469 ns, lb 793.1004 ns, ub 1.122127 us, ci 0.950

All benchmarks took about 107μs. Now watch what happens when I increase the size of the copied vector from 16K elements to 32K:

benchmarking FFI/C binding
mean: 38.50100 us, lb 36.71525 us, ub 46.87665 us, ci 0.950
std dev: 16.93131 us, lb 1.033678 us, ub 40.23900 us, ci 0.950
 
benchmarking FFI/C binding
mean: 209.9733 us, lb 209.5316 us, ub 210.4680 us, ci 0.950
std dev: 2.401398 us, lb 2.052981 us, ub 2.889688 us, ci 0.950

The first run is 2.5 times faster (!), while all the other runs are two times slower. While the latter could be expected, the former certainly is not.
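For reference, the only change needed to produce this run is the size of the signal vector in the benchmark module shown above – a sketch of the modified definition:

signal :: Vector Double
signal = fromList [1.0 .. 32768.0]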

So what exactly is going on? I tried analysing the eventlog of the program, but I wasn’t able to figure out the cause of the problem. I noticed that if I comment out the loop in the C function, so that it only allocates memory and returns an empty vector, then the problem disappears. Someone on Haskell-cafe suggested that these are cache effects, but I am sceptical about this explanation. If this is caused by the cache, then why did the first benchmark speed up when the size of the vector was increased? And why does this effect occur for 16K-element vectors on a machine with a 4MB cache, while the machine with a 3MB cache needs a vector twice as long for the problem to occur? So if anyone has a clue what causes this strange behaviour, please let me know. I would be happy to resolve it, since right now the results of my benchmarks are distorted (perhaps yours are too, only you haven’t noticed).
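For anyone who wants to experiment with this, one variation worth trying (a sketch on my part, not something I have measured) is to expose the binding as a plain IO action and benchmark that, taking unsafePerformIO and any potential sharing out of the picture. This reuses the imports and the c_copy declaration from the benchmark module above; copyIO is a name I made up for the variant:

-- the same binding without unsafePerformIO, benchmarkable as an IO action
copyIO :: Vector Double -> IO (Vector Double)
copyIO sig = do
    let (fpSig, _, lenSig) = unsafeToForeignPtr sig
    pCopy <- liftM castPtr $ withForeignPtr fpSig $ \ptrSig ->
             c_copy (castPtr ptrSig) (fromIntegral lenSig)
    fpCopy <- newForeignPtr finalizerFree pCopy
    return $ unsafeFromForeignPtr0 fpCopy lenSig

-- in the benchmark list (recent Criterion spells this whnfIO; older versions may differ):
--   bench "C binding (IO)" $ whnfIO (copyIO signal)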

Benchmarking C functions using Foreign Function Interface

I am currently working on implementing the Discrete Wavelet Transform (DWT) in Haskell. I want to make use of Haskell’s parallel programming capabilities to implement an algorithm that can take advantage of multiple CPU cores. My previous posts on testing and benchmarking were by-products of this project, as I needed to ensure the reliability of my implementation and to measure its performance. The key question that is in my head all the time is “can I write Haskell code that outperforms C when given more CPU cores?”. To answer this question I needed a way to benchmark the performance of an algorithm written in C, and I must admit that this problem was giving me a real headache. One obvious solution was to implement the algorithm in C and measure its running time. This didn’t seem acceptable. I use Criterion for benchmarking and it does lots of fancy stuff like measuring clock resolution and calculating kernel density estimates. So unless I implemented these features in C (read: re-implemented the whole library) the results of the measurements would not be comparable.

Luckily for me there is a better solution: the Foreign Function Interface (FFI). This is an extension of the Haskell 98 standard – and part of Haskell 2010 – that allows Haskell to call functions written in C [1]. This means that I could write my function in C, wrap it in a pure Haskell function and benchmark that wrapper with Criterion. The results would be comparable with the Haskell implementation, but I was afraid that the overhead related to data copying would affect the performance measurements. As it turned out, I was wrong.
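As a minimal, self-contained illustration of the mechanism – separate from the DWT code discussed below – here is how a pure C function such as sin from math.h can be imported and called from Haskell (a sketch; c_sin is just the Haskell name I chose for it):

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign.C.Types (CDouble)

-- sin has no side effects, so it can be imported at a pure (non-IO) type
foreign import ccall unsafe "math.h sin"
  c_sin :: CDouble -> CDouble

main :: IO ()
main = print (c_sin 1.0)   -- prints roughly 0.8414709848078965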

I started with chapter 17 of Real World Haskell. It presents a real world example – I guess the title of the book is very adequate – of creating bindings for an already existing library. Sadly, after reading it I felt very confused. I had a general idea of what should be done, but I didn’t understand many of the details. I had serious doubts about the proper usage of the Ptr and ForeignPtr data types, and these are in fact very important when working with the FFI. Someone on #haskell advised me to read the official FFI specification, and this was spot-on. It is actually one of the few official specifications that are a real pleasure to read (if you have read R5RS then you know what I mean). It is concise (30 pages) and provides a comprehensive overview of all the data types and functions used for making foreign calls.

After reading the specification it was rather straightforward to write my own binding to C. Here’s the prototype of the called C function, located in the dwt.h header file:

double* c_dwt(double* ls, int ln, double* xs, int xn);

The corresponding dwt.c source file contains:

#include <stdlib.h>
#include "dwt.h"

double* c_dwt( double* ls, int ln, double* xs, int xn ) {
  double* ds = malloc( xn * sizeof( double ) );

  // fill ds array with result

  return ds;
}

The important thing is that the C function mallocs new memory, which we will later manage using Haskell’s garbage collector. The Haskell binding for such a function looks like this:

foreign import ccall unsafe "dwt.h"
  c_dwt :: Ptr CDouble -> CInt -> Ptr CDouble -> CInt -> IO (Ptr CDouble)

Here’s what it does: ccall denotes the C calling convention, unsafe improves performance of the call at the cost of safety [2], and "dwt.h" points to the header file. Finally, I define the name of the function and its type. This name is the same as the name of the original C function, but if it were different I would have to specify the name of the C function in the string that names the header file. You have probably already noticed that the type int from C is represented by CInt in Haskell and double by CDouble. You can convert between Int and CInt with fromIntegral and between Double and CDouble with realToFrac. Pointers from C become Ptr, so double* from C is represented as Ptr CDouble in the Haskell binding.

What might be surprising about this type signature is that the result is in the IO monad, that is, our C function is marked as impure. The reason is that every time we run the c_dwt function a different memory address will be allocated by malloc, so indeed the function returns different results given the same input. In my case, however, the array addressed by that pointer will always contain exactly the same values (for the same input data), so in fact my function is pure. The problem is that Haskell doesn’t know that, and we will have to fix it using the infamous unsafePerformIO. For that we have to create a wrapper function that has a pure interface:

import Control.Monad (liftM)
import Data.Vector.Storable
import Foreign hiding (unsafePerformIO)
import Foreign.C
import System.IO.Unsafe
 
dwt :: Vector Double -> Vector Double -> Vector Double
dwt ls sig = unsafePerformIO $ do
    let (fpLs , _, lenLs ) = unsafeToForeignPtr ls
        (fpSig, _, lenSig) = unsafeToForeignPtr sig
    pDwt <- liftM castPtr $ withForeignPtr fpLs $ \ptrLs ->
            withForeignPtr fpSig $ \ptrSig ->
                c_dwt (castPtr ptrLs ) (fromIntegral lenLs )
                      (castPtr ptrSig) (fromIntegral lenSig)
    fpDwt <- newForeignPtr finalizerFree pDwt
    return $ unsafeFromForeignPtr0 fpDwt lenSig

Our wrapper function takes two Vectors as input and returns a new Vector. To interface with C we need to use storable vectors, which store data that can be written to raw memory (that’s what the C function is doing). I wasn’t able to figure out what the difference between storable and unboxed vectors is. It seems that both store primitive values in a contiguous memory block and therefore both offer similar performance (assumed, not verified). The first thing to do is to get ForeignPtrs out of the input vectors. A ForeignPtr is a Ptr with a finalizer attached. A finalizer is a function called when the object is no longer in use and needs to be garbage collected. In this case we need a function that will free memory allocated with malloc. This is a common task, so the FFI implementation already provides the finalizerFree finalizer for it.

The actual call to the foreign function is made inside the nested withForeignPtr calls, which give us access to the Ptr values stored in the ForeignPtrs. However, since we have vectors of Doubles as input, we have Ptr Double, not the Ptr CDouble that the c_dwt function expects. There are two possible solutions to that problem. One would be to copy the memory, converting every value in a vector using realToFrac. I did not try that, assuming it would kill performance. Instead I used castPtr, which casts a pointer of one type to a pointer of another type. This is potentially dangerous and relies on the fact that Double and CDouble have the same internal representation. This is in fact expected, but it is by no means guaranteed by any specification! I wouldn’t be surprised if it didn’t work on some sort of exotic hardware architecture. Anyway, I wrote tests to make sure that this cast does work the way I want it to. This little trick allows us to avoid copying the input data. The output pointer has to be cast from Ptr CDouble back to Ptr Double, and since the result is in the IO monad, castPtr has to be lifted with liftM. After getting the result as a Ptr Double we wrap it in a ForeignPtr with the memory-freeing finalizer (newForeignPtr finalizerFree) and use that foreign pointer to construct the resulting vector of Doubles.
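For completeness, here is a sketch of the copying alternative mentioned above, which I did not try: convert every element with realToFrac into a fresh Vector CDouble (and back again for the result). It is portable regardless of the internal representation of CDouble, at the cost of an extra allocation and traversal per vector. The helper names are mine, not part of any library:

import qualified Data.Vector.Storable as V
import Foreign.C.Types (CDouble)

-- convert by copying: safe on any architecture, but allocates a new vector
toCVector :: V.Vector Double -> V.Vector CDouble
toCVector = V.map realToFrac

fromCVector :: V.Vector CDouble -> V.Vector Double
fromCVector = V.map realToFrac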

Summary

I had two concerns when writing this binding. The first was possible performance overhead. Thanks to using pointer casts it was possible to avoid any sort of data copying, which makes this binding really quick. Measuring execution time with Criterion shows that calling a C function that does only memory allocation (as shown in this post) takes about 250µs. After adding the rest of the C code that actually does the computation, the execution time jumps to about 55ms, so the FFI calling overhead does not skew the performance tests. Big thanks go to Mikhail Glushenkov, who convinced me with his answer on StackOverflow to use the FFI. My second concern was the necessity to use many functions with the word “unsafe” in their name, especially unsafePerformIO. I googled a bit and it seems that this is a normal thing when working with the FFI, and I guess there is no reason to worry, provided that the binding is thoroughly tested. So in the end I am very happy with the result. It is fast, Haskell manages garbage collection of memory allocated in C, and most importantly I can benchmark C code using Criterion.

  1. The specification also mentions calling conventions for other languages and platforms (Java VM, .NET and C++), but I think there are currently no implementations of these.
  2. Calls need to be marked safe only when the called C code calls back into Haskell, which I think is rare.
