Using OpenCV with the Raspberry Pi Camera

OpenCV doesn’t work natively with the Raspberry Pi Camera, as it is not a USB webcam. However, applications such as raspivid or raspistill control the Raspberry Pi Camera using MMAL functions. So one needs to modify the source code of these applications so that the buffer memory of the Raspberry Pi camera board is fed to OpenCV as image objects.

Raspberry Pi, a mini PC, with a CSI-type camera attached.

In order to achieve this, we need to follow these eight steps:
  1. Install the Raspberry Pi’s CSI Camera
  2. Installation and compilation of userland-master (including mmal and raspicam tools)
  3. Create your own project
  4. Link with OpenCV Libraries
  5. Basic use of OpenCV and the Pi camera: display an image from the camera
  6. Use many snapshots to emulate a video (slow)
  7. Capture video within an OpenCV window!
  8. Face recognition (Magic Mirror release 2)

Raspberry Pi showing the CSI-type camera socket.

It’s quite easy to install your new Pi camera. The installation procedure is described below:

  1. Unpack your Raspberry Pi camera module. Be aware that the camera can be damaged by static electricity. Before removing the camera from its grey anti-static bag, make sure you have discharged yourself by touching an earthed object (e.g. a radiator or PC chassis).
  2. Install the Raspberry Pi Camera module by inserting the cable into the Raspberry Pi. The cable slots into the connector situated between the Ethernet and HDMI ports, with the silver connectors facing the HDMI port.
  3. Boot up your Raspberry Pi.
  4. From the prompt, run “sudo raspi-config”. If the “camera” option is not listed, you will need to run a few commands to update your Raspberry Pi: run “sudo apt-get update” and “sudo apt-get upgrade”.
  5. Run “sudo raspi-config” again – you should now see the “camera” option.
  6. Navigate to the “camera” option and enable it. Select “Finish” and reboot your Raspberry Pi.

Raspberry Pi showing the CSI-type camera attached.


  • “raspistill” is a command-line application that allows you to capture images with your camera module. Below is an example of this command in use.
  • To capture an image in JPEG format, type “raspistill -o image.jpg” at the prompt, where “image” is the name of your image.


  • “raspivid” is a command-line application that allows you to capture video with your camera module. Below is an example of this command in use.
  • To capture a 10-second video with your Raspberry Pi camera module, run “raspivid -o video.h264 -t 10000” at the prompt, where “video” is the name of your video and “10000” is the duration in milliseconds.

Once your camera is installed, test it with this command (it shows the preview for 10 seconds):

raspistill -t 10000


At this stage, you should make a full backup of your Pi’s SD card, just in case of disaster.

The MMAL library and the raspivid/raspistill source code are found in the userland folder (on GitHub, here). First of all, we need to compile the whole package before doing anything else with OpenCV.

  • install Git & GCC components: sudo apt-get install git gcc build-essential cmake g++ libx11-dev libxt-dev libxext-dev libgraphicsmagick1-dev libcv-dev libhighgui-dev
  • get the source code here: git clone
  • move the “userland” directory to “/opt/vc”: sudo mv userland /opt/vc
  • change to the directory: cd /opt/vc/userland
  • type: sed -i 's/if (DEFINED CMAKE_TOOLCHAIN_FILE)/if (NOT DEFINED CMAKE_TOOLCHAIN_FILE)/g' makefiles/cmake/arm-linux.cmake
  • create a build directory and compile (it takes a while):

sudo mkdir build
cd build
sudo cmake -DCMAKE_BUILD_TYPE=Release ..
sudo make
sudo make install

The binaries should be under /opt/vc/bin.

Go to /opt/vc/bin and test one of them by typing: ./raspistill -t 3000

At this stage, you should be able to modify this software to include OpenCV calls. Congratulations! Now, all the next steps are a piece of cake.

As an example, we will create camcv, a program strongly inspired by raspistill. It will allow us later to modify the source code and play with OpenCV.

  1. Create a new folder in your home directory and copy all the raspicam apps’ source code:

    cd
    mkdir camcv
    cd camcv
    cp /opt/vc/userland/host_applications/linux/apps/raspicam/* .
    mv RaspiStill.c camcv.c

  2. Remove CMakeLists.txt: rm CMakeLists.txt

  3. Make a new CMakeLists.txt with:

    cmake_minimum_required(VERSION 2.8)
    project( camcv )
    SET(COMPILE_DEFINITIONS -Werror)
    include_directories(/opt/vc/userland/host_applications/linux/libs/bcm_host/include)
    include_directories(/opt/vc/userland/interface/vcos)
    include_directories(/opt/vc/userland)
    include_directories(/opt/vc/userland/interface/vcos/pthreads)
    include_directories(/opt/vc/userland/interface/vmcs_host/linux)
    add_executable(camcv RaspiCamControl.c RaspiCLI.c RaspiPreview.c camcv.c)
    target_link_libraries(camcv /opt/vc/lib/ /opt/vc/lib/ /opt/vc/lib/ /opt/vc/lib/ /opt/vc/lib/ )

  4. Delete the CMakeFiles directory if it exists
  5. Compile & test:
    cmake .
    make
    ./camcv -t 1000


Clean the file. camcv.c is a long file with many lines that are useless for us. All of the following functions can be deleted (or commented out). Of course, also remove all calls to these functions in the remaining code.

static void dump_status(RASPISTILL_STATE *state)
static int parse_cmdline(int argc, const char **argv, RASPISTILL_STATE *state)
static void display_valid_parameters()
static MMAL_STATUS_T add_exif_tag(RASPISTILL_STATE *state, const char *exif_tag)
static void add_exif_tags(RASPISTILL_STATE *state)
static void store_exif_tag(RASPISTILL_STATE *state, const char *exif_tag)

The existing line default_status(&state); lets you avoid parsing the command line by using default parameters. Just add a line after this one:

Recompile and retest.

At this stage you should watch a good old movie with John Wayne.

Of course, OpenCV must already be installed on your Pi. To do this, just follow the third step of my previous post, “Magic Mirror”.

  1. Modify your CMakeLists.txt to include the OpenCV library:

cmake_minimum_required(VERSION 2.8)
project( camcv )
SET(COMPILE_DEFINITIONS -Werror)
#OPENCV
find_package( OpenCV REQUIRED )

#except if you’re pierre, change the folder where you installed libfacerec
#optional, only if you want to go up to step 6: face recognition
link_directories( /home/pi/pierre/libfacerec-0.04 )

include_directories(/opt/vc/userland/host_applications/linux/libs/bcm_host/include)
include_directories(/opt/vc/userland/interface/vcos)
include_directories(/opt/vc/userland)
include_directories(/opt/vc/userland/interface/vcos/pthreads)
include_directories(/opt/vc/userland/interface/vmcs_host/linux)
add_executable(camcv RaspiCamControl.c RaspiCLI.c RaspiPreview.c camcv.c)
target_link_libraries(camcv /opt/vc/lib/ /opt/vc/lib/ /opt/vc/lib/ /opt/vc/lib/ /opt/vc/lib/ /home/pi/pierre/libfacerec-0.04/libopencv_facerec.a ${OpenCV_LIBS})

  • Recompile. It should be OK, with no change (of course!) since you didn’t modify the source code:

    make
    ./camcv

Actually, this was a pretty easy step!

In this step, we will modify our camcv code to:

  1. remove the preview display provided by the MMAL layer
  2. copy the camera buffer to a CvMat object
  3. link the CvMat to an IplImage object and display it
  4. do some cleaning to remove all the code that is useless for us

Lines 61+: add the OpenCV includes:

// *** PR : ADDED for OPENCV
#include <cv.h>
#include <highgui.h>

Line 156: modify the init values for testing (image size):

// *** PR : modif for demo purpose : smaller image
state->timeout = 1000; // 1s delay before taking the image
state->width = 320; // 2592;
state->height = 200; // 1944;

Line 230+: in the static void encoder_buffer_callback function. This is the core of the modification. This function is a callback, called to get the image from the queue; buffer contains the picture from the camera.

// *** PR : OPEN CV Stuff here !
// create an empty CvMat structure, with the size of the buffer
CvMat* buf = cvCreateMat(1, buffer->length, CV_8UC1);

// copy the buffer from the camera to the CvMat
buf->data.ptr = buffer->data;

// decode the image (interpret the JPEG)
IplImage *img = cvDecodeImage(buf, CV_LOAD_IMAGE_COLOR);

// we can save it !
cvSaveImage("foobar.bmp", img, 0);
// or display it
cvNamedWindow("camcvWin", CV_WINDOW_AUTOSIZE);
cvShowImage("camcvWin", img);
cvWaitKey(0);

Lines 711/726/823: we remove the native preview window (replaced by the OpenCV window):

// *** PR : we don’t want the preview
camera_preview_port = NULL;

// PR : we don’t want the preview
// status = connect_ports(camera_preview_port, preview_input_port, &state.preview_connection);

// mmal_connection_destroy(state.preview_connection);

  1. Download the camcv.c file here and note the following comments/changes:

  2. Compile, run, and check that your “foobar.bmp” file is created and that a nice window shows the picture taken by your Pi camera! (Press a key to stop.)

At this stage you should be very enthusiastic! You can control your Pi camera with OpenCV! Beautiful are bits, trees and the Pi.

Well, I tried to use the same code as step 5 (based on RaspiStill.c), and I just added a loop to take as many pictures as quickly as possible (that’s called a movie, isn’t it?). The result is not too bad, but we can surely do better by starting from RaspiVid.c (another story, for the next step).

This way is quite simple, and I get around 8 FPS (still better than my USB webcam and its 2-3 FPS).

Compared to step 5, only a few lines change: take a look at the main function (around line 743) and at encoder_buffer_callback (around line 235).

The file is here:

Note: the filename changed, so update your CMakeLists.txt file!

In this step, we will learn how to display a video from the camera board, using the OpenCV display (and not the native preview GPU window).
At the end of this step, you should be able to capture frames from your camera board and use them directly with OpenCV! Enjoy; creativity will be your only limit (and perhaps the CPU, a little bit).

This how-to is based on this file (download it and read the explanations below; don’t forget to change the CMakeLists.txt). I ran into many technical difficulties while writing it; thanks to Matthieu Tardivon (a brilliant student) for his precious hints and help. I appreciate it.

We start from raspivid.c (the camera app), but we need to remove all the useless lines not linked with capturing frames. Remove:


  • all lines related to the preview component,
  • all lines related to the encoder component,
  • all lines related to command-line parsing and picture info.

Change:

  • add the callback directly to the video_port (line 286)
  • create and attach the pool (to get/send messages) to the video port (line 320)
  • change the format encoding to ENCODING_I420 on line 268 (instead of OPAQUE)


The callback is called at the right frame rate (around 30 FPS) during the capture (that is the rate without any OpenCV processing).
The buffer variable contains the raw YUV I420 frame, which needs to be converted to an RGB format to be used with OpenCV.
To do that, you have to understand the I420 format: read some cryptic pages on the subject.

I wrote a few lines to convert the picture in the callback function (line 141):

  • read the buffer and copy it in parts into 3 different IplImages, starting with the Y component (full size), continuing with U (half size) and finishing with the V component (half size)
  • merge the 3 IplImages (YUV) into one (line 170)
  • convert to the right color space (RGB) (line 171)
  • and display it!


cvMerge and cvCvtColor are slow functions. If you want to increase the FPS rate, you can stick with the gray picture (the first Y channel); you’ll double your FPS that way (parameter graymode=1, line 124). Line 118 sets the timeout variable: it’s the capture period (ms).

  • 320×240 color: FPS = 27.2
  • 320×240 gray: FPS = 28.6
  • 640×480 color: FPS = 8
  • 640×480 gray: FPS = 17

At this stage, you should be able to use your camera board with OpenCV. The frame rate is still not perfect (no HD possible), but it will be enough to play with face recognition at a far better rate than our old USB webcam! That’s what we’ll see in step 7.

This step is easy:

we reuse the source code of the previous step 6 and add the OpenCV face recognition treatment from step 6 of “Magic Mirror”.

Watch this video to see the result:

Source code modifications:

Except for some new #include statements and some global variables, all modifications are in the callback function video_buffer_callback.

For face recognition, gray pictures are required. Thus, once we get the I420 frame, we don’t need to extract the color information. This is great news, since we saw in the last post that this step takes a lot of CPU!

Keep it simple: we forget the pu and pv channels, keep only the “py” IplImage (gray channel) and convert it to a Mat object.

The face detection is done by the detectMultiScale function. This call consumes most of the CPU used in the loop, so it’s important to optimize it.

  • Let’s use the LBP cascade (Local Binary Patterns) instead of the Haar cascade file (haarcascade_frontalface_alt.xml). Modify the fn_haar variable to point to lbpcascade_frontalface.xml. Response time is much faster but less accurate; sometimes (as you can see in the video) the software gives wrong predictions.
  • Let’s increase the minimum rectangle size to search: Size(80,80) instead of Size(60,60) as the last parameter of the call.
  • I read on the “blog de remi” about a way to optimize this function using an alternative home-made function, smartDetect. Unfortunately, I didn’t notice any improvement, so I removed it (perhaps I made a mistake or misused it?).


  • With a 320×240 frame, I’m between 8 and 17 FPS with almost no lag (17 FPS = no face to detect and analyse; 8 FPS = a face analysed at each loop)
  • With a 640×480 frame, I’m around 4-5 FPS with a small lag (1 s)


For me, these results are very good for such an affordable computer as the Raspberry Pi. Of course, for real-time use like an RC robot or vehicle it’s too slow (you need to detect an obstacle quickly, unless you build an RPCS, a Raspberry Pi Controlled Snail ;-).

But for most other uses, like home automation or education, it’s fine.

Anyway, it’s far better than my USB webcam: I was even unable to do face recognition at 640×480 with it!

Download the source code here. It’s really quick & dirty code; don’t be offended by its disregard for state-of-the-art C++ coding rules!


At this stage, you should be able to detect Mona Lisa, in case she rings at your door tonight!

