An exclusive interview with David Tschumperlé

Hello David.

Thanks for making yourself available. I am very much looking forward to what you have to say about such a hot topic as noise in digital images – and how to minimize it. Although not an expert in photography, you are the man behind the GREYCstoration noise removal algorithm, and as such your field of expertise is of the highest interest to us photographers.

For a start, can you tell us a few words about yourself?

Hi Joël, and thanks for offering me this interview on your blog. I warn you, I’m quite a big talker! I’m (currently) a 31-year-old guy living in Caen (in the north of France). I’m a permanent researcher interested in Computer Science and Image Processing (IP), working in the Image Team of the GREYC lab. This lab belongs to the CNRS institute, which is mainly dedicated to public research. The GREYC lab is also affiliated with the ENSICAEN engineering school.

I’m a Computer Science engineer who decided to get a PhD degree in Image Processing and Applied Mathematics. I had the opportunity to do this in 1999, at the INRIA lab in Sophia-Antipolis (in the south of France). Surprisingly, I achieved this goal in late 2002. After two years of post-doc work, I finally succeeded in getting a permanent research position at the GREYC lab in 2004.

As you will notice in the following interview, I’m absolutely not an expert in photography. This is not even one of my main hobbies, although I’ve been interested in it more and more since my daughter’s birth. So, please excuse my ignorance in this field.

 

Can you tell us a little bit about the history of GREYCstoration?

The complete history of GREYCstoration starts from the beginning of my PhD in late 1999.

Actually, the topic of my thesis was “PDEs Based Regularization of Multi-Valued Images and Applications”, supervised by Prof. Rachid Deriche. He is quite well known in the IP field for his contributions to designing recursive algorithms for fast image filtering. PDEs (Partial Differential Equations), on the other hand, are mathematical tools known to be very powerful for achieving non-linear (i.e. contour-preserving) filtering of images and signals. So, for three years, I tried to understand PDE-based filtering, as well as to design some new equations for smoothing multi-channel images (including color images) with as much flexibility as possible. We applied these PDEs to very different IP problems (including 2D and 3D image denoising, reconstruction and interpolation). Of course, having a supervisor who is an expert in image filtering techniques helps a lot, and we finally had some success in handling the problem of image smoothing with contour-preserving properties.

That’s how the theoretical foundations of GREYCstoration were laid.

Unfortunately, our algorithms were still too slow for general-purpose usage (as PDE-based methods usually require a lot of iteration steps). It sometimes took hours to filter a high-resolution image, even on a modern computer.

During my PhD, I also started to write a C++ image processing library called CImg, which I was able to release as an open-source library in 2003, thanks to INRIA’s quite flexible open-source policy. Most of the code written during my PhD was done with CImg.

When I moved to Caen, I had some luck and found major improvements to the theoretical work we had done previously with Rachid Deriche. This had nice consequences for result quality (better preservation of small curved structures in images), but above all it improved the algorithm’s speed by roughly a factor of 8. This was very interesting, since the method became usable for image denoising without waiting for hours.

I admit the current algorithm still seems slow to run, but believe me, it bears no comparison to the original one!

The command-line version of GREYCstoration is the result of all these quite long steps, combining my latest theoretical results and algorithms within my open-source library CImg. Afterwards, some kind people took the time to build a GIMP plug-in around the raw algorithm code, in order to make it usable for non computer geeks. I admit that is one of the greatest strengths of the open-source community, since I probably wouldn’t have had the time and skills to develop the GIMP plug-in by myself.

You can see that GREYCstoration in fact has a quite long and serious history, even if the availability of the final GIMP plug-in is quite recent.

 

Are you involved with other OSS projects?

As I said before, I’m the main author of the CImg Library, started in 2000.

This is an open-source C++ library intended to help developers create generic image processing algorithms. I admit I’m quite proud of it, as I see more and more people using it and even (sometimes) appreciating it.

In this context, GREYCstoration is “just” an example of the library’s use, even if this particular one required a lot of work. I would never have thought that maintaining an open-source library would require so much time and patience, but this is an exciting long-term project, and it is pleasant to see that some people may take advantage of it.

 

Noise at high ISO is one of the most important criteria for judging a digital camera these days. Can you explain briefly where noise comes from on the hardware side?

I guess you are definitely more of an expert than me on this topic. I would sum up the problem like this: noise has multiple causes. It mostly involves sensor inaccuracy, sensor quality and digital quantization.

For the particular case of CCD/CMOS sensors (used in most digital cameras to capture image pixels), the noise comes from a mix of Gaussian and Poisson noise. The Gaussian noise comes from the heat generated by the electronic devices (thermal noise), while the Poisson noise is due to the fact that the sensors try to count light photons in a limited time, and sometimes miss or even overcount some of them (Poisson noise is often called “counting noise”).

Finally, there is quantization noise, which appears when converting the (continuous) data measured by the sensor into digital numbers with a limited number of bits.

The problem with these noise sources is their random and unpredictable nature. It is therefore difficult to mathematically model, and above all to invert, the noise process, since the final law of the global noise present in the image is quite complicated and has a lot of unknown parameters.
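[Editor’s note: as an illustration of the three sources David describes, here is a toy NumPy simulation – deliberately simplified, not a calibrated sensor model – stacking Poisson counting noise, Gaussian read-out noise, and 8-bit quantization:]

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "clean" exposure: a flat patch receiving on average
# 200 photons per pixel during the exposure time.
clean = np.full((64, 64), 200.0)

# Poisson ("counting") noise: the sensor counts photons over a finite
# time, so the measured count fluctuates around the true mean.
shot = rng.poisson(clean).astype(float)

# Gaussian (thermal) noise added by the read-out electronics.
read = shot + rng.normal(0.0, 5.0, clean.shape)

# Quantization noise: rounding the continuous measurement to 8-bit codes.
quantized = np.clip(np.round(read), 0, 255)

# The final error mixes all three sources; its distribution is neither
# purely Gaussian nor purely Poisson, which is why it is hard to invert.
error = quantized - clean
```

Even in this idealized setting, the combined error follows a composite law whose parameters (photon rate, read noise, bit depth) are generally unknown for a real camera.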

If you also consider the fact that two sensors cannot be exactly cloned from a hardware point of view, then sensors located at different places won’t measure the same value even if you feed them with the same light.

To sum up, it is a mess leading to a definitely random and non-invertible process.

 

After years of CCD only sensors on digital SLRs (with the notable exception of Canon), we see a shift and most of the new cameras include a CMOS sensor. What are the advantages of a CMOS sensor compared to a CCD in terms of noise?

I’m not an expert in sensor hardware, but it seems to me that CMOS sensors are less expensive than CCD ones. From what I’ve read and heard, CMOS sensors are more sensitive to noise than CCDs, but this is perhaps a false impression.

 

Once the photo is taken, the fight against noise continues in software. We have all heard of NoiseNinja (especially since it is available on Linux), but GREYCstoration is an OSS alternative which offers “similar results (not to say better)” – a quote from the GREYCstoration website. Can you explain (in simple words…) the principles behind the “nonlinear multi-valued diffusion PDE’s (Partial Differential Equations)” used by your denoising algorithm?

From a mathematical point of view, color images are “multi-valued”, since they contain three channels (R,G,B) per pixel. This is opposed to scalar-valued images, where each pixel is represented by only a single value (lightness). GREYCstoration is multi-valued in the sense that it takes care of the correlation between all image channels in order to process the image. It also means you can process hyperspectral images with more than 3 channels with GREYCstoration. This is not so common, but you can have, for instance, hyperspectral images acquired from satellites with more than 10 channels (each channel corresponding to a certain range of light wavelengths). You can process them exactly like color images with GREYCstoration.
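[Editor’s note: one simple way to illustrate this coupling between channels is a joint gradient norm, in the spirit of the Di Zenzo-style multi-valued geometry common in this literature – a sketch of the general idea, not GREYCstoration’s exact formulation:]

```python
import numpy as np

def joint_gradient_norm(img):
    """Multi-valued edge strength: channels are not treated separately,
    but pooled into a single gradient magnitude per pixel. Works for any
    number of channels (RGB, or 10+ hyperspectral bands alike)."""
    # img has shape (height, width, channels).
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    # Sum the squared derivatives over all channels, then take the root:
    # an edge present in any channel contributes to the shared edge map.
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))
```

An edge visible only in the red channel still shows up in this shared edge map, so the smoothing driven by it respects structures in every channel at once.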

Concerning the “PDE” theory: GREYCstoration is based on nonlinear diffusion Partial Differential Equations. It may sound quite ugly, but just imagine your image as a field (a room) of pixels that are no longer colors but quantities such as temperatures (or chemical concentrations). Diffusion PDEs are typically the equations that would describe the physical evolution of these temperatures from an initial state (your noisy image) to a stable state. For instance, if your room is empty, the stable state consists of having the same average temperature at each location (point) in the room. But if the room has some walls, obstacles and heat sources in it, the diffusion laws are a bit more complicated and you can get a non-constant stable state.

That is the basic idea behind GREYCstoration: it uses physical laws to diffuse your pixel values, considering that your room (image) is not empty, but has walls and obstacles mainly defined by the internal structures it detects (contours, corners, etc.). From an algorithmic point of view, the stable state is found by very fast and crude approximations of the PDE evolution.

It is less precise but definitely faster to compute, with only a few artifacts (but note that undesired structures can appear if you approximate the process too coarsely by choosing the wrong parameters).
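[Editor’s note: the “walls in a heated room” analogy can be made concrete with a minimal scalar diffusion scheme in the Perona-Malik spirit – much simpler than GREYCstoration’s actual tensor-driven, multi-valued PDE, but it shows the same mechanism of damping diffusion across contours:]

```python
import numpy as np

def diffuse(img, iterations=20, dt=0.2, k=10.0):
    """Toy nonlinear diffusion: pixel values diffuse like temperatures,
    but the conduction g() drops near strong gradients, which act as
    the "walls" that preserve contours while flat regions are smoothed."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / k) ** 2)  # edge-stopping conduction
    for _ in range(iterations):
        # Differences toward the four neighbours (periodic boundaries).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # One explicit Euler step of the diffusion PDE: heat flows to
        # neighbours, weighted by the local conduction.
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Run on a noisy step image, this reduces the noise in the flat regions while leaving the step (the “wall”) essentially intact; the iterative, many-step nature of the scheme is also why such methods were historically slow.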

Finally, the “similar results (not to say better)” sentence appearing on the GREYCstoration website is a kind of joke: when I first looked at the web pages of competing denoising products (such as NoiseNinja or NeatImage), I was surprised by the exaggerated qualifiers used to describe them.

Sure, they want to convince you to buy their product instead of the others! This is done for purely commercial reasons, but it is really nonsense.

I don’t think one denoising algorithm is always superior for all possible pictures. Also, denoising quality is often a matter of taste, and different people can have different opinions on the results from different software. So I did the same thing for GREYCstoration, saying that it is better than any other product. Of course, it cannot be true – but everybody understood, right? 🙂

 

How is it that GREYCstoration maintains result quality on par with commercial software – whose makers can review your code while hiding their own?

You raise the interesting problem of comparing the quality of denoising results. I claim this is a very hard (insoluble?) task which cannot be done by an algorithm (otherwise, believe me, the noise problem would be definitely solved). Even two different humans can have divergent opinions on two denoising results.

So, it is very easy to create so-called product comparison tests by finely tweaking the parameters of the denoising method one wants to highlight, while choosing default parameters (and sometimes worse) for the other one, producing a completely blurred result. This is unfortunately a common practice I’ve seen in a lot of scientific papers about image denoising algorithms, and I now hardly trust such comparison tests.

Concerning the quality of the code, I’ve nothing to hide and I’m quite confident in the quality of the GREYCstoration code. That’s why I released GREYCstoration as open-source software instead of freeware. More than that, open-source means that you can use it for free, but also study it, modify it and so on. In my case, this is like sharing a scientific experiment, and I believe this is the right way to move forward. People can test it easily, give their opinions, suggest modifications, etc.

Due to lack of time, it is not always possible to make it evolve as fast as I would like, but new ideas are certainly not lacking. I don’t think that closed-source products are inevitably of better quality. Paying for something doesn’t mean it is worth it, but customers generally do not like to be proven wrong.

 

When using GREYCstoration, I was very impressed by the results I got. However, the GIMP plug-in has many settings sliders, and it is not always easy to know which values will give the best results. Is there any remedy to the situation – like NoiseNinja’s “image analysis” before processing?

You are right; that would be a major improvement to the algorithm: an image pre-processing step would be nice, to automatically set the “right” parameters for the denoising process.

This is not such an easy task, and I think a more urgent and related need is to have preset parameters adapted to common types of noise. This part is quite easy to do, since it doesn’t require any modification to the algorithm itself. It is just a “cosmetic” feature, but it would be very useful.

Anyway, it requires motivated people to integrate such an option into the plug-in. Unfortunately, I don’t have the necessary competence in GTK and GIMP programming to do it myself. If you know someone interested in it…

 

GREYCstoration is foremost a command line utility which, while powerful, is not too user-friendly. Its OSS nature means that it could be used in other OSS projects. There is a Gimp plug-in, but have you had any interest from other projects – like CinePaint, RawStudio, UFraw?

I know GREYCstoration has already been integrated into open-source products such as digiKam, Krita and PhotoWipe. Of course, I would be very happy to see GREYCstoration in more open-source software.

Note that integrating an algorithm into an existing (big) open-source project (as a plug-in, for instance) is quite hard work if you start from scratch. It requires a lot of time and technical skills focused on the specific architectures or libraries used by the software.

My approach was rather to try to develop a nice and generic C++ API for GREYCstoration, in order to ease the integration of the algorithm into any software interested in it. I think the people maintaining and developing that software can do the job much faster than I could.

Also, the good thing is that improving the algorithm library while keeping the same API can potentially benefit all of this software at the same time. As of now, the GREYCstoration algorithm is not only a command-line tool and a GIMP plug-in; it is also available as a complete C++ library with a simple-to-use API intended to ease its integration into other software.

Of course, this is a technical point that only concerns developers. Perhaps I should spread the message more widely…

 

Or is there a project to develop a stand-alone application that would allow batch processing à la NoiseNinja?

No, there is no such project. I prefer the idea of having the algorithm integrated into many image processing applications instead of having a single image processing application built around the algorithm. After all, image denoising is quite a common task that one wants to do (even if it is in fact difficult to achieve), and I have only provided an original algorithm for doing it.

There is nothing magical or revolutionary about it, and there are already plenty of filters trying to deal with the noise problem. I’d like to see the GREYCstoration algorithm integrated at the same level as these existing filters. I don’t think it warrants a dedicated application.

 

More generally, what are your plans, ideas & dreams for GREYCstoration? And in which areas could you do with some help?

I think the GREYCstoration algorithm could be improved mainly in these two aspects:

I’d really like to improve the GIMP plug-in. As I said before, having preset parameters would be nice. A more flexible preview window would also be interesting. This wouldn’t be so much work for people who know how to deal with GIMP plug-ins.

If someone’s interested, do not hesitate to contact me!

About speed: I’d love to see a GPU implementation of GREYCstoration. A guy from NVIDIA already told me he had translated one of my image filtering codes available in CImg (not GREYCstoration, but a simpler one) to work with GPU instructions. He obtained amazing performance (something like 350 frames/s for 512×512 images, while I get perhaps 2 frames/s on my 3 GHz Pentium CPU). Even if the algorithmic complexity of GREYCstoration is a bit higher than simple linear filtering, I guess a factor of 100 could be achieved.

This would lead to quasi real-time denoising. This is not a dream, I’m sure this is possible. But here, quite a lot of work and technical competences are needed.

Help would also be welcome in writing nice documentation and tutorials on how to use GREYCstoration. As a non-expert in photography, I tend to explain the use of GREYCstoration from a mathematical point of view, which is absolutely not a good thing for a general audience: they just don’t care (and they are right) that the ‘anisotropy’ parameter sets the tendency of the diffusion tensors to have an anisotropic shape, or that the ‘strength’ parameter is related to the length of the integral lines used for convolving the image intensities.

 

DxO boasts about denoising RAW files before interpolation, hence achieving better results than traditional post-interpolation denoising. What is your take on that?

I guess they are absolutely right. Processed image data should be as close as possible to the data acquired by the sensors, especially when one wants to remove noise. Doing transformations on your noisy data (like performing RGB Bayer reconstruction from a raw image, for instance) is like adding artificial, synthetic noise to your data, and you get a noise model which is even more complex than the one at the beginning (which is already not simple). It is then harder to get rid of.

 

Would you say there are still a lot of improvements that can be achieved in the field of software noise reduction, or are we seeing the limits of current methods?

Yes, I’m sure of it. There are a lot of new, highly capable denoising methods based mainly on the analysis of image patches, which have high potential in terms of result quality and algorithm speed. I’m sure these kinds of methods will outperform the existing ones in the near future. I have personally been studying these methods for a few months. Perhaps some improvements based on this work will appear in future versions of GREYCstoration (PDEs and patch-based image analysis are not incompatible).

 

Any other topic you would like to touch on? or one “last word”?

One last word: I hope people will increasingly consider open-source products as serious software. Open-source does not necessarily mean “made by a student” or “made quick and dirty”. GREYCstoration and CImg are the results of hard research and development work. It is not so user-friendly at the moment, but the core algorithm is quite capable, and I guess only a few things are missing to make it work easily for most people. I’m trying to work on that.

 

David, thanks for your time.

Thanks again for your interview.

Responses to An exclusive interview with David Tschumperlé

  1. Chris Frey says:

    Thanks for the interview. I have seen mention of NoiseNinja all over the place, but it’s nice to see an open-source library that tackles the problem as well, and with as much attention to detail and the mathematical work behind it.

    I’m a C++ programmer, and I should really volunteer to help with the plugin stuff, but unfortunately am way too busy this month. I’ll have to keep GREYCstoration in mind.🙂

  2. NewMikey says:

    Thanks for this interview! I had the pleasure of exchanging emails with David on the tiling routine of GREYCstoration. He is truly a remarkable guy, and his algorithm has the makings of something great. It will benefit all of us photographers who use Linux by choice.

  3. jcornuz says:

    Hi there,

    Chris, I think it would be really cool if someone can pick-up the Gimp plug-in development. That would be a nice return on David’s time investment in this interview…

    I hope we will see GREYCstoration included in more and more OSS photography tools.

    Take care,

    Joel

  4. Drazick says:

    Great Info…
    I hope the community will keep improving the algorithms (Quality and speed).

  5. anonymous says:

    GREYCstoration for Photoshop with GPU support: http://moe.imouto.org/forum/show/2007

