An exclusive interview with Andrew Zabolotny

There was quite a bit of discussion about the Lensfun library at the recent Libre Graphics Meeting, and I previously said that Lensfun is a project we want to keep an eye on. Today is the chance to do so, with an interview of its main developer, Andrew Zabolotny. Andrew talks about how it all started, what the current situation is and where he sees Lensfun going in the future.


Hi Andrew, and thanks for taking some time for an interview. For a (usual) start, can you tell us a little bit about yourself, where you live, what you do as a day-job, if you have a family?

I’m a 35-year-old programmer from St. Petersburg, Russia, working full-time for a software company that does security systems for Windows <gasp> 🙂 I’m married and have a 12-year-old daughter.


You are the man behind Lensfun. How did you get involved with that project?

Mostly accidentally. This is a long story really… dunno if anybody will be interested, but here it is.

About one year ago (whoops… an anniversary 🙂 ) I bought a Samsung GX10 digital SLR, so the first thing I did afterwards was look at what free software was available to handle digital photography 🙂 I found UFRaw but, as always happens with me, I found that it was missing some features I would like to see 🙂 So I started making patches for UFRaw. One of the patches improved the Russian translation of UFRaw, and I wrote to the author of the previous translation, Alexandre Prokoudine. We talked of translations, digital photo, shoes, and ships, and sealing-wax, cabbages and kings… oh I believe that’s from another opera. At some point he pointed me to this thread on the Create mailing list:

http://lists.freedesktop.org/archives/create/2007-May/000743.html

I was interested in this because it was related to one of the improvements I had in mind for UFRaw. So I followed the thread, but at some point it became obvious to me that the thing wasn’t going to move anywhere… nobody seemed to be motivated enough. So I started to force the process a little, here:

http://lists.freedesktop.org/archives/create/2007-August/000890.html

Then it started rolling like a snowball, and after a few months I had a draft implementation ready.


Are you active in other OSS projects?

Yes, I’ve been doing opensource programming since ~1995 I think, when I wrote my first useful opensource program – lxLite, a compressor for OS/2 executable files 😉 Then I started porting Unix opensource software to OS/2, which finally made me switch to Linux around ~1998 🙂 Nothing to mention specifically though; mostly small patches for this or that project, just like I’m doing now with UFRaw 🙂

I have one big hobby project (it will be opensource once it reaches at least alpha quality) which has been going on for a couple of years, but it’s moving quite slowly as I always find some other project to contribute to 🙂


(At which point I tried to extract some more info from Andrew about this project but without success…) Coming back to Lensfun, what is the goal of this library?

The primary goal is to have an easy-to-use library which provides access to an opensource database of data on photographic equipment (cameras and lenses for now). An auxiliary goal is to make some tools for supporting this database; the biggest missing thing now is some magic application which will take a set of horrible shots made with some horrible camera and some horrible lens, analyze them and provide a mathematical model of that horrible thing, such that the lensfun library can make a brilliant shiny image from any shot made with it 🙂


So does that mean Lensfun has the ability to automatically correct lens optical defects? Is this some kind of open-source DXO?

To be honest, I have never seen or used DXO, but at least I know it’s an application. Well, lensfun is a library, not an application, so it’s a bit different from DXO. The library can be used from many different programs, so in some regard it’s more than DXO. Currently you can develop digital negatives and apply lens corrections during that process (with UFRaw from CVS HEAD), and apply lens corrections to finished images (with the DigiKam plug-in). Possibly it will be used in Hugin as well (so that you won’t have to “Optimize” the lens parameters; that process may fail in suboptimal conditions, while a set of known-good parameters from the database could lead to better results).

I hope that with time lensfun will be used in more and more projects; as far as I know DXO was and always will be “the one and only”. Perhaps that’s better for their business, but it certainly isn’t better for users 🙂


I read (in the French magazine Chasseur d’Images) that the data for DXO correction is a per-body / per-lens / per-focal-length / per-aperture set that is measured in their high-tech laboratory. How are you tackling the issue of getting this information?

In an ideal world the difference between bodies (of the same model) is not big and can be ignored (I believe the DXO guys do that as well; perhaps for VIP clients they do individual adjustments, which could lead to some marginal improvements, but I don’t really know much about DXO anyway).

The difference between lenses is huge, of course – that’s why there are 30-buck lenses and there are 3000-buck lenses. lensfun keeps every lens model in the database separately, and there is a mathematical model for distortion (barrel/pincushion), for vignetting (when the image is dimmed, more so towards the corners), and for chromatic aberration (when the red and blue planes are slightly scaled relative to the green plane, so you can see coloured fringes along sharp edges, stronger closer to the edges of the image).
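To give a concrete feel for what such a model looks like, here is a rough Python sketch (an illustration only, not lensfun’s actual code) of a PTLens-style barrel/pincushion polynomial, where a, b and c stand for per-lens coefficients of the kind stored in the database:

```python
import numpy as np

def ptlens_distort(r_u, a, b, c):
    """Map an ideal (undistorted) radius r_u to the distorted radius the lens
    actually records, using the PTLens-style polynomial. Radii are normalised
    so r = 1 at half of the smaller image dimension."""
    d = 1.0 - a - b - c                      # keeps the scale fixed at the centre
    return a * r_u**4 + b * r_u**3 + c * r_u**2 + d * r_u

def source_coords(x, y, cx, cy, norm, a, b, c):
    """Correction works backwards: for each output pixel (x, y) compute where
    it came from in the distorted source image, then resample there."""
    r_u = np.hypot(x - cx, y - cy) / norm
    if r_u == 0.0:
        return x, y
    scale = ptlens_distort(r_u, a, b, c) / r_u
    return cx + (x - cx) * scale, cy + (y - cy) * scale
```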

Depending on the particular image defect, calibration is done under different conditions (for example, distortion depends only on the focal length (for zoom lenses), while vignetting depends on focal length, aperture and distance to subject), and lensfun will interpolate the mathematical model for you when you need to apply it to a real-world image. For example, if your lens has been calibrated for distortion at 12mm and 16mm and your image was taken at 15mm, the library will interpolate the coefficients of the distortion model to “guess” what they would be at 15mm.
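Conceptually that interpolation amounts to something like the following sketch (a simplification; the library’s own scheme may differ, and the coefficient values below are invented):

```python
def interpolate_coeffs(calibrations, focal):
    """Interpolate model coefficients for an arbitrary focal length.

    `calibrations` maps a calibrated focal length to its coefficient tuple,
    e.g. {12.0: (a, b, c), 16.0: (a, b, c)} for the distortion model."""
    focals = sorted(calibrations)
    if focal <= focals[0]:
        return calibrations[focals[0]]
    if focal >= focals[-1]:
        return calibrations[focals[-1]]
    for lo, hi in zip(focals, focals[1:]):       # find the bracketing pair
        if lo <= focal <= hi:
            t = (focal - lo) / (hi - lo)
            return tuple((1 - t) * cl + t * ch
                         for cl, ch in zip(calibrations[lo], calibrations[hi]))

# Lens calibrated at 12mm and 16mm, image shot at 15mm (values are made up):
coeffs_15mm = interpolate_coeffs({12.0: (0.021, -0.062, 0.015),
                                  16.0: (0.008, -0.021, 0.004)}, 15.0)
```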

Calibrating a concrete lens model is a long and complex task. To be honest, most of the information in the lensfun database was picked up from the PTLens project, which was once kind of opensource; the author later made it commercial, but allowed us to use the old PTLens database in our project.


So currently the lensfun database is a snapshot of the PTLens one, but your goal is to have anyone add data for their own lenses and share it with others – in true Open Source fashion. Can you tell us how you envision the (yet to be written) magic application which would process that data?

Well, my own vision is to have a special sheet (published in a vector format like SVG or PDF) which everyone could print at any size and shoot under different conditions (aperture, focal length) so that it covers more or less the whole frame. Then the program, knowing the structure of the sheet in advance, could measure how distorted the image got and compute the reverse transformation needed to get back the original undistorted sheet.
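Producing such a target sheet is straightforward; the throwaway sketch below (not an existing lensfun tool) writes an SVG with the layout Andrew describes next, i.e. small filled black circles on a regular grid with a border around them:

```python
def write_target_sheet(path, cols=13, rows=9, spacing=20.0, radius=3.0, margin=15.0):
    """Write a minimal SVG calibration target: filled black circles on a
    regular grid, surrounded by a black border (all dimensions in mm)."""
    width = 2 * margin + (cols - 1) * spacing
    height = 2 * margin + (rows - 1) * spacing
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}mm" '
             f'height="{height}mm" viewBox="0 0 {width} {height}">',
             f'<rect x="1" y="1" width="{width - 2}" height="{height - 2}" '
             f'fill="none" stroke="black" stroke-width="2"/>']
    for row in range(rows):
        for col in range(cols):
            cx, cy = margin + col * spacing, margin + row * spacing
            parts.append(f'<circle cx="{cx}" cy="{cy}" r="{radius}" fill="black"/>')
    parts.append('</svg>')
    with open(path, 'w') as f:
        f.write('\n'.join(parts))

# write_target_sheet("calibration_sheet.svg")  # print at any size, as noted above
```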

For now I think the sheet could contain only relatively small filled black circles, placed on a regular grid, and perhaps a black border around them to detect the edges of the sheet. This would allow calibrating (a rough sketch of the circle-centre measurement follows the list):

  • distortion – by comparing the real-life grid with the ideal grid.
  • vignetting – since most of the sheet will be white, it should either be uniformly lit (which is hard to achieve at home) or at least have a uniform gradient (which is easy to achieve even at home – just keep away from any point lights). The gradient can be subtracted, and then you can measure the vignetting.
  • chromatic aberrations – since you have a black-and-white image, any shift between colour planes will show up as the R/G/B circles being slightly displaced relative to each other. The centre of every circle can be measured with sub-pixel precision, and then you can compute the aberration coefficients.
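The circle-centre measurement mentioned above could look roughly like this (a sketch only; no such tool exists in lensfun yet). The centre is taken as an intensity-weighted centroid, and repeating the measurement per colour channel gives the red/blue-versus-green shift needed for the chromatic aberration fit:

```python
import numpy as np

def circle_centroid(channel, y0, y1, x0, x1):
    """Sub-pixel centre of a dark circle inside the window [y0:y1, x0:x1].
    `channel` is one colour plane as a 2-D array; dark pixels get the highest
    weight, so the weighted centroid lands on the circle's centre."""
    window = channel[y0:y1, x0:x1].astype(float)
    weights = window.max() - window          # invert: dark circle -> high weight
    ys, xs = np.mgrid[y0:y1, x0:x1]
    total = weights.sum()
    return (ys * weights).sum() / total, (xs * weights).sum() / total

# Measuring the same circle in the red and green planes of an RGB image `img`
# gives the local red-vs-green displacement used for the aberration model:
# cy_r, cx_r = circle_centroid(img[..., 0], 100, 140, 200, 240)
# cy_g, cx_g = circle_centroid(img[..., 1], 100, 140, 200, 240)
# shift = (cy_r - cy_g, cx_r - cx_g)
```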

Now, some people have rightly pointed out that some calibrations (for example, vignetting) depend on the distance to the subject, and you can’t vary this parameter when you have a fixed-size sheet which has to cover more or less the whole frame. Some people are interested in calibration only at infinite subject distance (e.g. for landscape shots).

They say that buildings (which are rectangular, at least most of the time 🙂 ) can be used for distortion calibration.

While I agree that buildings can be used for calibration at infinity, I can’t think of a reliable algorithm to detect buildings in an arbitrary photo, so this turns from an automatic process (easy for any user) into a tedious manual process (which requires some understanding of the underlying theory).

Besides, distortion and chromatic aberration calibration don’t vary with the distance to the subject, so these two parameters can be computed automatically from a shot of the sheet. And for calibrating vignetting at infinity, you could just shoot the clear sky (at some distance from the sun, to avoid the point light source causing radial gradients) or large, evenly lit uniform surfaces such as the walls of buildings without windows.
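To make the flat-field idea concrete, here is a rough sketch (not lensfun code) that fits a common radial falloff model, 1 + k1·r² + k2·r⁴ + k3·r⁶, to such a shot of the sky or a wall; correcting an image then just means dividing each pixel by the fitted falloff:

```python
import numpy as np

def fit_vignetting(gray):
    """Fit I(r) = I0 * (1 + k1*r^2 + k2*r^4 + k3*r^6) to a flat-field shot
    (clear sky, evenly lit wall) supplied as a 2-D grayscale array."""
    h, w = gray.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx) / np.hypot(cy, cx)   # radius, 1.0 at the corner
    # Linear least squares for [I0, I0*k1, I0*k2, I0*k3]
    A = np.stack([np.ones_like(r), r**2, r**4, r**6], axis=-1).reshape(-1, 4)
    coeffs, *_ = np.linalg.lstsq(A, gray.reshape(-1).astype(float), rcond=None)
    i0 = coeffs[0]
    return tuple(coeffs[1:] / i0)                       # (k1, k2, k3)
```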


Since Lensfun is a library, its goal is to be used by other software. How are things going on that front? I think Hugin is already using Lensfun; are there any others?

I just submitted a large patch for UFRaw which lets you use just about all of the lensfun functionality from an easy-to-use (I believe) interface. You can play with all those mathematical models in real time by moving sliders and see their effect immediately on the preview image. The final result (the developed digital negative) is interpolated with a high-quality Lanczos filter, so the resulting image doesn’t degrade due to interpolation (often it *looks* even a little bit better than the original, since a well-done Lanczos filter always sharpens the image a little).
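For reference, the Lanczos kernel mentioned here is a windowed sinc; its small negative lobes are what give resampled images that mild sharpening. A generic sketch (not UFRaw’s implementation):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, 0 elsewhere.
    numpy's sinc is the normalised one, sin(pi*x) / (pi*x)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# A 1-D resample at a fractional position weights the 2*a nearest samples by
# this kernel; the slightly negative weights around |x| ~ 1.5 are why a
# well-done Lanczos resample looks a touch sharper than the original.
```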

The DigiKam team is also working on a plug-in which will let you apply lens corrections to normal images (UFRaw works only with digital negatives).

I’m not sure if Hugin implements direct support for lensfun yet. As far as I have seen in Pablo d’Angelo’s speech he only talks about how to calibrate lens model parameters using Hugin.


In your opinion is there a particular point that is currently blocking a more widespread adoption of Lensfun?

It all depends on me: how quickly I can help library users with their problems, and how the database grows. In general, adding new lenses to the database is the most important problem; otherwise the database will stagnate, which will reduce the usefulness of lensfun.


How far are you from a 1.0 version – basic features working, a stable API – where you can tell application developers: “use Lensfun”?

Well, I’m not a great fan of round numbers, so I won’t make a 1.0 version just to mark a “mature release”. I think in a few days I’ll bump the version to 0.2.2 (currently it’s 0.2.1 in SVN) and mark that as the first version meant to go “to the masses”. There is no tarball on the site yet; this question is becoming important since, with the upcoming release of UFRaw, package maintainers will start looking for “stable lensfun tarballs” and there aren’t any yet.

As for the API itself, I don’t think it will change anymore, or at least I don’t plan any changes to it.


Talking about the future, what are the developments you would like to see in Lensfun?

As I pointed out above, the biggest thing missing from lensfun is an application which will help with calibrating lenses. Having such a tool would greatly help in improving the database; in fact, I think that if we make it so that almost every user can calibrate lenses, the database will grow very quickly.

The core logic of the library is already stable; I can’t think of any more features that would need to be in the core.


Any area(s) where help would be welcome?

Unfortunately my math is lacking, so I’m not sure I will be able to develop all the math required to “auto-magically” calibrate lenses from images.


More generally, what is your vision on the use of Linux (and open source software) for photography?

I think the target users of digital photography software don’t really care about the openness of the software they use. Most of the time they’re ready to buy such software, and they don’t mind not being able to modify it or add features, since they’re often not programmers.

Open-source digital photography programs are IMHO mostly used by beginners and non-professional photographers on one hand, and by super-professionals on the other (people who are both professional artists and professional programmers at the same time, like the folks behind Blender and perhaps CinePaint). There are also super-professionals who write their own closed-source software, like the people at the Pixar studio.

The “regular” artists seem to be happy with what they’ve got on their Win/Mac machines, so I don’t expect anything to change drastically here any time soon: I don’t see any force vectors in this area pushing towards open source.


Andrew, thanks again for your time and interest.

And I thank you for your interest in lensfun.

Responses to An exclusive interview with Andrew Zabolotny

  1. That is very interesting. I was not aware of Lensfun and was in fact curious about how Pablo d’Angelo of Hugin would continue with his proposal of a lens database (see http://www.linux.com/articles/62136).

    It seems logical that Hugin would use Lensfun, but so far I have not read anything confirming it. Does anyone have a definite answer on that?

    Thanks for the interview, Andrew and Joel.

  2. NewMikey says:

    I’ve tried Lensfun through UFRaw and I must say: top marks! The Linux options for photo editing are obviously maturing at an ever-accelerating speed. The more options, the better. I already feel UFRaw with Lensfun surpasses Silkypix in quality of results and ease of use, as well as sheer detail resolution.

    Mike

  3. prokoudine says:

    I’m not sure if Hugin implements direct support for lensfun yet. As far as I have seen in Pablo d’Angelo’s speech he only talks about how to calibrate lens model parameters using Hugin.

    Well, at least Pablo was quite impressed by the quality of the code, when he was asked about it 🙂

  4. jcornuz says:

    Hi Alex,

    You should have mentioned your blog entry regarding Lensfun-enabled UFRaw:

    Take care,

    Joel

  5. Brian says:

    Is there a commercial-friendly license available? I’m looking for vignette correction.
