Unless you have been living in a cave for about 10 years, you know that the next big thing in GIMP is called GEGL – a library that has been in development for years and aims to be the new engine that will launch GIMP into the stratosphere of image editing (more or less…)
What GEGL brings
What does GEGL (web) bring to the table that makes it so desirable? Basically, GEGL takes in any image (a bitmap) and applies operations to it. Think a curves operation, a bit of dodge or burn here and there, and an unsharp mask (to keep things simple). Now the goodness of GEGL is that:
- all these operations are internally processed by GEGL in 32-bit float (the highest precision level available to mere mortals). So that means high bit depth at last. Let’s just remember that our human eye cannot see nuances finer than 8 bits/channel (a standard defined back when memory and processing power were scarce). However, color modifications (or, even more so, black-and-white conversions) done at this limited bit depth end up with very noticeable artifacts. So for serious photo retouching, you need high bit depth (and GEGL).
- since GEGL gobbles in a bitmap and applies operations to it, you can save the bitmap (doh) together with all the operations (and parameters) you applied to it. You keep access to all of your previous operations and can modify them anytime (years later, if saved in an adequate file format). To speak in Photoshop’s terms, everything is an effect layer. And more: you can apply a series of operations to a bitmap and then copy and paste these operations onto another one. And if you then modify these operations, you can decide whether the modification applies to both objects or just one. Pretty nifty? YOU BET.
- Lastly, GEGL allows its operations to be executed by the CPU (slowly) or by the GPU (fast). Some operations are already accelerated via OpenCL, and the 2011 Google Summer of Code student was temporarily hired by AMD to keep working on his project.
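To see why the extra precision matters, here is a minimal pure-Python sketch (my own illustration, not GEGL code) of what repeated edits do at 8 bits versus float: every 8-bit step has to round back to an integer, so errors pile up, while the float value comes through intact.

```python
# Toy demonstration of quantization error, not actual GEGL code:
# apply a gamma curve and then its inverse, twenty times over.

def gamma(v, g):
    """Gamma curve on a normalized 0..1 value."""
    return v ** g

v8, vf = 30, 30.0          # the same starting pixel value, twice
for _ in range(20):
    # 8-bit path: quantize back to an integer after every operation
    v8 = round(255 * gamma(v8 / 255, 2.2))
    v8 = round(255 * gamma(v8 / 255, 1 / 2.2))
    # float path: no quantization between operations
    vf = 255 * gamma(vf / 255, 2.2)
    vf = 255 * gamma(vf / 255, 1 / 2.2)

print(v8, vf)   # the 8-bit value has drifted away from 30; the float one has not
```

The two operations cancel out mathematically, yet the 8-bit pixel still ends up at the wrong value – exactly the kind of artifact (banding, posterization) you see after heavy curves work on 8-bit images.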
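The “everything is an effect layer” idea can also be sketched in a few lines. This toy operation stack is my own conceptual illustration, not GEGL’s actual API: the original bitmap is never overwritten, each edit is just a named operation with parameters, and any of them can be re-tuned later.

```python
# Conceptual sketch of a non-destructive operation stack (not GEGL's API).

class OpStack:
    def __init__(self, pixels):
        self.pixels = pixels      # the original bitmap, never overwritten
        self.ops = []             # list of [name, function, params]

    def add(self, name, fn, **params):
        self.ops.append([name, fn, params])

    def render(self):
        """Compute the final image on demand by replaying every operation."""
        out = list(self.pixels)
        for name, fn, params in self.ops:
            out = [fn(p, **params) for p in out]
        return out

stack = OpStack([10, 20, 30])
stack.add("brightness", lambda p, amount: p + amount, amount=5)
stack.add("multiply",   lambda p, factor: p * factor, factor=2)
print(stack.render())             # [30, 50, 70]

# Re-tune an earlier operation "years later": just change its parameters.
stack.ops[0][2]["amount"] = 10
print(stack.render())             # [40, 60, 80]
```

Since the operation list is plain data, copying it onto another image – or saving it alongside the bitmap – falls out naturally, which is the point of the adequate-file-format remark above.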
The long story of GIMP GEGL integration
The only problem is that we have been waiting some 10 years for GEGL to materialize. The library itself has actually been done for several years now, thanks mainly to Øyvind “Pippin” Kolås. And then it took some more years for GIMP to start using GEGL – and to start delivering all the GEGLified goodness to its long-waiting users. The recently released 2.8 version of GIMP was a long time coming, and although it started using GEGL for some of its operations, it has mainly been about updating the user interface – to offer the single-window interface, which is lovely, by the way. But still, for real photography work, we need higher bit depth.
This spring, however (just before the release of 2.8), Øyvind Kolås and Mike “Mitch” Natterer gathered for a week to test the feasibility of the approach they had in mind for the port of the GIMP core to GEGL. As it happened, the approach worked so well that the one-week test turned into a 3-week hackathon where 90% of the GIMP core was ported to GEGL. It is worth reading Mitch’s blog on the subject – picked up by Slashdot, where the whole discussion was about the hackers “inadvertently” porting GIMP’s core to GEGL. No, the story doesn’t tell how much beer and pizza were consumed during these 3 weeks.
But what came out of it was a branch called “goats invasion”, which has since been rebased into master – now where current GIMP development is happening – and which will be released as GIMP 2.10. The incubation time for this release should be much shorter than for 2.8.
What do we get (or not)?
So what do we get from this goats invasion? Well, GEGL is now used inside the main GIMP processing pipeline, and we can now load and work on images with higher bit depth (YEAH!). Most of the GIMP interface is the same, except for one new submenu: precision. Saving files as XCF allows for high bit depth, and 16-bit PNG is supported, but not TIFF (yet).
What do we NOT get (yet)? Well, most plugins aren’t ported to GEGL yet, so if you need to oilify an image at 16-bit depth, you are out of luck. As with any software under active development, expect crashes – sometimes with an “Eeeek” and a report of how many GEGL buffers were leaked to memory.
Note that accelerating GEGL via OpenCL/GPU is a GEGL-only project that can get going independently of GIMP development.
How to get this goats goodness? You basically need the git version of GIMP (as well as gegl-git and babl-git). Be warned though that GIMP is big and takes a long time to compile. But you will be rewarded by a precision submenu and quite a few crashes 🙂 Of course, if you find a repository with gimp-git for your distro, you’re all set.
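For the record, building from git usually follows the classic autotools dance, in the order babl → GEGL → GIMP. The prefix and repository URLs below are assumptions from a typical setup – check each project’s README/INSTALL for the authoritative steps.

```shell
# Build order matters: GEGL needs babl, and GIMP needs both.
# $PREFIX and the git URLs are assumptions; adjust to your setup.
PREFIX=$HOME/gimp-git
export PKG_CONFIG_PATH=$PREFIX/lib/pkgconfig

for repo in babl gegl gimp; do
    git clone git://git.gnome.org/$repo
    cd $repo
    ./autogen.sh --prefix=$PREFIX
    make && make install
    cd ..
done

# Run the freshly built GIMP without touching your distro's version
$PREFIX/bin/gimp-2.9
```

Installing into a private prefix like this keeps the unstable build neatly separated from your distro’s stable GIMP.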
What is next?
So with the goats invading GIMP, we have the first goodie of GEGL within reach: high bit depth “for free” (as in part of GEGL’s standard mode of operation). What about the other one – unlimited history and “everything as effect layers”? Well, that is for later versions, since it requires quite a bit of UI design (for which Peter Sikking has a plan – or at least a blog post). So 2.8 has been released recently, 2.10 will be about GEGL and high bit depth, GIMP 3.0 will be about porting GIMP to GTK3, and then will come the UI revamp for GEGL operations. Although release cycles should be shorter, there is still quite a long way to go to profit fully from the GEGL goodness.
But what is maybe the most encouraging of all is that GIMP is winning back an enthusiastic community. No longer is it fashionable to grumble about the GIMP (like I have done so much in this blog), no longer are whining users met by disgruntled developers. If you look for GIMP on Google+, (almost) every new post is met by positive comments – it looks like enthusiasm is back in the GIMP camp: good to see more and more GIMP fanboys (and girls) raise their voices and cheer the valiant developers.
And of course…
Here is my first image processed entirely in 16 bits/channel: developed twice with Rawstudio (once for the sky and once for the ground) into 16-bpc PNGs, then opened and mixed with GIMP. Let’s say that this pic has more technical than artistic value (notice the polarizer vignetting at the bottom 😛 )
32 bit floats being the max for “mere mortals” isn’t true. Use of 64-bit floats (“double precision”) is routine, and I believe it’s not hard to get 128-bit floats. In fact, if you’re doing floating point math of any kind, the only reason these days to use single precision (32 bits) is memory constraints. I believe internal computations are now usually done in 64 bits either way on modern processors, so there’s no speed gain.
@Reid … popularly available in the *photo processing / managing world*, for free? Software links, please.
Cool – you are back! At least were, but I hope to read more here soon!