Monday, December 22, 2014

The alien grasshopper - Complex layer masking using channel data

In this tutorial I will show how to create a layer mask based on the image data itself, and how to select given portions of an image seamlessly and with little effort.

To illustrate this technique, I will take as an example the grasshopper image that I posted some time ago, and show how I isolated this nice insect from the flower and the rest of the background. To show the technique more clearly, in this post I will turn the grasshopper into an alien insect by applying a selective hue shift, as you can see below:

Isolating the insect manually, either by drawing with a brush or by using a freehand selection tool, would be very time-consuming and tedious, and probably not as precise as the technique shown here.
Instead, I will show you how you can use the image data itself to create a convenient grasshopper mask, all in a few mouse clicks. The idea is to find image channels in which the grasshopper is well separated from the flower, and then add a curves adjustment to the channel data so that the grasshopper appears completely white and the flower completely black in the final mask (with some smooth variation in the transition regions). The first step is to identify which channels are the most promising. You can choose between the individual Red, Green and Blue channels of the RGB colorspace and the L, a and b channels of the Lab colorspace.

The grasshopper differs a lot from the flower in terms of color, but not too much in terms of luminosity, therefore we can already guess that the "L" channel will not be very useful. The other channels are all potentially promising, and we need to look at them directly to see which work best.

To visualize the individual channels, do the following: create a "Clone" layer (which I called "channel selector"), select the layer you want to inspect, and choose the channel from the corresponding drop-down list. The individual channels of the grasshopper image are shown below (move the mouse over the caption items to see the corresponding image). The contrast of the "a" and "b" channels has been increased to better see the tonal variations.

Click type to see: Red - Green - Blue - Lab "a" - Lab "b"

As expected, the "a" and "b" channels look like the most promising ones... not surprising, since the grasshopper is separated from the flower mostly in terms of color. The "a" and "b" channels might be difficult to interpret if you are not familiar with the Lab colorspace, so I'll spend a few words explaining why they look like that.
The "a" and "b" channels encode the color information of the image, independently of the luminosity, in a quite peculiar way: a neutral grey corresponds to "a=50%, b=50%", while values above and below 50% encode complementary colors. For example, "a<50%" corresponds to green and "a>50%" to magenta, while "b<50%" corresponds to blue and "b>50%" to yellow. The larger the distance from 50%, the higher the saturation of the corresponding color component.
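To make this encoding concrete, here is a minimal sRGB-to-CIELAB conversion in Python/NumPy, using the standard D65 formulas. Note that in unnormalized CIELAB the neutral point is a = b = 0, which corresponds to PhotoFlow's 50% midpoint; the two sample colors are just hypothetical stand-ins for the grasshopper and the flower.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert one sRGB triplet (values in 0..1) to CIELAB, D65 white point."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],   # linear sRGB -> XYZ
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (M @ lin) / np.array([0.95047, 1.0, 1.08883])   # normalize to white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

# hypothetical stand-in colors for the grasshopper and the flower
L, a, b = srgb_to_lab([0.35, 0.55, 0.20])     # greenish-yellow
print(a < 0, b > 0)     # green pulls "a" below neutral, yellow pulls "b" above
L2, a2, b2 = srgb_to_lab([0.90, 0.55, 0.65])  # pinkish
print(a2 > 0)           # magenta/pink pulls "a" above neutral
```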

With that in mind, we can already expect that the flower and the grasshopper will have quite different values in the "a" channel (the grasshopper is greenish and therefore "a<50%", while the flower is pinkish and so we can expect that "a>50%"). The "b" channel is less obvious (the grasshopper is certainly yellowish with "b>50%", but the b values in the flower are not so evident...) but a direct look at it shows that there is a quite strong separation between grasshopper and flower in this channel as well.

For this image I ended up using a combination of the "a" and "b" channels to isolate the grasshopper, and the Red channel to mask the out-of-focus background.

Now that the masking strategy is ready, I made the "channel selector" layer invisible (but kept it there in case I wanted to inspect the individual channels again later on). Invisible layers do not consume any memory and are simply skipped by the processing pipeline, so you can have as many as you like without any negative impact on the processing performance...
I added a "Hue/Saturation" adjustment layer on top of the invisible channel selector. I will change the Hue of the layer until the green color of the insect turns into electric blue, but first I will work on the layer mask to isolate the grasshopper from the rest. For that, I double-clicked on the icon corresponding to the "opacity mask" (the one at the extreme right of the layer row) to open the layer stack of the mask itself (initially empty).
First, I added a new layer group (I've called it "a channel mask") and then added a "Clone" layer inside this group. I selected the "a" channel of the background layer as source, and clicked on the "show active layer" button below the preview window to activate the visualization of the mask itself. The preview window now looks like this:

Next, I added a "Curves" layer on top of the "a" clone and applied a "threshold-like" curve that turns the mask pure white over the grasshopper and pure black over the flower (with some smooth transitions).
Here is how I did it:
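In pixel terms, such a "threshold-like" curve behaves like a smooth step function. Here is a NumPy sketch of the idea — the breakpoints 0.40 and 0.55 are made-up values; in practice you place the curve nodes by eye:

```python
import numpy as np

def threshold_curve(x, lo=0.40, hi=0.55):
    """Smooth step: 0 below `lo`, 1 above `hi`, eased ramp in between."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)        # smoothstep easing

a_channel = np.array([0.30, 0.45, 0.50, 0.70])   # sample channel values
mask = threshold_curve(a_channel)                # ramps smoothly from 0 to 1
# since the grasshopper sits *below* 50% in the "a" channel, the curve
# would actually be applied in the descending direction:
grasshopper_mask = 1.0 - mask
print(mask, grasshopper_mask)
```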

I repeated then the above steps for the "b" channel (created a group, inserted a clone layer into the group and then a "Curves" adjustment on top of the cloned channel), obtaining this result:

The two masks look similar, but not identical... actually, the best would be a combination of the two. Combining the two masks is quite easy: I simply changed the blending mode of the top group (with the "b" channel mask) to "Lighten" mode, to obtain this:

At this point, the masking of the grasshopper looks quite OK (we could remove the few remaining red areas by hand), but I still need to isolate the out-of-focus background. For that, the "Red" channel looked the most appropriate. The technique is always the same: I added a layer group, inserted a clone layer inside the group and a curves adjustment on top of it. I ended up with this red channel edit:

The red channel mask now needs to be combined with the "a+b" one: for that, I applied a slight blur (5 pixels of radius) and then changed the blending mode of the red channel group to "Darken", to obtain an almost final result:
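The blend modes used here have simple pixel-level definitions: "Lighten" keeps the per-pixel maximum of the two masks, and "Darken" keeps the minimum. A NumPy sketch of the whole mask algebra, with a crude box blur standing in for PhotoFlow's gaussian blur layer:

```python
import numpy as np

def box_blur(img, radius):
    """Crude separable box blur, standing in for PhotoFlow's gaussian blur."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, img)

rng = np.random.default_rng(0)
mask_a, mask_b, mask_red = rng.random((3, 64, 64))   # stand-in channel masks

ab_mask = np.maximum(mask_a, mask_b)        # "Lighten" = per-pixel maximum
red_soft = box_blur(mask_red, radius=5)     # slight blur before combining
final_mask = np.minimum(ab_mask, red_soft)  # "Darken" = per-pixel minimum
print(final_mask.shape)
```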

There are still a couple of unmasked spots in the background, but those are easily corrected by drawing directly on the mask with a large black pencil. Therefore, my last masking step is to add a "Draw" layer on top of the "red channel mask" group, set the background color to white and the pen color to black, and change the blending mode of the "Draw" layer to "Darken". Now I could draw with a large pencil (I've set the size to 100 pixels) to completely remove the unmasked regions. The final result looks like this:

As you can see, I've ended up with a quite complex mask edit. However, I can guarantee that the procedure takes much longer to describe than to carry out. With a little practice, it is easy to identify the channels that are likely to be good for masking, and the technique is rather simple: create a group, clone a channel inside the group, and add a threshold-like curve adjustment to create the mask.

The nice thing is that all steps in the mask creation are non-destructive, and you can go back and tweak any setting whenever you like, even after saving and reopening the file later on.
Moreover, despite the relatively large number of layers and blends, the mask is computed quite fast and does not significantly slow down the preview of the edited result.

Finally, it's time to turn the color of the grasshopper into electric blue. First I switched back to the main "Layers" panel to edit the "Hue/Saturation" adjustment. I double-clicked on the corresponding layer name to open the configuration dialog, and shifted the hue by 150 degrees in the positive direction... and there it is, a weird alien has taken the place of the nice insect!
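For reference, a hue rotation of this kind can be sketched with Python's standard colorsys module — a rough HSV approximation of what a Hue adjustment does, not PhotoFlow's actual implementation:

```python
import colorsys

def shift_hue(rgb, degrees):
    """Rotate the hue of an (r, g, b) triplet (values in 0..1)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

green = (0.2, 0.8, 0.3)               # hypothetical grasshopper green
alien = shift_hue(green, 150)         # +150 degrees lands in the blue region
print(alien)
```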

Of course, the same technique can be applied to more "traditional" edits... I've used the same mask to increase a bit the contrast and give some more "pop" to the grasshopper. That's the final result (mouse over to see original):

Thursday, December 18, 2014

Resurrecting an old idea...

Several years ago I developed some code that adds natural-looking grain to a black and white image. The code has never been released; it just stayed on my hard drive, and I've been using it from time to time to add grain to my images. Now that PhotoFlow is getting more solid and stable, it is maybe time to resurrect this old project and try to integrate it with the rest of PhotoFlow's tools.

Here are a couple of examples of how the added grain looks (they correspond to two possible choices of the grain size), compared to the original image.

The way the grain is generated is quite different from conventional approaches: instead of overlaying a "grain field" on the original image, the final result is generated by adding the individual grains one by one. The density at which the grains are locally distributed is such that, on average, the final tonal value matches the original one.
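The description above can be sketched as follows — this is my own toy reading of the idea, not the actual (unreleased) code: pick candidate grain positions at random, and accept each one with a probability chosen so that the resulting grain coverage reproduces the local tone on average.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_grain(tone, candidates_per_px=4.0):
    """Toy grain renderer: stamp individual grains one by one, with a local
    density chosen so that the chance of a pixel being covered by at least
    one grain equals the original tone."""
    h, w = tone.shape
    n = int(h * w * candidates_per_px)
    ys = rng.integers(0, h, n)                          # candidate positions
    xs = rng.integers(0, w, n)
    lam = -np.log1p(-np.clip(tone[ys, xs], 0.0, 0.98))  # grain density per px
    keep = rng.random(n) < lam / candidates_per_px
    out = np.zeros_like(tone)
    out[ys[keep], xs[keep]] = 1.0    # 1-pixel grains; enlarge for coarser grain
    return out

flat = np.full((256, 256), 0.5)          # a mid-gray test patch
grainy = add_grain(flat)
print(abs(grainy.mean() - 0.5) < 0.02)   # the average tone is preserved
```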

The difference with respect to a classic grain overlay might not be huge, but is visible. As an example, you can find below a comparison with the result of Darktable's grain filter at 6400 ISO and 50% strength (mouseover to see the result of my method):

All this might be just over-complicated, but it was an interesting intellectual exercise which brings (at least to my taste) some visual benefits. It will probably land in PhotoFlow sooner or later, depending also on the feedback I get from potentially interested users...

UPDATE: G'MIC also provides a nice grain simulation filter, and it was interesting to compare it with my own recipe.
Surprisingly, my "large grain" setting shown above matches very closely G'MIC's "TMAX 3200" preset at 80% opacity, 100% scale and "grain merge" blending. The comparison is shown below (left: G'MIC, right: my code):

Below you can see the test image rendered by G'MIC using the same settings (mouse over to see my own result). As one can see, while the grain rendering in the mid tones is very similar, the two methods give quite different results in dark and light areas, as well as in regions of high contrast.

Thursday, December 11, 2014

HiRaLoAm effect with PhotoFlow

The so-called "High Radius Low Amount" sharpening method (HiRaLoAm, originally introduced by Dan Margulis) consists in applying the unsharp mask filter with a large radius (say, more than 10px) and a low amount (typically below 40%). The method is more a local contrast enhancement technique than a sharpening one: the halos created by the large radius add "volume" to the image and give an impression of increased local contrast.

While this is a quick and "dirty" local contrast enhancement technique (which might produce visible halos in the final image), it can be quite instructive to see how it can be achieved in PhotoFlow using just the gaussian blur filter and the appropriate layer blending modes, and it can be an easy way to give your image some more "pop".

Through this short tutorial you will also see how to create a high-pass filter, as it is one of the necessary steps of the HiRaLoAm technique shown here.

First of all, open an existing image and add a group layer at the top of the layer stack (I've called the group layer "hiraloam").

Then, add a gaussian blur filter inside the "hiraloam" group and set the blending mode of the layer to "grain extract". This produces a high-pass version of the original image. As a starting point, set the radius to something between 10px and 20px. You will be able to tweak it afterwards if needed.

We are now one step away from enhancing the local contrast... all you need to do is set the blending mode of the group layer to "Overlay". Et voilà, your image immediately gets some "pop"! Or maybe too much "pop"... you will most likely need to reduce the effect by lowering the opacity of the group layer until the result no longer looks artificial.
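Putting the recipe into formulas: with the usual definitions of "grain extract" (base − blend + 0.5) and "Overlay", the layer stack above boils down to a few lines of NumPy. The box blur below is a crude stand-in for the gaussian blur layer, and the exact blend-mode formulas are my assumption about how PhotoFlow defines them:

```python
import numpy as np

def blur(img, radius):
    """Box blur standing in for the gaussian blur layer."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, img)

def hiraloam(img, radius=15, opacity=0.5):
    # "grain extract" inside the group: image - blurred + 0.5 -> high-pass
    highpass = np.clip(img - blur(img, radius) + 0.5, 0.0, 1.0)
    # "Overlay" blend of the group result onto the original
    overlaid = np.where(img < 0.5,
                        2.0 * img * highpass,
                        1.0 - 2.0 * (1.0 - img) * (1.0 - highpass))
    return (1.0 - opacity) * img + opacity * overlaid   # group opacity

rng = np.random.default_rng(1)
smooth = blur(rng.random((128, 128)), 3)    # a smooth, low-contrast test image
popped = hiraloam(smooth)
print(popped.std() > smooth.std())          # local contrast has increased
```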

Here is my final result for the grasshopper picture, using a blur radius of 20px and an opacity of 50%. Move the mouse over the image to see the original.

Simple, isn't it? You can now save the "hiraloam" group as a PhotoFlow preset, and load it back to quickly apply this technique if your image looks a bit "too flat".

Saturday, December 6, 2014

"Color spot" RAW white balance mode in PhotoFlow

Sometimes, getting a correct white balance for a shot taken in non-standard conditions (like for example below the trees in a deep forest) is not straightforward, unless you had the precaution of shooting a neutral grey card in the same lighting conditions... I don't know about you, but I never have this precaution! So I tend to end up with several shots that are all different, without an obvious neutral object to use as a reference, and with a camera white balance that changes from one shot to another.

It is with these kinds of situations in mind that I decided to code a new white balance mode that is not found in other open source RAW editors: the "color spot" mode. This mode adjusts the white balance based on known colors that are not neutral. But before discussing it in detail, let's briefly review the white balance options currently available in PhotoFlow. First things first.

PhotoFlow currently offers only three RAW white balance modes:

  1. "CAMERA WB": applies the white balance coefficients stored by the camera in the RAW file at the time of shooting.
  2. "SPOT WB": this tool requires you to click on the image with the mouse in order to select a certain region that is supposed to be gray. The image data is then averaged over a small (15x15 pixels) area around the clicked point and the WB coefficients are adjusted to neutralize the corresponding color.
  3. "COLOR SPOT WB": this tool works in a similar way as the normal "SPOT WB", except that it lets you specify a non-neutral target color for the selected area. The target color is given in terms of Lab "a" and "b" values, so that the result is independent of the camera and working profiles being used.
The available options are clearly still quite limited, and all the standard white balance presets (daylight, shadow, incandescent or fluorescent light, etc...) are still missing (but planned for the near future).
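For the curious, the arithmetic behind "SPOT WB" boils down to something like the following sketch — my reading of the description above, not PhotoFlow's actual code: average the sampled patch and scale the channels so that it becomes neutral.

```python
import numpy as np

def spot_wb_coeffs(img, x, y, half=7):
    """Average a 15x15 patch around (x, y) and return per-channel multipliers
    that make the patch neutral (green kept as the reference channel)."""
    patch = img[y - half:y + half + 1, x - half:x + half + 1]
    means = patch.reshape(-1, 3).mean(axis=0)
    return means[1] / means               # -> (r_mul, 1.0, b_mul)

rng = np.random.default_rng(3)
raw = rng.random((64, 64, 3)) * np.array([1.8, 1.0, 0.6])   # warm color cast
mult = spot_wb_coeffs(raw, 32, 32)
patch = (raw * mult)[25:40, 25:40].reshape(-1, 3).mean(axis=0)
print(np.allclose(patch, patch[0]))       # the sampled patch is now neutral
```

The "COLOR SPOT" mode generalizes this: instead of aiming for equal channel means, the multipliers are tuned until the patch reaches the requested Lab "a" and "b" values, which in general requires going through the color management conversion.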

The first two methods are found in any RAW converter, therefore I will skip their detailed description and jump directly to the third one, which is something quite new. To describe how it works, I'll use a portrait shot that Patrick David made freely available on his blog, and I will try to set the white balance based on the tonality of the model's skin.

To activate the "COLOR SPOT" tool, you have to open the RAW developer dialog, select the "White balance" tab and choose the "Color spot" mode in the "WB mode" drop-down list. For more details on how to develop RAW images in PhotoFlow, you can have a look at this previous blog post.

It is now time to decide what color you want to match in your picture, and put the proper "a" and "b" values in the corresponding boxes; for this picture, I was able to come very close to the camera white balance using a=17 and b=16. These values make sense, as they correspond to a color tonality that has as much magenta as yellow... you can slightly increase the "b" value to get a more "tanned" look, depending on what you want to achieve. Decrease both values if you want to obtain a "paler" skin tone.

In order to sample a uniform skin region, I chose a point in the middle of the forehead, just above the eyebrows. Then I processed the image with three different settings for "a" and "b": one that matches the camera WB, a "sun-burned" setting with "a" quite a bit larger than "b", and an "extra-tanned" setting with "b" quite a bit larger than "a". The last two settings are obviously far too extreme, and are there only to illustrate the range of results that can be obtained.

Source image: Mairi by Patrick David (cc by-sa)
Click type to see: Camera WB - Color spot (a=17,b=16) - Color spot (a=22,b=16) - Color spot (a=16,b=22)

That's it! Once you know the approximate "a" and "b" values needed to get the desired color tint, the procedure becomes quite fast and intuitive. And it might save you a lot of time otherwise spent trying to get the right white balance "by eye".

Resource usage during image processing

I have already written in several places that PhotoFlow only uses a small amount of resources during processing. Today I decided to give you a nice example: the screenshot below shows a 100-megapixel image (10k x 10k pixels), with a curves adjustment and a gaussian blur filter applied to it, being processed in 32-bit floating point precision and saved to disk. As you can see, the processing saturates the two available cores on my machine, while the memory usage stays as low as about 3% of the available 4GB of RAM.

This is actually a benefit of using VIPS as the underlying processing engine. VIPS splits the image into small chunks that are processed in as many concurrent threads as there are cores on your machine. At any moment, only the active chunks are loaded into memory, thus avoiding the need for very large memory buffers. Thanks to that, PhotoFlow is able to load and process images of arbitrary size, even much larger than the available physical memory. The image data is stored in temporary disk buffers, therefore you need a sufficient amount of free disk space to process very large images. But apart from that, there is theoretically no limit.
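The chunked-evaluation idea can be illustrated with a toy example — this has nothing to do with VIPS' actual implementation, it only shows why peak memory stays proportional to the strip size rather than the image size:

```python
import numpy as np

def process_in_strips(height, width, strip_rows, op):
    """Evaluate `op` over the image strip by strip: only one strip (plus its
    result) is ever held in memory at a time."""
    strip_means, peak_elems = [], 0
    for y0 in range(0, height, strip_rows):
        rows = min(strip_rows, height - y0)
        # stand-in source: a real program would read these rows from the file
        strip = np.linspace(0.0, 1.0, rows * width).reshape(rows, width)
        result = op(strip)
        peak_elems = max(peak_elems, strip.size + result.size)
        strip_means.append(result.mean())
    return float(np.mean(strip_means)), peak_elems

curve = lambda x: np.clip(1.2 * x - 0.1, 0.0, 1.0)   # a simple "curves" op
mean, peak = process_in_strips(10_000, 10_000, 64, curve)
print(f"peak: {peak * 8 / 2**20:.1f} MiB "
      f"vs {10_000 * 10_000 * 8 / 2**20:.1f} MiB for the full image")
```

Neighborhood operations such as a gaussian blur additionally require some overlap (padding) between adjacent chunks, but the principle stays the same.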

Monday, December 1, 2014

News: G'MIC Dream Smoothing filter added to PhotoFlow

It seems that the Dream Smoothing filter is one of the favorite and most used filters of the G'MIC library. Given its popularity, I thought it would be a pity not to have it available in PhotoFlow... so here it is! The implementation is still preliminary, but it works!

The dream smoothing tool was also a good opportunity to test a new kind of filter implementation in PhotoFlow: non-realtime filters. As you can see from the screenshot above, the configuration dialog for the dream smoothing has an additional "Update" button above the various sliders. This is because the filter is not really compatible with real-time, tiled processing, as it is quite slow and would also require a large tile padding. Instead, you have to start the processing manually by hitting the "Update" button. The same needs to be done whenever you change some of the parameters. For the same reason, if you save a PhotoFlow image containing one or more dream smoothing layers, those layers will be hidden by default whenever you open the image again. The smoothed image will only be computed when you toggle the layer visibility for the first time.

When it runs, the filter reads and writes 32-bit floating point TIFF images, so it fits into the floating point processing pipeline of PhotoFlow without any loss of precision.
If you put additional adjustment layers on top of the dream smoothing, they will be refreshed immediately whenever the computation of the new smoothed image finishes.

By the way, this filter is really amazing... good job, G'MIC developers!

Friday, November 28, 2014

Black and white conversion in Photoflow

If you search on the internet, you will find tutorials describing dozens of different methods for converting a color digital image to grayscale (or "black and white", as many people say). There are lots of web pages that describe the grayscale conversion techniques and the theory behind them, so I decided not to go through all the details here... others have done that much better than I could.

However, if you are new to the subject and need a good introduction, one of the best places to visit is Patrick David's blog post on the topic. There you will find a very clear explanation of the technical background as well as a detailed description of the methods I'll discuss in this post, so I really encourage you to have a look there before continuing...

Generally speaking, we can say that the goal of grayscale conversion is to create "volume" out of "color", or in other words to translate colors into rich tonal variations. There are of course lots of artistic exceptions to this rule, but I think the basic idea is there...
However, there are lots of (infinitely many?) ways to translate a given "color" into a corresponding shade of gray, and which one is "the best" depends in most cases on personal taste and on the image itself. You'll have to experiment a lot to find your personal "signature". The software should help you by providing several different options that can be quickly and easily compared on your screen.

One of the main goals I had in mind when I started coding PhotoFlow was to provide a good set of grayscale conversion tools, with all the flexibility that is usually found in high-end software. This means not only different conversions methods, but also the possibility to blend different channels together in separate layers, and access to Lab and CMYK channels in addition to RGB.
Of course it is too early to say "mission accomplished", but things seem to be on good track, and this post will try to show what can be done at the moment of writing.

Photoflow provides three main categories of grayscale conversions:
  1. straight desaturation: there are four methods available, however the individual methods are not configurable
  2. channel mixer: one of the most classic methods for grayscale conversion, it gives a lot of flexibility
  3. Pat David's black&white film presets from G'MIC: they emulate the spectral response of several widely used B&W films.
All grayscale conversion tools in PhotoFlow generate output images that are in the same colorspace as the colored original. For example, if you start from an image in the AdobeRGB colorspace, the grayscale version will still be an RGB image in AdobeRGB colorspace. Thus, you can for example easily add some toning or duo-toning without having to convert your image back to RGB...

Let's now see each tool in some detail.

Desaturation methods

The desaturation tool in PhotoFlow is a close match to the one found in GIMP, and provides the same three grayscale conversion methods: LIGHTNESS, LUMINOSITY and AVERAGE. There is a very detailed explanation of the formulas behind these three methods in Patrick David's tutorial, with many examples of how colors are converted to grayscale in the three cases, so I will not repeat all the arguments here.
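For reference, the three formulas are easy to write down. Note that the exact luminosity weights depend on the working colorspace, so the ones below are only the commonly quoted approximate values:

```python
import numpy as np

def desaturate(rgb, method):
    """GIMP-style desaturation formulas, per pixel, RGB in 0..1."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "lightness":       # midpoint of the brightest/darkest channel
        return (np.maximum(np.maximum(r, g), b) +
                np.minimum(np.minimum(r, g), b)) / 2
    if method == "average":         # plain arithmetic mean
        return (r + g + b) / 3
    if method == "luminosity":      # green-weighted perceptual approximation
        return 0.21 * r + 0.72 * g + 0.07 * b
    raise ValueError(method)

pixel = np.array([[0.9, 0.5, 0.2]])      # a warm, skin-like tone
for m in ("lightness", "average", "luminosity"):
    print(m, desaturate(pixel, m)[0])
```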

To apply the desaturation tool to a color image, you have to add a new layer and choose "Desaturate" in the "Color" group of the tools chooser dialog (see below).

Once the new layer is added, the corresponding configuration dialog will open automatically and will let you choose the desaturation method through a selector widget. You can at any time access this dialog by double-clicking on the name of the corresponding layer.

PhotoFlow provides a fourth method that has no GIMP counterpart: "L channel (Lab)". As the name indicates, it uses the L channel of the Lab colorspace to convert colors to grayscale. The Lab encoding is the result of long scientific studies of human color perception, and the L channel is designed to provide the closest match to how the human eye translates colors into perceived "lightness". If you want to learn more about the Lab colorspace and the theory behind it, the Wikipedia page is a good starting point.
Another nice advantage of the L channel is that it is "perceptually uniform", in the sense that the L values are encoded in a way that reflects the natural response of the human eye. For example, L=50% corresponds to a well-exposed 18% grey patch, which is usually called "mid-gray".
Technically speaking, in photoflow the L channel is extracted by first doing a colorspace conversion from the input profile to the Lab colorspace, then filling the a and b channels (which encode the color information) with 50% gray, and then converting the resulting greyscale image back to the original colorspace.
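Because filling "a" and "b" with the neutral value leaves only the lightness, the whole round trip collapses to a simple computation: take the relative luminance Y of the pixel and re-encode it as a neutral gray. Here is a sketch for sRGB input — my simplification; PhotoFlow performs the real conversion through the color management engine:

```python
import numpy as np

def srgb_decode(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

def l_channel_gray(rgb):
    """Lab-L desaturation for sRGB input: with "a" and "b" neutralized, the
    resulting gray's *linear* value is exactly the relative luminance Y."""
    lin = srgb_decode(np.asarray(rgb, dtype=float))
    y = lin @ np.array([0.2126, 0.7152, 0.0722])   # linear sRGB -> Y
    return srgb_encode(y)                          # back to an sRGB neutral gray

print(l_channel_gray([0.5, 0.5, 0.5]))   # a neutral input is left unchanged
```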

The images below show the results of the different desaturation methods compared to the original color image. For this comparison I have used the same portrait image that you will find in Patrick David's tutorial (with the author's permission), so that you can compare directly with the examples shown there. At least to my taste, the L channel conversion gives the best tonal variations, particularly in skin areas, followed by the luminosity one. The other two (lightness and average) give results that are globally flatter.

Whitney by Patrick David (cc by-sa)
Mouseover type to see: Original - Lightness - Average - Luminosity - Lab L channel

This last conversion method is also the only one that always gives the same result, independently of the colorspace in which the input image is encoded. To understand why, we need to introduce a bit of color management theory...

RGB, colors and color spaces...

Many people think that RGB values represent colors... however, this is not entirely true. RGB values are sort of meaningless if you do not specify the colorspace in which they are encoded. For example, a typical caucasian skin tone is represented with quite different RGB values in the sRGB and ProPhotoRGB colorspaces, and this has some consequences on the grayscale conversions discussed above. For example, the picture below lets you compare the result of a "lightness" desaturation applied to the same image encoded in sRGB and ProPhoto. As you can see, the tonality in the skin areas changes quite significantly...

Mouseover type to see: sRGB - ProPhotoRGB

The only exception is the desaturation based on the "L channel", because in this case the whole conversion is performed with color management properly taken into account. Hence, the grayscale version will look the same independently of the input colorspace, making your workflow more "predictable". In all other cases, you should consider converting all your color images to a reference colorspace before desaturating, to have a uniform starting point for all your edits. In PhotoFlow, it is just a question of adding a "colorspace conversion" layer below the desaturation.
Choose the RGB colorspace that gives you the best results, and stick with it all the time... you have been advised.
As a rule of thumb, sRGB will give you the largest tonal differences between Red and Green on skin tones, while large-gamut colorspaces like ProPhoto will bring the Red and Green channels closer to each other.
Another interesting choice is the LstarRGB colorspace, because its tone curve is encoded the same way as the L channel of Lab. Therefore, a well-exposed 18% gray patch will be represented as R=G=B=50% in LstarRGB, which in my opinion makes curves and other tonal edits more intuitive...
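The "18% gray maps to L = 50%" statement is easy to verify directly from the CIE L* formula:

```python
def lstar(y):
    """CIE L* (0..100) for a relative luminance y (0..1)."""
    return 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else (29 / 3) ** 3 * y

print(round(lstar(0.18), 1))   # -> 49.5: an 18% gray patch sits at L ~ 50%
```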

Enough theory for this post, let's go back to practice!

The channel mixer

PhotoFlow includes a simple implementation of the channel mixer. The tool is not as complete as the one found in other programs, but it is sufficient for converting images to grayscale with great flexibility. Again, for a detailed description of how the channel mixer works I encourage you to read this post before continuing.

The channel mixer tool is activated as usual: you have to add a new layer and select the "channel mixer" in the "Color" tab of the tool selection dialog. Once the layer is added, the configuration dialog of the channel mixer will show up automatically and will let you control how the RGB channels get mixed to produce the grayscale result.

For the moment, the channel mixer only creates grayscale images. Moreover, the multiplicative coefficients of the three channels are automatically scaled so that they sum up to one, thus avoiding any shift of the overall image brightness. However, the coefficients can be set to values larger than 100% (up to 200%) or negative (down to -200%).
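The normalization means that only the ratios between the coefficients matter. A minimal sketch of the mixing arithmetic:

```python
import numpy as np

def channel_mix(rgb, cr, cg, cb):
    """Grayscale channel mixer with automatic normalization: the coefficients
    are rescaled to sum to one, so the overall brightness is preserved."""
    total = cr + cg + cb
    if total == 0:
        raise ValueError("coefficients must not sum to zero")
    return rgb @ (np.array([cr, cg, cb]) / total)

pixel = np.array([0.9, 0.5, 0.2])
# only the coefficient *ratios* matter after normalization:
print(channel_mix(pixel, 120, 60, 20))   # same result as (0.6, 0.3, 0.1)
print(channel_mix(pixel, 0.6, 0.3, 0.1))
```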

Hue + channel mixer adjustment

The channel mixer is very flexible and powerful, but playing with the individual RGB multipliers until one reaches the optimal result is sometimes long and tedious. There is however a nice technique that allows you to dramatically change the result of the channel mixer using just one slider. The idea is to add a Hue adjustment layer below the channel mixer, while the mixer itself is kept at its default setting of R=100%, G=0%, B=0%. Once you are in that configuration, you can significantly change the tonality of the red channel by simply shifting the hue of the color image toward positive or negative values. As you can see below, you can greatly improve the contrast of your image, or turn your model into a dangerous alien... you have been warned!

Mouseover type to see: Original - Hue = -90 - Hue = +90

G'MIC B&W Film Presets (from Patrick David)

The last conversion technique that I'm going to discuss is based on the amazing film presets prepared for us by Patrick David and included in the G'MIC processing library.

I've already written a detailed blog post on how to use the film presets in PhotoFlow, so I suggest you to read it here if you are not familiar with them.
In this case, we need to select the "Emulate Film [B&W]" item in the "G'MIC" tab of the layer chooser dialog, as shown below.

Once you hit the Ok button, you will be prompted with the preset configuration dialog, where you can choose the actual film brand to emulate. In the example below, I've chosen the "Ilford Delta 400" preset.

Final considerations

That's all for this post. It's now time for you to do your own experimentations... you should also keep in mind that the techniques I've shown here can be further combined in lots of different ways: since they are applied as adjustment layers, you can play with layer opacities, blend modes and masks to further refine your conversions. There is almost no limit to what you can do...

Monday, November 24, 2014

Initial version of pixel cloning tool

Recently I have been working on a feature that I had been planning for a long time but had no idea how to implement... a pixel cloning tool. In the end it was not that difficult, and a preliminary version is now on GitHub for testing.

You might think that this is not such a great step forward, and that many other programs have had a similar tool for a long time (GIMP, Krita, etc...). However, the version implemented in PhotoFlow has the nice feature of being non-destructive.
In practice, this means that pixels are copied on-the-fly when requested, and that they always reflect the up-to-date state of the input data. For example, if you put the cloning tool above a curves layer, the cloned areas will automatically reflect any changes made to the curves adjustment...
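Conceptually, a non-destructive clone layer only stores the stroke and the source offset, and re-evaluates the copy against whatever its input currently is. A toy sketch of the idea — not PhotoFlow's actual implementation:

```python
import numpy as np

class CloneLayer:
    """Toy non-destructive clone: stores only the stroke mask and the source
    offset, and re-applies them to whatever its input currently is."""
    def __init__(self, offset, stroke_mask):
        self.offset = offset          # (dy, dx) from source to destination
        self.mask = stroke_mask       # True where the user painted

    def apply(self, image):
        src = np.roll(image, self.offset, axis=(0, 1))
        return np.where(self.mask, src, image)

img = np.arange(36.0).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[3, 3] = True                     # one painted pixel
clone = CloneLayer(offset=(2, 0), stroke_mask=mask)

print(clone.apply(img)[3, 3])         # -> 9.0, copied from two rows above
print(clone.apply(img * 2)[3, 3])     # -> 18.0: change the input, the clone follows
```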

Here is the preliminary tool in action:

It still needs a lot of improvements, like feathering the edges of the strokes or adjusting their opacity, but the basic functionality is there and the code is fast enough to be used in real time.
To use the cloning tool, you have to add a new layer and select the "Clone stamp" item in the "misc" tab of the tool selection dialog.

Cloning the pixels works more or less like in GIMP: first you have to ctrl-click on the image to define a source area, and then start drawing to clone the pixels at some other place. With just one minor caveat: in order to avoid drawing accidentally, the tool is "active" only when the corresponding layer is selected and the configuration dialog is opened.

If you feel brave and want to test the new tool, you'll need to update the source code from GitHub and compile it. Any feedback will be really appreciated!

Friday, November 14, 2014

High-iso noise reduction with PhotoFlow and G'MIC (part 1)

Now that several G'MIC noise reduction filters are available in PhotoFlow, I've started experimenting with them to see how they perform on typical high-ISO RAW images. This post shows a first attempt, using either Iain's noise reduction filter alone or a custom combination of Iain's filter with Garagecoder's despeckle and the guided blur filter.
More posts on this subject will most likely follow...

Noise reduction is always a trade-off between preservation of details and removal of luminance and color artifacts. You should not expect noise reduction tools to do miracles: they will not create details that are lost due to noise, and if you ask them to remove noise completely they will also remove a lot of important details... Hence, fine-tuning and personal taste are the keys.
My personal goal is to transform the initial digital noise into a more "grainy" and "good looking" (maybe even "artistic") noise, such that it does not disturb the eye.

For this tutorial I will use a 6400-ISO image from a Nikon D300 DSLR camera. The original NEF file is available from this link.
If you would like to play with the settings of my custom noise reduction, you can get the corresponding preset from here. The preset can be applied to any image, with the only limitation that the input image should be in RGB colorspace.
As usual, this tutorial requires a very recent version of PhotoFlow. You can either download and compile the sources from GitHub or get the updated windows installer from here.

The result of the recipe described in this post is shown below (mouse over to see the final result), and the rest of the post describes how I got there...

Choice of demosaicing method

The first important choice comes even before applying noise reduction methods, and involves the demosaicing of the RAW image. PhotoFlow provides two different demosaicing methods:

  • Amaze, which is designed to maximize the level of detail extracted from the Bayer pattern; it works very well on low-noise images, but we will see how it can introduce artifacts in noisy ones.
  • IGV, which produces softer results than Amaze, but ones that remain very "clean" when noise is present.
Below you can see a comparison between the two methods (mouse over to see the Amaze result). I have deliberately disabled the false color suppression in the RAW developer module, as I want to address the noise reduction problem entirely with G'MIC.
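For readers curious about what demosaicing actually does: the sensor records only one color per pixel (the Bayer pattern), and the two missing samples must be interpolated. The sketch below implements the simplest possible method, bilinear interpolation of an RGGB mosaic; Amaze and IGV are far more sophisticated, but the goal is the same:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Minimal bilinear demosaicing of an RGGB Bayer mosaic: each missing
    sample is reconstructed as the average of its nearest same-color
    neighbors."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Interpolation kernels: R/B sites are 2 pixels apart, G forms a
    # checkerboard, so these averages fill every missing position.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    r = convolve(raw * r_mask, k_rb, mode='mirror')
    g = convolve(raw * g_mask, k_g,  mode='mirror')
    b = convolve(raw * b_mask, k_rb, mode='mirror')
    return np.dstack([r, g, b])
```

On a uniform mosaic this reproduces the same value in all three channels; on real data, the averaging across edges is precisely what makes such simple methods produce artifacts, which is what smarter algorithms try to avoid.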

Iain's Noise Reduction

G'MIC provides a powerful noise reduction filter (called "Iain's Noise Reduction"), now included in the set of filters imported into PhotoFlow, which I will use as a reference for my own experiments.

Let's then see what Iain's noise reduction filter is able to do on this image. The filter parameters that I've used are shown in the screenshot on the right. I've left the chroma NR at the default value of 3, and reduced the luma NR to 1 to avoid losing too much high-frequency detail. I've also disabled the details recovery step, as it makes the filter VERY slow.

The result of applying those settings to the IGV image is shown below (mouse over to see original).

Custom noise reduction

Having Iain's reference in mind, I've tried to see whether I could get something different (and maybe better) using other noise reduction filters from G'MIC.

As a starting point, I'll use Iain's filter for chroma NR (setting the luma NR slider to 0) and two additional filters for luma NR:
  • Garagecoder's "Despeckle" filter for salt&pepper noise reduction
  • "Guided blur" filter for additional luma NR
In order to efficiently target luma noise only, I've converted the result of Iain's chroma NR to Lab colorspace, and then I've applied the "despeckle" and "guided blur" filters to the "L" channel only.
This time I'll not go through all the steps of the procedure, assuming that you already know how to add layers and layer groups in PhotoFlow (if not, I suggest you have a look at some of the past tutorials), and I'll concentrate on the results of the intermediate steps. As mentioned at the beginning of the post, you can get the corresponding preset from here to see how the layers are configured and grouped.
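The core idea of the luma-only processing can be sketched in a few lines. The toy example below uses a simple luma/chroma-ratio split instead of a real Lab conversion, and a plain median filter as a stand-in for the despeckle + guided blur chain, but the principle is the same: denoise the lightness, leave the color untouched.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_luma_only(rgb, size=3):
    """Illustrates the 'process L only' idea: split the image into a luma
    channel and per-channel chroma ratios, smooth only the luma, then
    recombine. (The post itself uses a proper Lab conversion; this is a
    simplified sketch of the same principle.)"""
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    luma = np.maximum(luma, 1e-6)
    chroma = rgb / luma[..., None]            # color ratios, left untouched
    luma_nr = median_filter(luma, size=size)  # stand-in for the luma NR chain
    return np.clip(chroma * luma_nr[..., None], 0.0, 1.0)
```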

First of all, let's see how the "despeckle" filter works. Below you can see the result of applying this filter with "tolerance=20" and "max_area=10". I've found that lower tolerance values or larger max_area values tend to introduce artifacts in small-scale details. Mouse over to see the result of the chroma NR only.
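I don't know the exact algorithm behind Garagecoder's filter, but an area-based despeckle in the same spirit can be sketched as follows (note that this toy version works on a 0-1 scale, so its tolerance is not directly comparable to the G'MIC parameter):

```python
import numpy as np
from scipy.ndimage import label, median_filter

def despeckle(img, tolerance=0.08, max_area=10):
    """Toy area-based despeckle: pixels that deviate from the local median
    by more than `tolerance` and form connected specks of at most
    `max_area` pixels are replaced by that median; larger deviating
    regions are assumed to be real detail and left alone."""
    med = median_filter(img, size=3)
    outliers = np.abs(img - med) > tolerance
    labels, n = label(outliers)
    out = img.copy()
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() <= max_area:
            out[region] = med[region]
    return out
```

This also shows why low tolerance or large max_area hurts small-scale details: more pixels qualify as "specks" and get flattened to the local median.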

Next, I've applied the "guided blur" filter on top of "despeckle", with "radius=5" and "smoothness=100". Larger values of the radius tend to increase the noise near high-contrast edges, while lower values tend to increase the overall noise. The "smoothness" parameter can be pushed up to obtain a smoother picture that still preserves good detail.
The images below show the final results (mouse over to see image after despeckle) for the IGV (top) and Amaze (bottom) demosaicing. One can clearly see that Amaze introduces more artifacts that are difficult to remove by the NR.

Finally, I show for reference the result one gets with the IGV image when setting "smoothness=200" in the guided blur filter (mouse over to see "smoothness=100" version).

As one can see, fine details are still quite well preserved. Moreover, one can reduce the opacity of the "guided blur" layer a bit in order to restore some noise texture, so that the image looks more "natural".

I hope you will find this first introduction to noise reduction with G'MIC useful. And if you find mistakes or have suggestions to improve the results, just leave a comment and I'll be more than happy to update the post with new material!

Wednesday, November 5, 2014

Patrick David film presets included in PhotoFlow

Now that the G'MIC interface has been integrated into PhotoFlow, adding specific G'MIC filters is quite easy, sometimes even straightforward...

After having added several smoothing filters for noise reduction, I decided to give Patrick David's film emulation presets a try... and the results are really nice!

The different presets are now integrated into PhotoFlow like any other non-destructive filter, meaning that you can change presets or tweak parameters and see the result on-the-fly. Some performance and GUI improvements are still possible, but the overall usability is, I think, already quite good.

In order to use the film presets, you first of all need to download and install the color lookup tables: get the zip file from here, and then unpack it into the right directory on your disk, which is


in the Linux case and


for Windows.
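These presets are distributed as HaldCLUT images, i.e. big lookup tables that map each input color to a film-like output color. If you are curious about the mechanism, here is a minimal nearest-neighbour application of a Hald CLUT (real implementations interpolate between table entries; the layout assumed here is the standard one, with red varying fastest):

```python
import numpy as np

def apply_hald_clut(img, clut, level=8):
    """Applies a HaldCLUT with nearest-neighbour lookup. `img` is a float
    RGB image in [0, 1]; `clut` is the Hald image flattened to an (N, 3)
    array with N = level**6, giving level**2 samples per color axis."""
    n = level * level                    # samples per color axis
    idx = np.clip((img * (n - 1)).round().astype(int), 0, n - 1)
    # Standard Hald ordering: red varies fastest, then green, then blue.
    flat = idx[..., 0] + n * idx[..., 1] + n * n * idx[..., 2]
    return clut[flat]
```

An identity CLUT (each entry mapping to its own color) leaves the image unchanged up to quantization; a film preset simply stores shifted colors in the same table.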

Once the presets are installed and you have downloaded and compiled a fresh version of PhotoFlow, you can apply them like any other tool:

  1. click on the "+" button to add a new layer
  2. activate the "G'MIC" tab and scroll down to the film presets category you want to use
  3. click "OK" to add the layer and open the corresponding configuration dialog
The steps are summarized in this screenshot:

Once the layer configuration dialog is opened, you will be able to switch between presets and change parameters like opacity, contrast, hue, etc... Your changes will be immediately shown in the preview window (see below).

The controls in the configuration dialog are still a bit ugly and would benefit from some clean-up and rearrangement, but the basic functionality is there.

For the moment the film presets are only available from the git develop branch compiled from source. An updated windows executable is in preparation...