Monday, December 22, 2014

The alien grasshopper - Complex layer masking using channel data

In this tutorial I will show how to create a layer mask based on the image data itself, and how to select given portions of an image seamlessly and with little effort.

To illustrate this technique, I will take as an example the grasshopper image that I posted some time ago, and show how I isolated this nice insect from the flower and the rest of the background. To show the technique more clearly, in this post I will turn the grasshopper into an alien insect by applying a selective hue shift, as you can see below:



Isolating the insect manually, either by drawing with a brush or by using a freehand selection tool, would be very time consuming and boring, and probably not as precise as the technique shown here.
Instead, I will show you here how you can use the image data itself to create a convenient grasshopper mask, all in a few mouse clicks. The idea is to find image channels in which the grasshopper is well separated from the flower, and then add a curves adjustment to the channel data so that the grasshopper appears completely white and the flower completely black in the final mask (with some smooth variation in the transition regions). The first step is to identify which channels are the most promising. You can choose between the individual Red, Green and Blue channels of the RGB colorspace and the L, a and b channels of the Lab colorspace.

The grasshopper differs a lot from the flower in terms of color, but not too much in terms of luminosity, therefore we can already guess that the "L" channel will not be very useful... the other channels are all potentially promising, and we need to look at them directly to see which are best.

To visualize the individual channels, do the following: create a "Clone" layer (which I called "channel selector"), select the layer you want to inspect, and choose the channel from the corresponding drop-down list. The individual channels of the grasshopper image are shown below (move the mouse over the caption items to see the corresponding image). The contrast of the "a" and "b" channels has been increased to better show the tonal variations.


Click type to see: Red - Green - Blue - Lab "a" - Lab "b"


As expected, the "a" and "b" channels look like the most promising ones... Not surprising, since the grasshopper is separated from the flower mostly in terms of color. The "a" and "b" channels might be difficult to interpret if you are not familiar with the Lab colorspace, so I'll spend a few words explaining why they look like that.
The "a" and "b" channels encode the color information of the image, independently of the luminosity, in a quite peculiar way: a neutral grey corresponds to "a=50%, b=50%", while values above and below 50% encode complementary colors. For example, "a<50%" corresponds to green and "a>50%" to magenta, while "b<50%" corresponds to blue and "b>50%" to yellow. The further the value is from 50%, the higher the saturation of the corresponding color component.

With that in mind, we can already expect that the flower and the grasshopper will have quite different values in the "a" channel (the grasshopper is greenish and therefore "a<50%", while the flower is pinkish and so we can expect "a>50%"). The "b" channel is less obvious (the grasshopper is certainly yellowish with "b>50%", but the b values in the flower are not so evident...), but a direct look at it shows that there is quite a strong separation between grasshopper and flower in this channel as well.
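For readers who want to play with these numbers, here is a minimal Python sketch (my own illustration, not PhotoFlow code) of the standard sRGB-to-Lab conversion. Note that standard Lab puts neutral at a = b = 0, so "a < 50%" in the normalized view used above corresponds to a < 0 here:

```python
def srgb_to_lab(r, g, b):
    """Convert an sRGB triplet (0..1) to CIE Lab (D65 white point)."""
    def lin(c):  # undo the sRGB gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix, D65 illuminant)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # normalize by the D65 reference white
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b_ = 200 * (fy - fz)
    return L, a, b_

L, a, b = srgb_to_lab(0.3, 0.6, 0.2)   # a greenish tone, like the grasshopper
print(a < 0, b > 0)                    # → True True (green and yellowish)
L, a, b = srgb_to_lab(0.9, 0.5, 0.6)   # a pinkish tone, like the flower
print(a > 0)                           # → True
```

As expected, the greenish sample lands on the negative (green) side of "a" and the pinkish one on the positive (magenta) side, which is exactly the separation the mask will exploit.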

For this image I ended up using a combination of the "a" and "b" channels to isolate the grasshopper, and the Red channel to mask the out-of-focus background.

Now that the masking strategy is ready, I made the "channel selector" layer invisible (but kept it there in case I wanted to inspect the individual channels again later on). Invisible layers do not consume any memory and are simply skipped by the processing pipeline, so you can have as many as you like without any negative impact on processing performance...
I added a "Hue/Saturation" adjustment layer on top of the invisible channel selector. I will change the Hue of this layer until the green color of the insect turns into electric blue, but first I will work on the layer mask to isolate the grasshopper from the rest. For that, I double-clicked on the icon corresponding to the "opacity mask" (the one at the extreme right of the layer row) to open the layer stack of the mask itself (initially empty).
First, I added a new layer group (I've called it "a channel mask") and then added a "Clone" layer inside this group. I selected the "a" channel of the background layer as source, and clicked on the "show active layer" button below the preview window to activate the visualization of the mask itself. The preview window now looks like this:


Next, I added a "Curves" layer on top of the "a" clone and applied a "threshold-like" curve that turns the mask pure white over the grasshopper, and pure black over the flower (with some smooth transitions).
Here is how I did it:


I repeated then the above steps for the "b" channel (created a group, inserted a clone layer into the group and then a "Curves" adjustment on top of the cloned channel), obtaining this result:
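The "threshold-like" curve used in both groups is easy to describe in code. Below is a minimal Python sketch (my own illustration, not PhotoFlow's curve implementation), using a smoothstep ramp for the smooth transition region; the cutoff values 0.40 and 0.60 are just example numbers:

```python
def threshold_curve(x, lo, hi):
    """Smooth 'threshold-like' curve: 0 below lo, 1 above hi,
    with a smooth (smoothstep) ramp in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    t = (x - lo) / (hi - lo)
    return t * t * (3 - 2 * t)  # smoothstep: zero slope at both ends

a_channel = [0.30, 0.35, 0.50, 0.65, 0.70]  # fake "a" values, 0..1
# greenish (a < 50%) -> white mask, pinkish (a > 50%) -> black mask
mask = [round(1.0 - threshold_curve(a, 0.40, 0.60), 3) for a in a_channel]
print(mask)  # → [1.0, 1.0, 0.5, 0.0, 0.0]
```

For the "a" channel the curve is inverted (as above), since the grasshopper sits on the low side of "a"; for the "b" channel it is applied directly.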


The two masks look similar, but not identical... in fact, the best result would be a combination of the two. Combining the two masks is quite easy: I simply changed the blending mode of the top group (with the "b" channel mask) to "Lighten", to obtain this:
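On greyscale masks these blend modes have a very simple meaning, which the little Python sketch below illustrates (plain lists standing in for pixel buffers): "Lighten" keeps the per-pixel maximum, so it acts as a union of the white areas, while "Darken" keeps the minimum, acting as an intersection.

```python
def blend_lighten(base, top):
    """'Lighten' keeps the lighter of the two mask values -- per-pixel max,
    i.e. the union of the white (selected) areas."""
    return [max(b, t) for b, t in zip(base, top)]

def blend_darken(base, top):
    """'Darken' keeps the darker value -- per-pixel min, an intersection."""
    return [min(b, t) for b, t in zip(base, top)]

a_mask = [1.0, 0.8, 0.0, 0.0]
b_mask = [1.0, 0.0, 0.6, 0.0]
print(blend_lighten(a_mask, b_mask))  # → [1.0, 0.8, 0.6, 0.0]
print(blend_darken(a_mask, b_mask))   # → [1.0, 0.0, 0.0, 0.0]
```

This is why "Lighten" is the right mode here (a pixel is selected if either channel mask selects it), while "Darken" will be the right mode later, when the background mask must restrict the selection.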


At this point, the masking of the grasshopper looks quite OK (we could remove the few remaining red areas by hand), but I still need to isolate the out-of-focus background. For that, the "Red" channel seemed the most appropriate. The technique is always the same: I added a layer group, inserted a clone layer inside the group and a curves adjustment on top of it. I ended up with this red channel edit:


The red channel mask now needs to be combined with the "a+b" one: for that, I applied a slight blur (5 pixels of radius) and then changed the blending mode of the red channel group to "Darken", to obtain an almost final result:


There are still a couple of unmasked spots in the background, but those are easily corrected by drawing directly on the mask with a large black pencil. Therefore, my last masking step is to add a "Draw" layer on top of the "red channel mask" group, set the background color to white and the pen color to black, and change the blending mode of the "Draw" layer to "Darken". Now I could draw with a large pencil (I've set the size to 100 pixels) to completely remove the unmasked regions. The final result looks like this:


As you can see, I've ended up with a quite complex mask edit. However, I can guarantee that the procedure takes much longer to describe than to carry out. With a little practice, it is easy to identify the channels that are likely to be good for masking, and the technique is rather simple: create a group, clone a channel inside the group, and add a threshold-like curve adjustment to create the mask.

The nice thing is that all steps in the mask creation are non-destructive, and you can go back and tweak any setting whenever you like, even after saving and reopening the file later on.
Moreover, despite the relatively large number of layers and blends, the mask is computed quite fast and does not significantly slow down the preview of the edited result.

Finally, it's time to turn the color of the grasshopper into electric blue. First I switched back to the main "Layers" panel to edit the "Hue/Saturation" adjustment. I double-clicked on the corresponding layer name to open the configuration dialog, and shifted the hue by 150 degrees in the positive direction... and there it is, a weird alien has taken the place of the nice insect!



Of course, the same technique can be applied to more "traditional" edits... I've used the same mask to increase the contrast a bit and give the grasshopper some more "pop". That's the final result (mouse over to see the original):


Thursday, December 18, 2014

Resurrecting an old idea...

Several years ago I developed some code that adds natural-looking grain to a black and white image. The code has never been released; it just stayed on my hard drive, and I've been using it from time to time to add grain to my images. Now that PhotoFlow is getting more solid and stable, it is maybe time to resurrect this old project and try to integrate it with the rest of PhotoFlow's tools.

Here are a couple of examples of what the added grain looks like (they correspond to two possible choices of the grain size), compared to the original image.









The way the grain is generated is quite different from conventional approaches: instead of overlaying a "grain field" on the original image, the final result is generated by adding the individual grains one by one. The density at which the grains are locally distributed is such that, on average, the final tonal value matches the original one.
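A toy version of this idea can be sketched in a few lines of Python (this is my own simplification for illustration, not the actual unreleased code): dark grains are stamped one by one on a white canvas, with a local placement probability equal to the local darkness, so that the expected coverage reproduces the original tone.

```python
import random

def render_grain(image, grain_size=1, seed=0):
    """Rebuild a greyscale image (nested lists, values 0..1, 0 = black) by
    stamping individual dark 'grains' on a white canvas.  The local grain
    density is chosen so the average coverage matches the original tone."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    out = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            darkness = 1.0 - image[y][x]
            # place a grain here with probability equal to the local darkness
            if rng.random() < darkness:
                for dy in range(grain_size):
                    for dx in range(grain_size):
                        if y + dy < h and x + dx < w:
                            out[y + dy][x + dx] = 0.0
    return out

flat = [[0.7] * 50 for _ in range(50)]           # a flat 70% grey patch
out = render_grain(flat, grain_size=1, seed=42)
mean = sum(sum(row) for row in out) / 2500
print(abs(mean - 0.7) < 0.05)  # tone is preserved on average
```

With single-pixel grains the expected tone is matched exactly; grains larger than one pixel overlap, so a real implementation must compensate with a correspondingly lower placement density.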

The difference with respect to a classic grain overlay might not be huge, but is visible. As an example, you can find below a comparison with the result of Darktable's grain filter at 6400 ISO and 50% strength (mouseover to see the result of my method):


All that might be just over-complicated, but it was an interesting intellectual exercise which brings (at least to my taste) some visual benefits. It will probably land in PhotoFlow sooner or later, also depending on the feedback I get from possibly interested users...


UPDATE: G'MIC also provides a nice grain simulation filter, and it was interesting to compare it with my own recipe.
Surprisingly, my "large grain" setting shown above matches very closely G'MIC's "TMAX 3200" preset at 80% opacity, 100% scale and "grain merge" blending. The comparison is shown below (left: G'MIC, right: my code):

 
Below you can see the test image rendered by G'MIC using the same settings (mouse over to see my own result). As one can see, while the grain rendering in the mid-tones is very similar, the two methods give quite different results in dark and light areas, as well as in regions of high contrast.

Thursday, December 11, 2014

HiRaLoAm effect with PhotoFlow

The so-called "High Radius Low Amount" sharpening method (HiRaLoAm, originally introduced by Dan Margulis) consists of applying the unsharp mask filter with a large radius (let's say more than 10px) and a low amount (typically below 40%). The method is more a local contrast enhancement technique than a sharpening one: the halos created by the large radius add "volume" to the image and give an impression of increased local contrast.

While this is a quick-and-dirty local contrast enhancement technique (which might produce visible halos in the final image), it is quite instructive to see how it can be achieved in PhotoFlow by simply using the gaussian blur filter and the appropriate layer blending modes, and it can be an easy way to give your image some more "pop".

Through this short tutorial you will also see how to create a high-pass filter, as it is one of the necessary steps of the HiRaLoAm technique I'm showing here.

First of all, open an existing image and add a group layer at the top of the layer stack (I've called the group layer "hiraloam").



Then, add a gaussian blur filter inside the "hiraloam" group and set the blending mode of the layer to "grain extract". This produces a high-pass version of the original image. As a starting point, set the radius to something between 10px and 20px. You will have the possibility to tweak it afterwards if needed.



We are now one step away from enhancing the local contrast... all you need to do is set the blending mode of the group layer to "Overlay". Et voilà, your image immediately gets some "pop"! Or maybe too much "pop"... you will most likely need to reduce the effect by lowering the opacity of the group layer until the result no longer looks artificial.
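The whole recipe boils down to three formulas, which are easy to verify in a small Python sketch (a 1-D toy model with a box blur standing in for the gaussian; the blend mode formulas are the standard GIMP-style ones, which is an assumption on my part about how PhotoFlow defines them):

```python
def box_blur(img, radius):
    """Crude 1-D stand-in for the gaussian blur (a box average)."""
    n = len(img)
    return [sum(img[max(0, i - radius):min(n, i + radius + 1)]) /
            (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def grain_extract(base, blurred):
    """'Grain extract' blending: base - blurred + 0.5 -> a high-pass layer."""
    return [min(1.0, max(0.0, b - bl + 0.5)) for b, bl in zip(base, blurred)]

def overlay(base, top):
    """'Overlay' blending: darkens where top < 0.5, lightens where top > 0.5."""
    return [2 * b * t if b < 0.5 else 1 - 2 * (1 - b) * (1 - t)
            for b, t in zip(base, top)]

def hiraloam(img, radius, opacity):
    hp = grain_extract(img, box_blur(img, radius))  # the high-pass group
    boosted = overlay(img, hp)
    # lowering the group opacity = blending back with the original
    return [(1 - opacity) * b + opacity * o for b, o in zip(img, boosted)]

img = [0.3] * 5 + [0.7] * 5            # a soft 1-D "edge"
out = hiraloam(img, radius=3, opacity=0.5)
print(min(out) < 0.3, max(out) > 0.7)  # → True True: local contrast increased
```

The dark side of the edge gets darker and the light side lighter, which is exactly the "volume" effect described above.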



Here is my final result for the grasshopper picture, using a blur radius of 20px and an opacity of 50%. Move the mouse over the image to see the original.


Simple, isn't it? You can now save the "hiraloam" group as a PhotoFlow preset, and load it back to quickly apply this technique if your image looks a bit "too flat".

Saturday, December 6, 2014

"Color spot" RAW white balance mode in PhotoFlow

Sometimes, getting a correct white balance for a shot taken in non-standard conditions (like for example below the trees in a deep forest) is not straightforward, unless you had the precaution of shooting a neutral grey card in the same lighting conditions... I don't know about you, but I never have this precaution! So I tend to end up with several shots that are all different, without an obvious neutral object to use as a reference, and with a camera white balance that changes from one shot to another.

It is with this kind of situation in mind that I decided to code a new white balance mode that is not found in other open source RAW editors: the "color spot" mode. This mode is used to adjust the white balance based on known colors that are not neutral. But before discussing that in detail, let's briefly review the white balance options currently available in PhotoFlow. First things first.

PhotoFlow currently offers only three RAW white balance modes:

  1. "CAMERA WB": applies the white balance coefficients stored by the camera in the raw file at the time of shooting.
  2. "SPOT WB": this tool requires you to click on the image with the mouse in order to select a region that is supposed to be grey. The image data is then averaged over a small (15x15 pixels) area around the clicked point, and the WB coefficients are adjusted to neutralize the corresponding color.
  3. "COLOR SPOT WB": this tool works in a similar way to the normal "SPOT WB", except that it lets you specify a non-neutral target color for the selected area. The target color is given in terms of Lab "a" and "b" values, so that the result is independent of the camera and working profiles being used.
The available options are clearly still quite limited, and all the standard white balance presets (daylight, shadow, incandescent or fluorescent light, etc.) are badly lacking (but planned for the near future).
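To make the "SPOT WB" logic concrete, here is a minimal Python sketch (my own illustration of the general idea, not PhotoFlow's actual code). The "COLOR SPOT" mode works the same way, except that instead of driving the patch average to neutral grey it adjusts the multipliers until the patch's Lab "a" and "b" values match the requested target:

```python
def spot_wb_multipliers(patch):
    """Spot WB sketch: average the sampled patch and compute per-channel
    multipliers that turn the average into a neutral grey (R = G = B).
    Multipliers are normalized so green stays at 1.0, as is customary
    for raw white balance coefficients."""
    n = len(patch)
    avg_r = sum(p[0] for p in patch) / n
    avg_g = sum(p[1] for p in patch) / n
    avg_b = sum(p[2] for p in patch) / n
    return (avg_g / avg_r, 1.0, avg_g / avg_b)

# a slightly warm patch: red too strong, blue too weak (made-up values)
patch = [(0.5, 0.4, 0.3), (0.52, 0.4, 0.3)]
mr, mg, mb = spot_wb_multipliers(patch)
avg = [sum(p[i] for p in patch) / len(patch) for i in range(3)]
balanced = tuple(round(c * m, 6) for c, m in zip(avg, (mr, mg, mb)))
print(balanced)  # → (0.4, 0.4, 0.4): the patch average is now neutral
```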

The first two methods are found in any raw converter, therefore I will skip their detailed description and jump directly to the third one, which is something quite new. To describe how it works, I'll use a portrait shot that Patrick David made freely available on his blog, and I will try to set the white balance based on the tonality of the model's skin.

To activate the "COLOR SPOT" tool, you have to open the raw developer dialog, select the "White balance" tab and choose the "Color spot" mode in the "WB mode" drop-down list. For more details on how to develop RAW images in PhotoFlow, you can have a look at this previous blog post.

It is now time to decide what color you want to match in your picture, and put the proper "a" and "b" values in the corresponding boxes; for this picture, I was able to come very close to the camera white balance using a=17 and b=16. These values make sense, as they correspond to a color tonality that has about as much magenta as yellow... you can slightly increase the "b" value to get a more "tanned" aspect, depending on what you want to achieve. Decrease both values if you want to obtain a "paler" skin tone.

In order to sample a uniform skin region, I chose a point in the middle of the forehead, just above the eyebrows. Then I adjusted this point with three different settings for "a" and "b": the one that matches the camera WB, a "sun-burned" setting with "a" quite a bit larger than "b", and an "extra-tanned" setting with "b" quite a bit larger than "a". The last two settings are obviously far too extreme, and are there only to illustrate the range of results that can be obtained.



Source image: Mairi by Patrick David (cc by-sa)
Click type to see: Camera WB - Color spot (a=17,b=16) - Color spot (a=22,b=16) - Color spot (a=16,b=22)

That's it! Once you know the approximate "a" and "b" values that are needed to get the desired color tint, the procedure becomes quite fast and intuitive. And might save you a lot of time trying to get the right white balance "by eye".

Resources usage during image processing

I have already written in several places that PhotoFlow only uses a small amount of resources during processing. Today I decided to give you a nice example of that: the screenshot below shows a 100-megapixel image (10k x 10k pixels) with a curves adjustment and a gaussian blur filter applied to it, being processed in 32-bit floating-point precision and saved to disk. As you can see, the processing saturates the two available cores on my machine, while the memory usage remains as low as about 3% of the available 4GB of RAM.



This is actually a benefit of using VIPS as the underlying processing engine. VIPS splits the image into small chunks that are processed in as many concurrent threads as there are cores on your machine. At any moment, only the active chunks are loaded into memory, thus avoiding the need for very large memory buffers. Thanks to that, PhotoFlow is able to load and process images of arbitrary size, even much larger than the available physical memory. The image data is stored in temporary disk buffers, therefore you need a sufficient amount of free disk space to process very large images. But apart from that, there is theoretically no limitation.
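The chunked processing model is easy to illustrate with a few lines of Python (a toy sketch of the demand-driven idea, not how VIPS itself is written): the image is split into tiles, and each tile is read, processed and written back independently by a pool of worker threads, so only the active tiles live in memory at any moment.

```python
from concurrent.futures import ThreadPoolExecutor

def process_tiled(width, height, tile, read_tile, op, write_tile, workers=4):
    """Process an image tile by tile: read just one chunk, apply the
    operation, write the result, and let the chunk be freed."""
    def job(x0, y0):
        w = min(tile, width - x0)
        h = min(tile, height - y0)
        data = read_tile(x0, y0, w, h)   # load only this chunk
        write_tile(x0, y0, op(data))     # store the result
    coords = [(x, y) for y in range(0, height, tile)
                     for x in range(0, width, tile)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda c: job(*c), coords))

# toy 8x8 image stored as a dict of 4x4 tiles; invert each tile independently
src = {(x, y): [0.25] * 16 for y in range(0, 8, 4) for x in range(0, 8, 4)}
dst = {}
process_tiled(8, 8, 4,
              read_tile=lambda x, y, w, h: src[(x, y)],
              op=lambda data: [1.0 - v for v in data],
              write_tile=lambda x, y, data: dst.__setitem__((x, y), data))
print(len(dst), dst[(0, 0)][0])  # → 4 0.75
```

In the real engine the tiles are requested on demand by whoever pulls on the end of the pipeline (the preview or the file writer), which is what keeps the memory footprint so small.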

Monday, December 1, 2014

News: G'MIC Dream Smoothing filter added to PhotoFlow

It seems that the Dream Smoothing filter is one of the favorite and most used filters of the G'MIC library. Given its popularity, I thought it would be a pity not to have it available in PhotoFlow... so here it is! The implementation is still preliminary, but it works!


The dream smoothing tool was also a good opportunity to test a new kind of filter implementation in PhotoFlow, i.e. non-realtime filters. As you can see from the screenshot above, the configuration dialog for the dream smoothing has an additional "Update" button above the various sliders. This is because this filter is not really compatible with real-time, tiled processing, as it is quite slow and would also require a large tile padding. Instead, you have to manually start the processing by hitting the "Update" button. The same needs to be done whenever you change any of the parameters. For the same reason, if you save a photoflow image containing one or more dream smoothing layers, those layers will be hidden by default whenever you open the image again. The smoothed image will only be computed the first time you toggle the layer visibility.

When it is run, the filter reads and writes 32-bit floating-point TIFF images, so it fits seamlessly into the floating-point processing pipeline of PhotoFlow, without any loss of precision.
If you put additional adjustment layers on top of the dream smoothing, they will be immediately refreshed whenever the computation of the new smoothed image is finished.

By the way, this filter is really amazing... good job, G'MIC developers!