
Do it in the Buffer: Introduction to Dithering

I recently became intrigued by some artwork I saw online: a bunch of Rubik’s cubes set up to reproduce a picture.

The artists are essentially reducing the image’s colors down to a palette of six colors with dithering. Because the picture is dithered, it looks like the subject from a distance: the colors mix and merge to resemble the original colors. The closer you get to the piece, though, the more it looks like a bunch of randomly ordered colored squares. It sparked the artist in me, so I decided to use LEADTOOLS to come up with my own Rubik’s Cubism-like art.

For more information on different dithering methods, refer to Introduction: Color Resolution and Dithering.

Not Dithering

The simplest way to reduce the colors is to replace each pixel with the palette color that is the closest match. This is not dithering, just color replacement. Dithering uses noise (contrasting colors) so that, when the pixels are viewed from a distance, the blended colors appear close to the original. For example, with a black-and-white palette, straight replacement turns every mid-gray pixel into the same color, whereas dithering mixes black and white pixels so the area still reads as gray from a distance.

Translating the Real-World to Digital

In real-world terms, a standard Rubik’s cube face consists of nine 5/8″ squares. For this specific example, I am going to reduce the size of the squares to ½″; I will explain why shortly. Relate that to an image, which consists of pixels and a DPI (pixels per inch). If I change the DPI of the image to 2, then each pixel represents a real-world ½″. Now it becomes a trivial task to resize the image data to whatever real-world size I want for my final piece, or to base the size on the total number of squares (pixels) I have. For example, if I know I have 2,500 squares available to create a piece, I could do something like this:

// Scale so the total pixel count is as close to 2500 as possible
// while preserving the aspect ratio.
int originalPixelCount = originalImageWidth * originalImageHeight;
double factor = Math.Sqrt(2500d / originalPixelCount);
int newWidth = (int)(originalImageWidth * factor);
int newHeight = (int)(originalImageHeight * factor);

Once the new width and height are calculated, resizing the image and changing its resolution with LEADTOOLS is accomplished like this:

new SizeCommand()
{
   Flags = RasterSizeFlags.Bicubic,
   Height = newHeight,
   Width = newWidth
}.Run(rasterImage);
rasterImage.XResolution = 2;
rasterImage.YResolution = 2;

Back to why I changed the size of the blocks on the cube: if I kept the 5/8″ squares, the math would get more complicated because image resolution is stored as a whole number of pixels per inch. 5/8″ squares would require 1.6 DPI (1 ÷ 0.625), while ½″ squares map exactly to 2 DPI. For now, I am keeping it simple.
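
As a quick sanity check (this snippet is not part of the original sample), the physical size of the finished piece can now be read straight from the pixel dimensions and the resolution:

// Each pixel is one 1/2-inch square at 2 DPI, so dividing by the resolution
// gives the real-world dimensions of the finished piece.
double widthInches = rasterImage.Width / (double)rasterImage.XResolution;
double heightInches = rasterImage.Height / (double)rasterImage.YResolution;
Console.WriteLine($"Finished piece: {widthInches} x {heightInches} inches " +
                  $"({rasterImage.Width * rasterImage.Height} squares)");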

The next thing I need is a six-color palette. To do this, I found some Rubik’s Cube images online and sampled the colors. I used those colors to create an array of RasterColors to represent the custom palette:

private readonly RasterColor[] _rubiksCubePalette =
{
   new RasterColor(255, 255, 255),
   new RasterColor(140, 0, 15),
   new RasterColor(0, 115, 47),
   new RasterColor(0, 51, 115),
   new RasterColor(255, 210, 0),
   new RasterColor(255, 70, 0),
};

Making Art

Now that I have the palette and the image resized to match the total number of squares available, I can start reducing the colors. Using the built-in LEADTOOLS ColorResolutionCommand with dithering is easy and extremely fast. For example:

new ColorResolutionCommand(
      ColorResolutionCommandMode.InPlace,
      4, RasterByteOrder.Bgr,
      ditheringMethod,
      ColorResolutionCommandPaletteFlags.UsePalette,
      palette)
   .Run(rpbProcessed.Image);

By using the LEADTOOLS ColorResolutionCommand, the color matching and the dithering of the image were both handled for me.

In the screenshot below, the original image is at the top right. The bottom right is the resized image with all of its colors. The top left is the color-reduced result, and the bottom left is that result as if viewed from a distance.

But the artist in me is not satisfied. I want more control over the color matching to get the specific look I am after. To start, I need to get the image data into a buffer. For this project, I load the image into a byte array as follows:

private static byte[] AllocateImageBytes(RasterImage image)
{
   int height = image.Height;
   int bytesPerLine = image.Width * (image.BitsPerPixel == 24 ? 3 : 4);
   var imageBuffer = new byte[bytesPerLine * height];
   image.Access();
   try
   {
      // Copy each scan line of the image into the managed buffer
      for (var y = 0; y < height; y++)
         image.GetRow(y, imageBuffer, y * bytesPerLine, bytesPerLine);
   }
   finally
   {
      image.Release();
   }
   return imageBuffer;
}

With the image data in a byte array, I can process it in any way imaginable.
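
For illustration (this snippet is not from the project code), with 24-bit BGR data the pixel at coordinates (x, y) starts at the index below; the CalculateIndex helper shown later wraps the same arithmetic:

// Example only: locate one pixel's channels in the flat byte buffer
int bytesPerPixel = 3;   // 4 for 32-bit images
int index = y * width * bytesPerPixel + x * bytesPerPixel;
byte blue  = buffer[index];       // BGR order: blue first
byte green = buffer[index + 1];
byte red   = buffer[index + 2];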

Color Matching

I researched how to find the nearest color and found many different solutions on Wikipedia. I implemented three: a simple Euclidean distance formula and two variations based on relative luminance. I included links to the Wikipedia pages in the code for additional reference and background.

private static int CalculateEuclideanDistance(this RasterColor c1, RasterColor c2)
{
   // https://en.wikipedia.org/wiki/Color_difference
   // Returns the squared distance; the square root is unnecessary for comparisons.
   int dR = c1.R - c2.R,
       dG = c1.G - c2.G,
       dB = c1.B - c2.B;
   return dR * dR + dG * dG + dB * dB;
}

private static double CalculateRelativeLuminance601(this RasterColor c1, RasterColor c2)
{
   // https://en.wikipedia.org/wiki/Relative_luminance
   // https://en.wikipedia.org/wiki/Luma_(video)
   double y1 = (c1.R * 299 + c1.G * 587 + c1.B * 114) / (255d * 1000),
          y2 = (c2.R * 299 + c2.G * 587 + c2.B * 114) / (255d * 1000);

   double dLuma = y1 - y2;

   double dR = (c1.R - c2.R) / 255d, 
          dG = (c1.G - c2.G) / 255d, 
          dB = (c1.B - c2.B) / 255d;

   return (dR * dR * .299 + dG * dG * .587 + dB * dB * .114)
            + dLuma * dLuma;
}

private static double CalculateRelativeLuminance709(this RasterColor c1, RasterColor c2)
{
   // https://en.wikipedia.org/wiki/Relative_luminance
   // https://en.wikipedia.org/wiki/Luma_(video)
   double y1 = (c1.R * 212.6 + c1.G * 715.2 + c1.B * 72.2) / (255d * 1000),
          y2 = (c2.R * 212.6 + c2.G * 715.2 + c2.B * 72.2) / (255d * 1000);

   double dLuma = y1 - y2;

   double dR = (c1.R - c2.R) / 255d,
          dG = (c1.G - c2.G) / 255d,
          dB = (c1.B - c2.B) / 255d;

   return (dR * dR * .2126 + dG * dG * .7152 + dB * dB * .0722)
            + dLuma * dLuma;
}
public static RasterColor FindClosestColor(
   RasterColor color, 
   RasterColor[] palette, 
   ColorDifferencingMethod colorDifferencingMethod)
{
   double minDistance = double.MaxValue;
   var bestIndex = 0;

   for (var i = 0; i < palette.Length; i++)
   {
      // Note: only the EuclideanDistance enum member appears in the original
      // listing; the other two member names are assumed here.
      double distance =
         colorDifferencingMethod == ColorDifferencingMethod.EuclideanDistance
            ? color.CalculateEuclideanDistance(palette[i])
            : colorDifferencingMethod == ColorDifferencingMethod.RelativeLuminance601
               ? color.CalculateRelativeLuminance601(palette[i])
               : color.CalculateRelativeLuminance709(palette[i]);

      if (distance < minDistance) { minDistance = distance; bestIndex = i; }
   }

   return palette[bestIndex];
}

At this point, I am ready to start processing some data. I start with a simple color replacement (no dithering) to keep the code simple and to test the nearest-color matching.

public static RasterImage NoDithering(
   RasterImage image, 
   RasterColor[] ditheringPalette, 
   ColorDifferencingMethod colorMatching)
{
   int width = image.Width, 
       height = image.Height;
   int rOffset, bOffset, gOffset;
   if (image.Order == RasterByteOrder.Bgr)
   {
      rOffset = 2; gOffset = 1; bOffset = 0;
   }
   else
   {
      rOffset = 0; gOffset = 1; bOffset = 2;
   }
   
   byte[] buffer = AllocateImageBytes(image);

   // Translates the x,y coordinates to a buffer index
   int CalculateIndex(int x, int y) =>
      y * width * (image.BitsPerPixel == 24 ? 3 : 4)
      + x * (image.BitsPerPixel == 24 ? 3 : 4);

   for (var y = 0; y < height; y++)
   for (var x = 0; x < width; x++)
   {
      int index = CalculateIndex(x, y);
      byte b = buffer[index + bOffset],
           g = buffer[index + gOffset],
           r = buffer[index + rOffset];

      // Replace the pixel with the closest palette color -- no error diffusion
      RasterColor bestColor = FindClosestColor(
         new RasterColor(r, g, b), ditheringPalette, colorMatching);

      buffer[index + rOffset] = bestColor.R;
      buffer[index + gOffset] = bestColor.G;
      buffer[index + bOffset] = bestColor.B;
   }

   // Copy the processed buffer back into the image one scan line at a time
   int bytesPerLine = width * (image.BitsPerPixel == 24 ? 3 : 4);
   image.Access();
   try
   {
      for (var y = 0; y < height; y++)
         image.SetRow(y, buffer, y * bytesPerLine, bytesPerLine);
   }
   finally { image.Release(); }

   return image;
}

Error Diffusion

Now that I can see the color matching is working well, I move on to error diffusion dithering. Error diffusion means that the difference between the closest palette color and the original color (the error) is spread around, or diffused, to the neighboring pixels. There are many different types of error diffusion matrices. One of the oldest is Floyd-Steinberg. In the matrix below, the # represents the current pixel and – represents a previously processed pixel. The numbers are numerators; the denominator for the matrix is 16.

– # 7
3 5 1

For example, assume a two-color palette of black (0) and white (255) and the following gray image, in which the first pixel has already been processed to black:

0   100 100 100
100 100 100 100
100 100 100 100

The current pixel (the second in the top row) has a value of 100. The closest palette color is 0, so the error is 100 (100 - 0). The color values change as follows:

0   0   143 100
118 131 106 100
100 100 100 100

Note that only the pixel currently being processed has to be set to a color in the palette. The surrounding pixels just need to be updated with the error; it does not matter whether their new values are in the palette until those pixels are processed. This is also why the matrix is forward-only: once a pixel has been processed, its value should not be changed.

The process happens again on the next pixel, which is now 143. The closest color is 255, so 143 becomes 255 and the error is -112 (143 - 255), which is distributed to its neighbors. If a new value underflows or overflows the range of a byte, it is clamped to 0 or 255:

0   0   255 51
118 110 71  93
100 100 100 100

The combined result: the first gray pixel became black and the next became white. When viewed from a distance, black and white merge and look gray again. Because 100 is closer to black than to white, the diffused error causes more pixels to fall to black than to white, and that higher density of black pixels makes the dithered area read as a darker gray, matching the relatively dark original value.
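
The arithmetic above can be checked with a few standalone lines (this snippet is only a verification of the example, not part of the project code); integer division is what produces the truncated values:

int error = 100 - 0;                      // first step: 100 snaps to black
Console.WriteLine(100 + error * 7 / 16);  // right neighbor       -> 143
Console.WriteLine(100 + error * 3 / 16);  // below-left neighbor  -> 118
Console.WriteLine(100 + error * 5 / 16);  // below neighbor       -> 131
Console.WriteLine(100 + error * 1 / 16);  // below-right neighbor -> 106

error = 143 - 255;                        // second step: 143 snaps to white
Console.WriteLine(100 + error * 7 / 16);  // right neighbor       -> 51
Console.WriteLine(131 + error * 3 / 16);  // below-left neighbor  -> 110
Console.WriteLine(106 + error * 5 / 16);  // below neighbor       -> 71
Console.WriteLine(100 + error * 1 / 16);  // below-right neighbor -> 93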

I have created a little helper class to represent the matrix:

private class ErrorMatrix
{
   public int Denominator;
   public Cell[] Cells;
   public class Cell
   {
      public int XOffset;
      public int YOffset;
      public int Numerator;
   }
}

Floyd-Steinberg

The Floyd-Steinberg matrix is defined like this:

ErrorMatrix floydSteinbergMatrix = new ErrorMatrix
{
   Denominator = 16,
   Cells = new[]
   {
      new ErrorMatrix.Cell() {XOffset = 1, YOffset = 0, Numerator = 7},
      new ErrorMatrix.Cell() {XOffset = -1, YOffset = 1, Numerator = 3},
      new ErrorMatrix.Cell() {XOffset = 0, YOffset = 1, Numerator = 5},
      new ErrorMatrix.Cell() {XOffset = 1, YOffset = 1, Numerator = 1},
   }
};

Then for each pixel:

byte b = buffer[index + bOffset],
     r = buffer[index + rOffset],
     g = buffer[index + gOffset];

RasterColor bestColor = FindClosestColor(
   new RasterColor(r, g, b), 
   ditheringPalette, 
   colorMatching);

var error = new int[image.BitsPerPixel == 24 ? 3 : 4];
error[rOffset] = r - bestColor.R;
error[gOffset] = g - bestColor.G;
error[bOffset] = b - bestColor.B;

foreach (ErrorMatrix.Cell cell in floydSteinbergMatrix.Cells)
{
   int targetX = x + cell.XOffset,
       targetY = y + cell.YOffset;

   if (targetX < 0 || targetX >= image.Width) continue;
   if (targetY < 0 || targetY >= image.Height) continue;

   ApplyError(targetX, targetY, error, cell.Numerator, floydSteinbergMatrix.Denominator);
}

// Set the best color
buffer[index + bOffset] = bestColor.B;
buffer[index + rOffset] = bestColor.R;
buffer[index + gOffset] = bestColor.G;
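
The ApplyError helper referenced above is not shown in these snippets. A minimal sketch of what it could look like, assuming it is declared as a local function alongside CalculateIndex so it can reach the same buffer, is:

// Hypothetical sketch of ApplyError: adds the weighted error to a neighboring
// pixel and clamps each channel to the 0-255 range, as described earlier.
void ApplyError(int targetX, int targetY, int[] error, int numerator, int denominator)
{
   int targetIndex = CalculateIndex(targetX, targetY);
   for (var channel = 0; channel < error.Length; channel++)
   {
      int newValue = buffer[targetIndex + channel]
                     + error[channel] * numerator / denominator;
      buffer[targetIndex + channel] = (byte)Math.Max(0, Math.Min(255, newValue));
   }
}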

Atkinson

Another example of error diffusion dithering is the Atkinson matrix created by Apple programmer Bill Atkinson. In this case, only 75% (6/8) of the error is distributed. This can result in the loss of detail in the highlights and shadows.

ErrorMatrix atkinsonMatrix = new ErrorMatrix
{
   //  - # 1 1     where # = pixel being processed, - = previously processed pixel
   //  1 1 1       and pixel difference is distributed to neighbor pixels
   //    1  
   // (n/8)
   Denominator = 8,
   Cells = new[]
   {
      new ErrorMatrix.Cell() {XOffset = 1, YOffset = 0, Numerator = 1},
      new ErrorMatrix.Cell() {XOffset = 2, YOffset = 0, Numerator = 1},
      new ErrorMatrix.Cell() {XOffset = -1, YOffset = 1, Numerator = 1},
      new ErrorMatrix.Cell() {XOffset = 0, YOffset = 1, Numerator = 1},
      new ErrorMatrix.Cell() {XOffset = 1, YOffset = 1, Numerator = 1},
      new ErrorMatrix.Cell() {XOffset = 0, YOffset = 2, Numerator = 1},
   }
};

Sky is the Limit

Now with the basic framework in place, any error diffusion matrix, custom palette, or color matching algorithm can be easily implemented and tested.
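
For example, the classic Jarvis, Judice, and Ninke matrix (not part of the original project; the weights below come from the published algorithm, with a denominator of 48) could be expressed with the same ErrorMatrix helper:

ErrorMatrix jarvisJudiceNinkeMatrix = new ErrorMatrix
{
   //  - - # 7 5     where # = pixel being processed, - = previously processed pixel
   //  3 5 7 5 3
   //  1 3 5 3 1
   // (n/48)
   Denominator = 48,
   Cells = new[]
   {
      new ErrorMatrix.Cell() {XOffset = 1, YOffset = 0, Numerator = 7},
      new ErrorMatrix.Cell() {XOffset = 2, YOffset = 0, Numerator = 5},
      new ErrorMatrix.Cell() {XOffset = -2, YOffset = 1, Numerator = 3},
      new ErrorMatrix.Cell() {XOffset = -1, YOffset = 1, Numerator = 5},
      new ErrorMatrix.Cell() {XOffset = 0, YOffset = 1, Numerator = 7},
      new ErrorMatrix.Cell() {XOffset = 1, YOffset = 1, Numerator = 5},
      new ErrorMatrix.Cell() {XOffset = 2, YOffset = 1, Numerator = 3},
      new ErrorMatrix.Cell() {XOffset = -2, YOffset = 2, Numerator = 1},
      new ErrorMatrix.Cell() {XOffset = -1, YOffset = 2, Numerator = 3},
      new ErrorMatrix.Cell() {XOffset = 0, YOffset = 2, Numerator = 5},
      new ErrorMatrix.Cell() {XOffset = 1, YOffset = 2, Numerator = 3},
      new ErrorMatrix.Cell() {XOffset = 2, YOffset = 2, Numerator = 1},
   }
};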

Get the Code

The project is a Visual Studio 2017 Windows Forms application. To run it, you will need LEADTOOLS v19 installed along with a valid LEADTOOLS license file.

Download the project

About

Gabriel Smith, Developer Advocate