How to calculate the Inverse Camera Native Matrix

In this post we’re gonna cover something really important when it comes to building a clean color pipeline. The issue with the majority of manufacturers (ARRI excluded) is that they don’t give you access to the native color space of the camera. Take Sony, for example: you can record in SGamut3.Cine or SGamut3, which many perceive as the camera’s untouched color space. The truth is that those color spaces are not the native color space of the sensor. The manufacturer takes the native color space of the sensor and applies a colorimetric 3x3 matrix to give us color spaces like SGamut3.Cine and SGamut3 for Sony, ARRI Wide Gamut RGB for ARRI, and so on and so forth. This matrix that the manufacturer applies creates some serious issues down the line, which become apparent with highly saturated colors. Apart from the neon sign, look at the difference in rendition of the small blue light in the bottom left corner.

Unfortunately, ARRI is the only manufacturer that lets you debayer ARRIRAW into the native color space, and it even provides the inverse matrix to go back from ARRI Wide Gamut RGB to camera native. The other manufacturers don’t disclose it, as they treat it as some sort of trade secret, which is a shame if you want to build a clean color pipeline. The good news is that there are methods to actually calculate the inverse matrix for any camera, as long as it can record RAW, ProRes RAW, CinemaDNG, or even raw stills (which is my preferred method and the one I’m going to show you today). I’m gonna first give an overview of the workflow and then guide you step by step through how to practically do it.

Quick overview with a Sony FX3 (this is the camera I own but this workflow works with any camera that can shoot raw):

  • We’re gonna shoot a color chart (a ColorChecker Classic).

  • We’re gonna record a video in SGamut3/Slog3 or SGamut3.Cine (I suggest SGamut3 over SGamut3.Cine, as it is already closer to the camera’s native space).

  • And we’re also gonna shoot a RAW still of the same chart, under the same conditions and camera settings (shutter speed, ISO, aperture).

  • The RAW still has to be developed without any color management applied (I’ll show you exactly how to do this step in a minute).

  • At this point we have a reference of the same chart as seen natively by the sensor and in the camera output color space (SGamut3/Slog3). So we just need to compute a matrix that maps the camera output color space to the native color space.

Since manufacturers use only a 3x3 matrix to go from native to camera output color space, the transform will be very accurate and will bring us back to the camera native color space. From there we will be able to develop a color pipeline specifically tailored to the sensor, one that won’t suffer problems in highly saturated areas like tail lights, neon signs, etc.
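To make the idea concrete, here’s a minimal sketch of the fit (my own illustration, not the getmatrix.py script itself; the file names are placeholders). Given the linear RGB values of the chart patches sampled from both images, a least-squares solve recovers the 3x3 matrix:

```python
import numpy as np

# Each file holds one row of linear RGB values per chart patch
# (24 rows for a ColorChecker Classic). File names are placeholders.
output_rgb = np.loadtxt("sgamut3_patches.txt")  # linearized camera output space
native_rgb = np.loadtxt("native_patches.txt")   # linear camera native space

# Least-squares solve of output_rgb @ X ~= native_rgb for the 3x3 matrix X.
X, residuals, rank, _ = np.linalg.lstsq(output_rgb, native_rgb, rcond=None)
M = X.T  # transpose the row-vector solution, so that native = M @ [R, G, B]

print("Inverse matrix (camera output -> native):")
print(M)
```

Because the forward transform really is just a 3x3 matrix, the residuals of this fit should come out tiny.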

How to do it practically (at the bottom of the post there’s a video that clarifies these steps)

Once we have our SGamut3/Slog3 capture of the chart and the raw still of the chart, we’re going to open the raw still in RawTherapee. Select the Neutral profile, and then in the Color Management section we’re gonna click “Save Reference Image” (WB enabled or disabled doesn’t make a difference, since we’re gonna balance it in linear later on in Nuke anyway). This will save a linear TIFF in the native color space of the sensor. To prepare the charts we’re gonna use Nuke (the free Non-commercial version). I wish this workflow could work in DaVinci Resolve, but the level of accuracy needed in the prep requires Nuke’s precision.

If you recorded the SGamut3/Slog3 (or whatever other gamut and gamma your camera uses) in H.264, it won’t open in the free version of Nuke, as the codec isn’t supported; in that case we’re gonna have to first open it in Resolve and export a TIFF from there.

Now we have 2 TIFF files: one is SGamut3/Slog3 and the other one native/linear. Once in Nuke, we’re gonna have to tag both of them as RAW (very important, do not forget this step).

In Nuke we’re gonna apply a colorspace transform node to the SGamut3/Slog3 image and convert it from Slog3 to linear. The native image is already linear, so we don’t need any transform there. (Press Tab to add the desired node.)
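If you want to sanity-check the conversion outside of Nuke, here’s Sony’s published S-Log3 linearization as a small Python function (a sketch; Nuke’s colorspace node does the same job):

```python
import numpy as np

def slog3_to_linear(x):
    # Sony S-Log3 code value (normalized 0..1) to scene-linear reflection,
    # per Sony's published S-Log3 formula.
    x = np.asarray(x, dtype=np.float64)
    return np.where(
        x >= 171.2102946929 / 1023.0,
        (10.0 ** ((x * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01,
        (x * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0),
    )

# Middle grey (18% reflectance) sits around S-Log3 code value 0.41:
print(slog3_to_linear(0.41))  # ~0.18
```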

Now we have to align the 2 linear images based on the middle grey patch. By pressing Cmd+Shift (Ctrl+Shift on Windows/Linux) and dragging the cursor, we create a sampling area that averages the pixel values of that area. We pull up a ColorCorrect node, and using the gain slider we bring the middle grey values to be precisely 0.1800 on all three channels (RGB). I mean precisely! This can take some time if you nudge the slider, manually entering numbers in small increments or decrements until you land precisely on 0.1800.
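A shortcut that saves the trial and error: in linear, gain is a plain multiplier, so the exact value to type into each gain field is just 0.18 divided by the sampled average (a sketch; the sampled values below are made up):

```python
# Per-channel gain that maps the sampled middle-grey average onto exactly 0.18.
sampled_grey = [0.2114, 0.2087, 0.2153]  # hypothetical averages from the sample area

for channel, value in zip("RGB", sampled_grey):
    print(f"{channel} gain: {0.18 / value:.6f}")
```

Enter the three results in the R, G and B gain fields, then re-sample to confirm you land on 0.1800.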

Once we’re done with this process on both images, we have perfectly aligned linear images of the same scene, and we’re ready to export the TIFF files and compute the matrix.

Since Nuke Non-commercial only allows you to export images up to 1920x1080, if your image exceeds that you just need to add a Reformat node set to 1920x1080.

To export the files we need a Write node. In the Write node we’re gonna select the save location, the file type (TIFF), the compression (none), and the data type (raw).

Now we’re gonna open the Python RGBSampling tool I shared with you. Once we open it, we need to load an image (you can also read multiple images at a time if they belong to the same dataset). Select the patch size, in this case 50 will do, and then click on interactive selection. Now we go on the image and interactively select the sampling points. If it’s unresponsive, give a little wiggle to the window and it will become responsive. Remember that the order of the selection matters: if you do multiple selections to compare 2 or more images, keep the order of the sampling the same. Then we can save the coordinates as a CSV file to reuse on the other image later. Choose the output text file’s saving directory and name, then click “read RGB float values”. To read the second image we need to restart the program, as I didn’t implement a clearing feature for the loaded images. At this point we have 2 .txt files with the RGB values from the 2 linear TIFF images.
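For reference, the sampling step boils down to averaging a square patch around each clicked point, something like this sketch (not the shared tool itself; the file name and coordinates are placeholders):

```python
import numpy as np
import imageio.v3 as iio

# Mean linear RGB of a square patch centered on each clicked (x, y) point.
image = iio.imread("sgamut3_linear.tif").astype(np.float64)
patch_size = 50
coords = [(412, 305), (512, 305)]  # placeholder patch centers, e.g. from the saved CSV

half = patch_size // 2
for x, y in coords:
    patch = image[y - half:y + half, x - half:x + half, :3]
    print(patch.reshape(-1, 3).mean(axis=0))
```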

We can then open the other Python script I shared (getmatrix.py). Load the SGamut3 .txt file as the source and the native .txt file as the target, and compute the matrix. Once we have the matrix, we can copy-paste the numbers into a Matrix node in Nuke to check that everything worked correctly. If it did, you’re gonna have 2 identical images.
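If you’d rather verify the result outside of Nuke, applying the matrix to the whole image is a one-liner (a sketch; the matrix values and file names below are placeholders, use the ones getmatrix.py printed for you):

```python
import numpy as np
import imageio.v3 as iio

M = np.array([[ 1.07, -0.05, -0.02],
              [-0.03,  1.04, -0.01],
              [ 0.01, -0.06,  1.05]])  # hypothetical values; use your computed matrix

# Apply the 3x3 matrix per pixel, which is what Nuke's Matrix node does.
image = iio.imread("sgamut3_linear.tif").astype(np.float64)[..., :3]
converted = np.einsum("ij,hwj->hwi", M, image)

native = iio.imread("native_linear.tif").astype(np.float64)[..., :3]
print("max abs difference:", np.abs(converted - native).max())
```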

We have successfully created the inverse matrix that takes us back to camera native space, ready to build a color pipeline on top of.

You can then use this matrix like so (example for Sony FX3 SGamut3/Slog3):

SGamut3/Slog3 ——— transform from Slog3 to linear ——— inverse matrix ——— transform back to Slog3 ——— custom color pipeline.
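In code, that chain could look like the following sketch (reusing slog3_to_linear from the earlier sketch; the inverse encoding is again Sony’s published formula):

```python
import numpy as np

def linear_to_slog3(y):
    # Scene-linear reflection back to an S-Log3 code value (normalized 0..1),
    # per Sony's published formula.
    y = np.asarray(y, dtype=np.float64)
    safe = np.maximum(y + 0.01, 1e-10)  # guard: np.where evaluates both branches
    return np.where(
        y >= 0.01125,
        (420.0 + np.log10(safe / (0.18 + 0.01)) * 261.5) / 1023.0,
        (y * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0,
    )

def to_camera_native_log(slog3_rgb, M):
    # SGamut3/Slog3 -> linear -> inverse matrix -> back to Slog3.
    # The custom color pipeline then sits downstream of this.
    linear = slog3_to_linear(slog3_rgb)  # from the earlier sketch
    native = np.einsum("ij,...j->...i", np.asarray(M), np.asarray(linear))
    return linear_to_slog3(native)
```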

This is what Steve Yedlin does for the ARRI Alexa, for which ARRI provides the inverse matrix. This is a way to obtain it for other cameras.

Keep this in mind: once the image has gone through the inverse matrix, you’re not in a standardized color space anymore. This process is useful when you wish to build a clean pipeline based on a given dataset, as in film profiling for example. You cannot use the inverse matrix and stick to standardized color pipelines and color management, as they don’t expect camera native color space.

PS: This technique will work for computing any kind of matrix, and it works very well when it comes to camera matching. Since digital cameras are very similar to one another (more so than manufacturers would like us to think), a simple matrix can go a long way toward minimizing the differences between 2 digital cameras.
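For camera matching, the fit is exactly the same, just with two cameras’ linearized patch values in place of output/native (file names are placeholders):

```python
import numpy as np

cam_a = np.loadtxt("camera_a_patches.txt")  # linear patch values, camera A
cam_b = np.loadtxt("camera_b_patches.txt")  # linear patch values, camera B

# 3x3 matrix that maps camera A's colors onto camera B's.
M_match, _, _, _ = np.linalg.lstsq(cam_a, cam_b, rcond=None)
print(M_match.T)
```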

Google Drive link to the Python scripts used in this tutorial: https://drive.google.com/drive/folders/1hrApaB_ePymslU2tb22O-qGqllzBUGzb?usp=share_link

Below is a short video tutorial:
