PortraMatch

Science Driven Film Profiling

OK, another guy trying to sell you a film emulation look that will probably disappoint you. But before buying anything, and before purchasing a LUT or preset ever again, keep reading!

We’re gonna discuss what film emulation really is, how it’s done when it’s done properly, and you’re gonna realize that if you bought some presets or LUTs sold as film emulation, well—you’ve probably been lied to.

Over the last 3 years I’ve been obsessed with trying to emulate film digitally, both for video and photography, for 2 reasons:

First: Film Looks Amazing

Second: Film is so f*****g expensive!!!

At the beginning, I was excited—then highly discouraged—by the fact that true emulation felt impossible to achieve. (And, honestly, it kinda made me want to give up.)

Since that moment, after many hundreds of hours of research and trial and error, I’ve fine-tuned a complete workflow that now allows me to profile film stocks with an incredible degree of accuracy.

If you want to stick around for a moment, I’ll walk you through some of the basics of what’s actually required to profile a film stock.

So the first step to profile a film stock (to profile anything really: a camera, a guitar amplifier, your grandma, anything…) is to gather some data about what you want to profile. And while profiling grandmas requires many grandmas, profiling a film stock requires colors… many colors. The most common tool for gathering them is a color chart.

Of course, I’m not saying anything new here, and many people have already done something similar: you take a color chart, shoot it on film and digital side by side, bring both into the computer, and do the match.

There is a big problem though. Even if you manage to match the 24 colors of a ColorChecker, or 100 colors, or 200 colors, the question is: what happens to all the other colors that you didn’t match?

It doesn’t take too long (it actually took me quite some time) to figure out that this is not representative of all the colors that exist. An emulation created by matching those 24 colors, or even 300 color samples, is only going to be accurate for those specific samples and for the light used to illuminate them. The moment we change the scene and lighting conditions, the emulation won’t be accurate.

So basically, we do the match, it looks great and accurate, but as we point the camera at a different scene… well… there is no match anymore :(

So how do you create a profile that is actually accurate under many different lighting conditions and scenes? How do you capture the soul of the film stock?

Well… it starts with gathering data not from 24 or 100 or 200 color samples, but from tens of thousands of color points, shot under different lighting conditions and exposures: a dataset of around 20k color samples (and I’m experimenting with expanding it even more). And then what do I do? I start manually matching those 20k color points so that in around 5 years I’ll have built an amazing film profile. Actually, no!!

I turn all the data into numbers and feed it to a neural network that, in about 40 minutes, outputs an incredibly accurate LUT matching digital to the film stock.

Why use a neural network? Why complicate things to this point to profile a film stock?

Film, being an analogue medium, has a non-linear response to light and color, which means, as I already explained, that matching a small set of colors is not enough to capture the film’s response across a wide variety of scenes and lights. This is where the neural network comes in.

The power of a neural network is that, when fed enough data, it can generalize to unseen data (in this case, colors). This means it can predict how a color not in the dataset should look, creating a profile that works not only for the sampled colors but for a vast range of real-world conditions.
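To make the idea concrete, here is a heavily simplified sketch of the approach, not my actual pipeline: a tiny neural network learns a digital-to-film color mapping from sample pairs, and the learned mapping is then evaluated on a regular RGB grid, which is exactly what a 3D LUT is. The "film response" below is a made-up toy function (the network sizes, learning rate, and LUT resolution are illustrative assumptions too) so that the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for real (digital RGB, film RGB) pairs; the real dataset
# would come from the charts described above, not from a formula.
def toy_film_response(rgb):
    return np.clip(rgb ** 0.8 + 0.05 * np.sin(3.0 * rgb), 0.0, 1.0)

X = rng.uniform(0.0, 1.0, size=(2000, 3))   # "digital" RGB samples
Y = toy_film_response(X)                    # "film" RGB targets
Xc = X - 0.5                                # centering helps plain GD

# Tiny two-layer MLP trained with full-batch gradient descent.
H = 32
W1 = rng.normal(0.0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 3)); b2 = np.zeros(3)

lr = 0.2
for _ in range(4000):
    h = np.tanh(Xc @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - Y                          # dMSE/dpred (up to a factor)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1 = Xc.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict(rgb):
    return np.tanh((rgb - 0.5) @ W1 + b1) @ W2 + b2

mse = float(np.mean((predict(X) - Y) ** 2))
print(f"training MSE: {mse:.5f}")

# "Baking" the learned mapping into a 3D LUT: evaluate it on a regular
# RGB grid (17x17x17 is a common LUT resolution; red varies fastest,
# as in the .cube format).
N = 17
grid = np.linspace(0.0, 1.0, N)
b, g, r = np.meshgrid(grid, grid, grid, indexing="ij")
lut = np.clip(predict(np.stack([r, g, b], axis=-1).reshape(-1, 3)), 0, 1)
print("LUT entries:", lut.shape)
```

The point of the sketch is the shape of the workflow: once the network fits the sampled pairs, the grid evaluation gives a value for every LUT entry, including colors that were never in the dataset.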

The dataset was created by shooting the ColorChecker Classic under 3200K tungsten, 5600K daylight, and 7500K cooler daylight, from 5 stops underexposed to 5 stops overexposed in 1-stop intervals. On top of that, I shot patches at hue intervals of 30 degrees at different saturation levels to cover the whole color space, plus a custom backlit chart to cover the highest-saturation colors.
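As a rough sketch of the dataset’s bookkeeping (the saturation levels below are my assumptions for illustration, not the real chart layout; only the exposure bracket and hue spacing come from the description above):

```python
import colorsys

# ColorChecker portion: 24 patches x 3 illuminants x 11 exposures
# (from -5 to +5 stops in 1-stop steps).
illuminants = ["3200K tungsten", "5600K daylight", "7500K daylight"]
exposures = list(range(-5, 6))
colorchecker_samples = 24 * len(illuminants) * len(exposures)
print(colorchecker_samples)   # 792

# Hue/saturation sweep: one patch every 30 degrees of hue at a few
# saturation levels (these specific levels are assumed).
hues = [h / 360.0 for h in range(0, 360, 30)]
sats = [0.25, 0.5, 0.75, 1.0]
sweep = [colorsys.hsv_to_rgb(h, s, 1.0) for h in hues for s in sats]
print(len(sweep))             # 48 patches per chart
```

Repeating sweeps like this across illuminants and exposures is how a chart-based shoot climbs from hundreds of samples into the tens of thousands.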

Just to show you how powerful this approach is, the gallery below shows validation shots, meaning that the colors of the objects in frame were never fed into the neural network. The gallery shows objects shot on the Sony FX3 with a standard Rec.709 conversion, then the Portra 400 scan, and finally the FX3 matched to Portra through the LUT generated by the neural network.
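If you want to put a number on a match like the one in those validation shots, a perceptual color-difference metric is the natural tool. Below is a minimal CIE76 delta E sketch (sRGB in, D65 white point); the sample patch values at the bottom are made up for illustration, not measurements from my gallery.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear light
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])     # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e76(rgb1, rgb2):
    """CIE76 color difference between two sRGB colors."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

# Made-up example: a film-scan patch vs the LUT-matched digital patch.
print(delta_e76([0.62, 0.48, 0.41], [0.63, 0.47, 0.42]))
```

As a rule of thumb, a CIE76 delta E around 1 is near the threshold of what a trained eye can distinguish, which makes it a convenient yardstick for validation patches.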

I was incredibly impressed by the results, but I know what you’re thinking: what about the skin tones!!! And you’re right, because skin tones are what we care about the most. Even though there is no reason to believe that skin tones behave any differently than the complex colors of an apple (and as we’ve seen in the validation frames, the match is really, really good), I understand that there can’t be belief without proof in this case!

Unfortunately, I didn’t shoot skin tones as validation the day I shot the charts, simply because I ran out of film (the dataset required 5 rolls). But, you know… I couldn’t write this article if I didn’t have some skin tones to show. And here’s the beautiful thing: 3 weeks after I shot the charts for the profile, I bought another roll of Portra 400 to test 2 things.

First: will the profile work for skin tones under a different spectrum of light than the one used in the dataset? (aka, did the neural network generalize well?)

Second and most important: will the profile work even considering film labs development inconsistencies?

What do I mean by this? Well, the developing chemicals change over time as they develop roll after roll, and at some point they need to be replaced. So unless you develop all your rolls on the same day (which yields very consistent results across the many rolls), they will drift: if you shot a scene under the same light and settings and developed the same film stock 2–3 weeks apart, you would notice that the results don’t look exactly the same. The film itself would require some balancing to compensate for development inconsistencies. Can the LUT generated by the neural network sustain its accuracy even after balancing?
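A balancing step of the kind described can be as simple as per-channel gains computed from a patch that should be neutral. This is just an illustrative sketch with made-up numbers, not the actual correction I applied:

```python
import numpy as np

# Made-up values: the same grey patch as scanned from the reference roll
# and from a roll developed weeks later with drifted chemistry.
reference_grey = np.array([0.42, 0.40, 0.38])
drifted_grey = np.array([0.45, 0.40, 0.35])

gains = reference_grey / drifted_grey        # per-channel correction

def balance(rgb):
    """Apply the per-channel gains to any color from the drifted roll."""
    return np.asarray(rgb, dtype=float) * gains

print(balance(drifted_grey))                 # lands back on the reference grey
```

The question the skin-tone test answers is whether the profile still holds once a global correction like this has been applied on top of it.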

Judge for yourself from the gallery down below:

This process has been an intense but rewarding journey, proving that true film emulation isn’t just about slapping on a LUT—it’s about real data, real science, and a workflow that respects the complexity of film. This Portra 400 profile is a step toward bridging the gap between digital and analog, proving that true film emulation is possible with the right approach. And this is just the beginning!

If you wish to purchase the Portra 400 profile, click the button below to go to the product page and check the specs of the profile.