In this guide we will go over how to get the best final image from colour negatives by using camera scanning in conjunction with purpose-made software. By the end of this short guide you will not only be able to convert an image using Negative Lab Pro, you will also understand how to get the best starting point during the capture phase and how to adjust those images to look the way you want them to.
If you need information about how to scan your film in the most efficient way, we suggest you check out our Technique Guide.
Why do we need to 'convert' negative film?
When shooting negative film, the film itself is just one step towards the final image. While photographers historically printed the negative onto specially made photosensitive paper, most photographers today choose to scan or digitize their negatives and make a positive, viewable image directly from that digital file instead. This allows more flexibility and better quality, and is generally faster and less expensive than printing using a chemical process and then scanning the print for digital sharing. However, this process has many pitfalls you want to avoid, and it generally requires some kind of software to help you get a pleasing final result.
You can click any of the sections below to jump straight to it. At the bottom of each main section you will also find a button to jump back here.
The Three Steps to Good Negatives
In the capture step, the goal is to record as much data as possible from the original material, not to match the original scene. To do this, we have a set of requirements and settings that will help your camera get all the information it can from the negative film. Matching this data to the original scene happens only in the final two steps of the process. It is crucial to understand that scanning is just one step in a long chain from original scene to final viewable image, and that the output of each intermediate step looks nothing like the original scene - this is why we scan the way we do.
We generally have these steps in our photographic process:
Scene through camera → chemical processing → digital capture → digital processing → manual adjustments
Learn more about all the steps of scanning in the Technique Guide:
Firstly, we need to make sure the light used to illuminate the negative renders a natural, full spectrum. The CRI (colour rendering index) scale was created to evaluate this - it scores how faithfully a light source renders colours. Higher numbers mean closer to 'perfect' rendering; for our purposes a rating of 95 out of 100 is ideal, and anything over CRI 90 is acceptable. A low-quality LED light fails to illuminate certain parts of the spectrum, leaving the camera with less colour information to capture. Getting a strong, CRI 95 light is therefore important.
Next, we need to make sure the camera is set up to capture and save as much information as possible. The two important settings are file type and exposure:
For file settings, you want the setting that saves the most information from your sensor. Almost all cameras today are able to capture in a file format often called RAW, which saves all the information from your sensor without processing or degrading it in any way. You should therefore set your camera to this. The file-type name you see when you import the files to the computer varies from manufacturer to manufacturer (NEF, CR2, CR3, RW2, ARW, ORI, ORF). Some manufacturers offer RAW output that is not a true, full-size file. Make sure you don't have settings enabled that say things like "compressed RAW" or that scale down the number of pixels from the full resolution of the sensor. With these settings in place, your camera should give you the very best it has to give.
Exposure is the second part of capturing all the information with the best possible quality. Underexpose and the densest parts of the negative will pick up more digital noise; overexpose too much and you start losing information in the least dense parts of the negative. We therefore have to find a balance between the two. The most obvious approach is simply to follow your digital camera's meter, but because of the orange mask of negative film, this is not what we want. Instead, we want the exposure that makes the mask appear white, because that is what it is - unexposed, white. We should therefore overexpose the image compared to what the camera tells us.
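The overexposure involved can be expressed in stops. As a quick sketch of the arithmetic (the function name here is ours, purely for illustration): each +1 stop doubles the light reaching the sensor, which at a fixed aperture and ISO means doubling the shutter time the meter suggests.

```python
# Each stop of exposure compensation doubles (or halves) the light.
# At fixed aperture and ISO, that means scaling the shutter time.
def compensated_shutter(metered_seconds, stops):
    """Return the shutter time after applying `stops` of compensation."""
    return metered_seconds * (2 ** stops)

# +1 stop of overexposure: a metered 1/125s becomes 2/125s (about 1/60s)
print(compensated_shutter(1 / 125, 1))
```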
The next question is: should you use a consistent scanning exposure across the roll, or optimise the exposure for each frame? People disagree about this, and both approaches have drawbacks and benefits:
Consistent exposure across the roll (manual mode) gives you the opportunity to copy the conversion settings to all the images of the same scene to get a more consistent look. This would be useful if you shot them all in the same place under the same lighting, such as in a studio shoot. The downside is that you risk losing information in the highlights or shadows.
Adaptive exposure on each picture (aperture priority mode) will give you the most information out of each and every negative and is what scanners do. This can be very useful for most walk-around pictures where scene matching is less important and the original film exposure is less consistent.
How to do Consistent Exposure
To expose with consistent exposure, frame and focus your image. Set your camera to manual mode and fixed ISO. Then set your white balance to tungsten to negate most of the colour from the negative orange mask. This has no influence on the final image but helps you judge the final exposure. Now change the exposure until the rebate (non-exposed area) becomes almost completely white. It might be helpful to take a picture, import it to the computer and evaluate it there. Once you have determined an exposure, keep that exposure the same across the roll.
How to do Adaptive Exposure
For most people we would recommend adaptive exposure, as this will yield excellent results on almost any type of film with any kind of exposure. This is also what scanners typically do, for the same reasons mentioned above. To do this, make sure your camera's white balance is set to tungsten. Again, this has no influence on the final image, but it helps the camera find the correct exposure by compensating for the orange mask. Set your camera to aperture priority with your desired aperture, the lowest ISO setting and +1 exposure compensation. You can now photograph the whole roll. With adaptive exposure, you also want to make sure no sprockets are showing and that the black of your film holder is not visible, as these will confuse the camera meter and lead to wrong exposures. If you are scanning with sprockets showing, use even more overexposure than +1.
To summarise what makes up a good capture, follow the list below:
Use a bright CRI 90+ light, preferably CRI 95
RAW: not compressed or reduced in size

For consistent exposure:
White balance: Tungsten
Exposure setting: Manual
Change shutter speed until the rebate appears almost completely white

For adaptive exposure:
White balance: Tungsten
Exposure setting: Aperture Priority
Exposure compensation: +1 (more if scanning with sprockets)
Frame so none of the holder is showing if possible
The conversion process takes a digital file of a negative from a flat, ugly image with reversed colours to your beautiful final picture. The only step after that is to make your final adjustments. The reason we need a proper conversion is that negative film, despite its name, is not a direct negative of the image. Instead, it's a complex set of dyes whose densities are meant to work together with the spectral response of the paper it is printed onto to produce the final scene. In plainer terms, this means that the inverted image is too blue (due to the orange cast), very low in contrast, and that the colour dyes cannot simply be flipped to positive, even once the contrast has been adjusted and the orange cast removed.
Flipping the curve, you get something like this:
Instead, we use software that tries to balance the different colour channels in the shadows, midtones and highlights of an image to achieve the correct look. To mention a few popular ones, we have Negative Lab Pro, FilmLab, Grain2Pixel, Negmaster, Darktable's Negadoctor and RawTherapee (Film Negative). This is a process that can be done by hand, but even those who do it well will take 5-10 minutes per image, and even then the result is typically not as good as what most of these pieces of software produce.
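As a rough illustration of why conversion is more than a simple flip, the sketch below (a simplification of our own, not the actual algorithm of NLP or any other tool) processes a single pixel: it divides out the orange mask using the film base colour, inverts the result, and applies a simple gamma curve for contrast. Real converters balance each channel separately across shadows, midtones and highlights.

```python
# Simplified, illustrative negative-to-positive conversion for one pixel.
# Not how NLP actually works - just the core idea. Values are 0.0-1.0.

def invert_pixel(rgb, base_rgb, gamma=2.2):
    """Convert one linear RGB negative pixel to a rough positive."""
    positive = []
    for value, base in zip(rgb, base_rgb):
        masked = min(value / base, 1.0)   # divide out the orange mask (white-balance on the rebate)
        inverted = 1.0 - masked           # flip negative densities to positive
        positive.append(inverted ** (1.0 / gamma))  # simple gamma/contrast curve
    return positive

# The unexposed rebate is strongly orange; a pixel equal to the base
# converts to pure black, as it should.
base = (0.95, 0.70, 0.50)
print(invert_pixel((0.80, 0.55, 0.40), base))
```

Note how the per-channel division handles the mask: without it, a plain `1.0 - value` inversion would leave a heavy blue cast, which is exactly the problem described above.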
If you want to find out which one suits your needs best, we suggest you look at the different options in the Gear Guide.
In the following section we will go through conversion with Negative Lab Pro. Even if you are not using that software, there will be commonalities, as these programs all work in roughly the same way. However, you should always check the user guide for the software you are using.
Most software has a step for white balancing against the rebate (non-exposed area of the film) to help with eliminating the orange mask. To do this, make sure that some of the rebate is showing in your scan. With a VALOI holder this should be no problem as long as you scan with enough room. Grab the white balance eye dropper and click on the orange border. You will see the colour change from orange to almost white.
Left: Before white balancing, Right: After white balancing
If you are scanning a whole roll, you can 'sync' the white balance across the roll so you don't have to repeat the process on each image. In the Develop module in Lightroom, make sure the image you just white balanced is selected, then highlight all the pictures you want to sync the setting across - either select everything in the folder (CTRL/CMD + A) or use SHIFT- or CTRL-clicks to select multiple images. Now press the "Sync" button in Lightroom and tick only white balance in the menu as shown below - all the images should now get the same white balance.
You can now proceed to the next step.
Most conversion software will ask you to crop out the border. Don’t worry, when using Lightroom and Negative Lab Pro (NLP) you can always ‘uncrop’ it once you have converted to get the border again.
To crop, go to the Develop module and select the crop tool as shown below. Then select the area you want.
If you have done your capture phase right, you can now do the same as when syncing white balance and sync the crop across all the images so you don't have to crop each one individually. Capturing your images so that the frame sits in the exact same place every time is incredibly important if you want to increase efficiency. It might take slightly longer to capture, but you save around 5x that time when processing your images.
An alternative to cropping in the step above is to use the "border buffer" setting in NLP. The software evaluates the whole image, so if the border is included it might mess with the algorithm. However, if you set a border buffer of 10%, it will ignore the outermost 10% of pixels on each edge. This is usually enough to get inside the image area. If you have a lot of border, you can skip cropping before converting and use a 20% border buffer; if you are scanning with sprockets, use a 30-40% border buffer to be sure.
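Conceptually, a border buffer just trims a percentage of pixels from every edge before the image is analysed. The sketch below (our own illustration with a made-up function name, not NLP's code) shows the idea with an image represented as a list of rows.

```python
# Illustrative "border buffer": drop a percentage of pixels from each
# edge so film borders and sprocket holes don't skew the analysis.

def apply_border_buffer(image, buffer_pct):
    """Return the central region of `image`, dropping `buffer_pct`
    percent of rows and columns from every edge."""
    rows, cols = len(image), len(image[0])
    dr = int(rows * buffer_pct / 100)
    dc = int(cols * buffer_pct / 100)
    return [row[dc:cols - dc] for row in image[dr:rows - dr]]

# A 10x10 "image" of (row, col) tuples: a 10% buffer removes one
# row/column from each edge, leaving the central 8x8 region.
image = [[(r, c) for c in range(10)] for r in range(10)]
cropped = apply_border_buffer(image, 10)
print(len(cropped), len(cropped[0]))
```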
The final conversion is done automatically, but there are a few settings worth keeping in mind before you hit convert.
Choose “Source: Digital camera”. This prepares NLP for this type of scan, as camera files look quite different from the output of a dedicated scanner.
Choosing the “Color Model” is up to you unless you are converting Black and White. The different alternatives have slightly different looks, so we suggest you convert the same image with each one to find out which one you like the most.
Pre-saturation controls the saturation of the negative before conversion. According to the maker this affects not only the saturation (vividness of colour), but also the hues and amount of colour separation. Start on default and try different settings to find the best one for you.
When you are ready to convert, click the “CONVERT NEGATIVE(S)” button (not the ‘Apply’ button). It will take a few seconds per image, then finish and open a menu where you can make adjustments. However, the process is not quite done: as with any image, you will want to do a few final things to make it look the way you want. We will cover some points regarding that in the next section.
When working in Lightroom and NLP, one of the benefits often mentioned is that you can make non-destructive adjustments. While this is true, making adjustments in Lightroom directly on the converted file is tricky, and the best way to work is to use the limited tools within NLP.
The reason it is tricky to do the adjustments in Lightroom is that it still thinks you are editing a negative, so the curve is inverted and all the controls are inverted. For example, if you want darker blacks you have to push the “Whites” towards the right instead of the “Blacks” to the left - very confusing if you are not used to it!
We recommend making simple adjustments using the (admittedly cumbersome and confusingly differently labelled) controls in the NLP interface. Then, if you need to make finer adjustments or local adjustments such as dodging and burning, you can tick the “make copy” button at the bottom of the “Edit” tab in NLP, making sure that your “Positive Copy Settings” under the “Advanced” tab are set to 16bit TIFF. When you click “Apply”, NLP will export a positive TIFF that is easy to make further adjustments on, as it behaves like a normal image. Be warned that this generally makes the file 3-4 times larger, as these are uncompressed TIFF files.
Adjusting White balance
NLP does give you quite pleasing colours, but it will usually struggle to get the white balance right, especially if the scene contains a lot of a single colour, such as a blue sky or a green forest. In a forest, NLP will see a lot of green and try to compensate by adding more magenta. The result is that the greenery in your forest looks dull, while the tree trunk and earth look magenta. When this happens, you can try cycling through the different automatic presets for white balance in the NLP “Edit” tab, but you might have to adjust it manually. If you do, look for a neutral area that you know well, such as grey asphalt or the brown earth in a forest. Adjust while looking for that neutral, then evaluate. If you stare at it too long, your eyes will adapt and you will be blind to the colour cast - look at something else for a few seconds or a minute and come back to it. This sounds complicated, but you will get the hang of it quickly and start seeing the relationship between temperature (blue/yellow) and tint (green/magenta). The process is exactly the same one that those printing colour film in the darkroom have to deal with.
Your image should now be quite close to your final result. Remember that converting negative film is an interpretive process with no single right answer, and that you should make the image look the way you want it to rather than the way you have heard the film should look.
If you want further instructions on how to use Negative Lab Pro, the creator has written quite extensively about all the different functions of it on his website. We highly recommend you look through it to get the best out of this piece of software.
If you happen to use some other software, we likewise recommend that you look at all the material its makers have produced, as this is the only way to get the most out of it.
For other options you can have a look at our website Gear Guide under Software:
Before leaving, you should check out our Technique Guide for how to get the most out of your setup.
We have gone through the three main steps of getting good scans and good colour from your negatives. Perhaps the most important work is the groundwork done during capture, and that applies regardless of what software you use. If you do use Negative Lab Pro, it is a fantastic tool with many possibilities, but it is important that you explore and learn it if you want the best results. While these tools have improved a lot over the past 10 years, they are still not perfect. As camera scanning becomes more commonplace, they will also continue to improve.