
We take dozens of them a day: to record happy moments, to remember details we might forget, to compare our outfits, to choose the color of our new couch. Today, taking a picture is as ordinary a gesture as there is.

It's true, we don't think twice before pulling out our smartphone's camera. In its two centuries of existence, photography has established itself as a reliable and simple way to capture reality. Its main advantage: it shows things as they are. But with the rapid progress of retouching algorithms, it is getting harder to believe that things are still that simple.


Worries about retouched photographs have been around for the last two decades, but they have gathered pace in recent years. High-performance apps can retouch your selfie in seconds. And that's not all: realistic photos can now be generated entirely artificially. When it comes to photos, we are tempted to think in terms of a 'before' and an 'after'. Before, there were trustworthy photos, made by photographers who put a little bit of humanity into them. After, there are fake photos, churned out by machines...

It is tempting to think that way... but if you know how a camera really works, you soon realize that things are far from that simple. What if cameras, even the most basic ones, had always been a little deceitful?


The magnificent journey of light particles
Let's take a look at this familiar process, which has allowed you to keep souvenirs of all your birthdays, or to immortalize your cat's antics. When you think about it, it's impressive: how can you capture a given scene exactly as it was? Well, you can't, not exactly. Your digital camera's sensor converts light into an electronic signal.

Let's be a little clearer: a camera lens is a combination of converging and diverging lens elements, each with a particular role. In a reflex (or DSLR) camera, light enters the lens, passes through the lens assembly, and is reflected by a mirror. An image appears in the viewfinder. When you press the shutter button, the mirror retracts, allowing light to reach the sensor (this is why you can't see anything in the viewfinder while the picture is being taken). The image is thus formed on the flat surface of the sensor. The sensor is photosensitive: it converts light particles (photons) into electrical charges (electrons). This happens thanks to electronic components called photosites: each photosite indicates the amount of light that hits it. In practice, each photosite corresponds to a pixel of the photo. By gathering the information collected by all the photosites, it becomes possible to recreate the image. What was originally optical information becomes digital information. Simple, right?
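
To make that last step concrete, here is a minimal sketch in Python of how photosite readings might become pixel values. The photon counts and the saturation capacity are made-up numbers for illustration, not real sensor specifications.

```python
import numpy as np

# Hypothetical photon counts reported by a tiny 4x4 grid of photosites.
photon_counts = np.array([
    [1200,  800,  950, 3000],
    [ 400, 2200, 1800,  600],
    [ 900, 1500, 2600,  700],
    [3100,  500, 1100, 2000],
])

# Assumed "full well" capacity: the count at which a photosite saturates.
FULL_WELL = 4000

def photosites_to_pixels(counts, full_well=FULL_WELL):
    """Scale raw photosite counts into 8-bit pixel values (0-255)."""
    normalized = np.clip(counts / full_well, 0.0, 1.0)  # fraction of saturation
    return np.round(normalized * 255).astype(np.uint8)

pixels = photosites_to_pixels(photon_counts)
print(pixels)  # each photosite has become one grayscale pixel value
```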


An incomplete vision of reality
So much for the general concept. In practice, it is slightly more complicated. The goal of the camera is not to capture all the photons entering the lens. No, the goal is to recreate an image that resembles what our eyes experience. It is therefore necessary to exclude rays that are imperceptible to humans: ultraviolet and infrared.

But that's not all... These famous photosites, which collect the light, are not sensitive to the wavelength of the photons they receive. In other words, they are blind to color: they see the world in black and white.

How, then, is color photography possible? A filter is placed in front of each photosite that only lets through photons of a particular color: red, green, or blue. Since we know the location of each filter, we know which color each photosite corresponds to. This array of small filters is called a Bayer filter (with some exceptions: Fuji cameras use another type of filter array, and Sigma builds sensors whose photosites capture all three colors). You may not know it, but the human eye is more sensitive to green than to other colors. So the Bayer filter has twice as many green filters as red or blue ones. This is another example of "image manipulation" aimed at making the camera's images resemble what our eyes see. The pictures produced by our camera are already "filtered"; if they were not, we would not recognize them.
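
As an illustration, here is a small sketch of the standard RGGB Bayer layout in code: each photosite is assigned a single color, and the green sites are twice as numerous as the red or blue ones. This is a toy representation of the common pattern, not any manufacturer's actual design.

```python
import numpy as np

def bayer_pattern(height, width):
    """Return a matrix of channel labels for a standard RGGB Bayer mosaic."""
    pattern = np.empty((height, width), dtype="<U1")
    pattern[0::2, 0::2] = "R"  # red sites on even rows, even columns
    pattern[0::2, 1::2] = "G"  # green sites appear on every row...
    pattern[1::2, 0::2] = "G"  # ...so there are twice as many of them
    pattern[1::2, 1::2] = "B"  # blue sites on odd rows, odd columns
    return pattern

print(bayer_pattern(4, 4))
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]
```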

Rebuilding reality
At this stage, our photons have already traveled a lot. But the image we get is still far from the one that will end up on our screen. It is a patchwork of pixels in varying intensities of red, green, or blue. To smooth all of this out, the camera applies a process known as demosaicing. This process, which is the first step in converting a RAW file into a JPEG-type image, can be performed by a variety of algorithms: each camera manufacturer has its own favorites.

This means that even at the most basic stages of creating an image, editorial choices must be made to optimize the rendering. Different demosaicing algorithms give different results.
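
To give an idea of what demosaicing involves, here is a rough sketch of the simplest approach, bilinear interpolation: the two color channels missing at each pixel are estimated by averaging the neighboring photosites that did record them. Real cameras use far more elaborate, proprietary algorithms; the mosaic below is purely illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Naive bilinear demosaicing of an RGGB mosaic (one measurement per photosite)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red photosites
    masks[0::2, 1::2, 1] = True   # green photosites (even rows)
    masks[1::2, 0::2, 1] = True   # green photosites (odd rows)
    masks[1::2, 1::2, 2] = True   # blue photosites

    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    for c in range(3):
        sampled = np.where(masks[..., c], raw, 0.0)
        num = convolve(sampled, kernel, mode="mirror")
        den = convolve(masks[..., c].astype(float), kernel, mode="mirror")
        estimated = num / np.maximum(den, 1e-12)
        # Keep the measured value where this channel was recorded,
        # use the neighborhood average everywhere else.
        rgb[..., c] = np.where(masks[..., c], raw, estimated)
    return rgb

# Tiny made-up mosaic: each value is what one photosite measured.
raw = np.random.default_rng(0).uniform(0, 1, size=(4, 4))
print(demosaic_bilinear(raw).shape)  # (4, 4, 3): three channels per pixel
```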


After demosaicing, another editorial process comes into play: white balance. An algorithm detects the areas of the photo that should appear white, assigns them a neutral value, and adjusts the other colors accordingly.
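
Here is a minimal sketch of one classic approach in that spirit, sometimes called the "white patch" assumption: treat the brightest values in each channel as the reference white and rescale the channels so that this reference becomes neutral. Actual cameras combine scene analysis with per-manufacturer heuristics; this is only an illustration.

```python
import numpy as np

def white_patch_balance(rgb, percentile=99):
    """Rescale each channel so the brightest areas (assumed white) become neutral."""
    # Reference 'white' per channel: a high percentile, to ignore stray hot pixels.
    white = np.percentile(rgb.reshape(-1, 3), percentile, axis=0)
    balanced = rgb / np.maximum(white, 1e-12)   # reference white maps to (1, 1, 1)
    return np.clip(balanced, 0.0, 1.0)

# Toy image with a bluish cast: blue values are systematically higher.
rng = np.random.default_rng(1)
image = rng.uniform(0, 1, size=(8, 8, 3)) * np.array([0.7, 0.8, 1.0])
corrected = white_patch_balance(image)
print(corrected.reshape(-1, 3).max(axis=0))  # channels now peak at similar levels
```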

These two processes are the most basic ones. Depending on how advanced it is, the camera may apply others, such as correcting the exposure or removing noise. It also sometimes corrects geometric distortions due to the curvature of the lens.
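
Lens distortion correction, for instance, is often modeled as a radial polynomial: each pixel is remapped according to its distance from the optical center. The sketch below uses a simple two-coefficient radial model with made-up values; real cameras ship calibrated profiles for each lens.

```python
import numpy as np

def undistort_points(points, center, k1, k2):
    """Remove simple radial distortion from pixel coordinates.

    points: (N, 2) array of distorted coordinates; center: optical center.
    k1, k2: radial distortion coefficients (illustrative values, not a real lens profile).
    """
    shifted = points - center
    r2 = np.sum(shifted**2, axis=1, keepdims=True)
    # Approximate inverse of the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4).
    correction = 1.0 + k1 * r2 + k2 * r2**2
    return center + shifted / correction

pts = np.array([[100.0, 80.0], [320.0, 240.0], [600.0, 400.0]])
print(undistort_points(pts, center=np.array([320.0, 240.0]), k1=-2e-7, k2=1e-13))
```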

This means that the photo the camera delivers is, technically, already manipulated. Editing does not start at the post-production stage. Post-production merely corrects the imperfections that the automated retouching could not prevent.

No such thing as unedited photos
In September 2020, large wildfires changed the color of California's sky. The clouds of smoke, filtering the sun, gave the sky an otherworldly color. As you may recall from the news, many residents tried to capture this frightening sight with their smartphone cameras. However, it was impossible to reproduce the color of that sky on their screens: it came out gray and washed out in the photographs.

This curious phenomenon is a reminder of what photography really is. Cameras are not windows open onto the world. They are machines that artificially reconstruct information into a plausible representation of reality. In this particular case, the smartphone's algorithms tried to give the sky a "normal" color. The moment a photograph is taken, a dozen or so rapid processes begin manipulating the light. It would therefore be unrealistic to oppose edited photos to supposedly "neutral" ones.

Recent advances in computer vision and artificial intelligence are new steps on this spectrum. Like cameras, they are devices that allow us to manipulate reality so that it matches whatever criteria we have set. The more powerful these tools become, the more our editorial approach matters. Creating a blurred sky, a smooth surface, or a brilliant white can all be automated, just as demosaicing and white balance are today.

We now have powerful tools to put our editorial choices into action; but we first need to decide what approach we want to pursue. And that is something no algorithm will ever be able to do for us.
