Long ago and far away, I used to spend much of my day creating perfectly cropped photos, ensuring that whether you landed on the homepage, clicked directly to the article or saw the piece via social, the right faces were all showing and no vital details (like eyes) were missing. When the early rumblings of Artificial Intelligence (AI) and Machine Learning (ML) for Digital Asset Management (DAM) started, my fellow photo editors and I were worried we’d be replaced.
What actually resulted was predictable, as with any industrialization - some good, some bad. The good: removal of repetitive, mind-numbing labor. The bad: removal of some co-workers.
Like other fields where robots have arrived, DAM has to reckon with what it means when the hours spent tagging, naming, sharing or cropping are cut by automation. We will discuss the state of AI for DAM in more detail in today’s webinar (plug: join us at noon ET), but AI’s smart crop function is a pretty good encapsulation of the tasks to come.

DAMs are getting very good at creating smart crops: DAM professionals provide the original assets, the resolution and the image dimensions, then train the tool on whatever additional requirements (room for text overlay, for example) we have for the images.
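To make that a little more concrete, here is a minimal sketch of what a subject-aware crop can look like under the hood. It uses Python and OpenCV’s bundled face detector as a stand-in for the far more sophisticated saliency and subject-detection models a real DAM platform would run; the smart_crop function, its parameters and the text_margin_frac knob for text-overlay headroom are all hypothetical, not any vendor’s actual API.

```python
# Rough sketch of subject-aware cropping: find faces, then fit a crop of the
# target aspect ratio around them, optionally reserving space for a text overlay.
# Names and parameters are illustrative only, not a DAM platform's real interface.
import cv2

def smart_crop(image_path, target_w, target_h, text_margin_frac=0.0):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    h, w = img.shape[:2]

    # Stand-in subject detector: OpenCV's bundled Haar-cascade face model.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        # Bounding box around every detected face - the "important" region.
        x0 = min(x for x, y, fw, fh in faces)
        y0 = min(y for x, y, fw, fh in faces)
        x1 = max(x + fw for x, y, fw, fh in faces)
        y1 = max(y + fh for x, y, fw, fh in faces)
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    else:
        cx, cy = w / 2, h / 2  # no subject found: fall back to a center crop

    # Largest crop of the target aspect ratio that fits inside the image.
    aspect = target_w / target_h
    crop_w = min(w, int(h * aspect))
    crop_h = int(crop_w / aspect)

    # Center the crop on the subject, shifted up to leave room for a text overlay.
    cy -= crop_h * text_margin_frac / 2
    left = int(min(max(cx - crop_w / 2, 0), w - crop_w))
    top = int(min(max(cy - crop_h / 2, 0), h - crop_h))

    crop = img[top:top + crop_h, left:left + crop_w]
    return cv2.resize(crop, (target_w, target_h))
```

A production system would also weigh logos, products and other non-face subjects, and would typically return crop coordinates rather than finished pixels so that an editor can review and reframe - which is exactly where the bias questions below come in.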
These decisions on cropping are hardly new. Even before we relied on digital tools like Photoshop, commercial, fashion and art photographers cropped their photos too - first by framing the shot in camera, and later in the processing and printing of their work. Proofs layered on light tables, marked-up prints, physically cut negatives - all ways to crop.
Questions of objectivity started almost as soon as news photography began. Why did you select this framing, with sinister shadows covering only one political candidate? Why did you exclude the trash just next to the garden for an article on city beautification? Or, why did you choose to include the trash? Even before bringing in image selection, collage, lighting, angle and the other ways we influence the mood and impact of our photos, cropping has always been how photographers decide what the important information in a photo is. Then come the editors, publishers, framers and other people involved in image selection and cropping, who make their own decisions, saddled with their own biases, before the image is shared with its audience.
We do this in our personal photos too, of course - cropping out the person we wish hadn’t come to the party, the power lines sullying our bucolic landscapes, even the mess at home as highlighted in this early-pandemic New Yorker cover.
Whether it is over Zoom or on our holiday photo card, we use selective cropping to remove those pesky imperfections (even before FaceTuning, etc.). Every crop is a choice. Our prejudices, taste and biases are present in every choice.
So maybe it is better to leave it to the cold, hard, data-driven machines to handle cropping. After all, the smart crop features in our DAM platforms are supposed to use content-sensitivity and subject detection algorithms to feature the most important part of the image without cropping out anything vital. But when we look at other algorithmic platforms in use today, we can see that AI and ML, programmed by people, can still pose the same questions of bias. I remember the famous Twitter McConnell/Obama crops, which The Verge explored further to show Twitter’s preference for lighter, younger, more beautiful (although I question McConnell…) faces when making selective crops. How we program these AI tools will influence the cropping decisions the AI makes.
When it comes to DAM, the content-sensitive selections have, in my experience, not yet veered into questionable results. I am sure it has happened, but this brings me back to those early photo editing days. If a person - and people are very much still important - reviews smart crops and sees that the tool is removing darker-skinned people, women or older people, the platforms I have used make it quick and easy to reframe the images, and increasingly videos, to include what I would have selected.
AI and ML are tools. These tools have the potential to increase productivity and remove unnecessary steps in the production process. As DAM extends AI into content repurposing, semantic search and asset metatagging, these decisions will need to be made for language as well as for visual choices (which, in addition to cropping, include automatic image retouching, lighting adjustments and more). I want to make sure we think about the language we train the AI to include as well. What we leave out is as important as what we include. Just like all digital tools, AI is only as unbiased and competent as the people who program it. So when I think of the future of AI & ML, it’s just as important that we imperfect humans remember to check our own biases before we fault the results these imperfect robots provide.