Canada is sorry for celebrating its birthday with AI

A debacle involving Adobe Stock flags the need for transparency

Canada’s embassy in D.C. deleted an AI-generated promotional image after failing to acknowledge its source, which led THE GRAIN to wonder where it came from.

Rather than coming from a prompt to a generator like DALL-E, Midjourney, or Stable Diffusion, it was sourced from Adobe Stock, under the name “Celebratory Crowd Waving Canadian Flags on City Street.”

The caption clearly indicates the image is “Made with AI,” with the subtext: “Editorial use must not be misleading or deceptive.”

There are two issues at play with the publication of this image: artificial intelligence companies potentially sidestepping copyright law, and the government passing off AI-generated images as authentic.

Companies potentially sidestepping copyright law

Adobe Stock has guidelines for accepting AI-generated images, but it’s the contributors’ responsibility to ensure all sources are legally derived. There’s little way of determining whether the source images were copyrighted if they were made in an application like Midjourney, which doesn’t verify the origins of any source material.

This particularly stings at a time when AI companies are actively urging the government to exempt them from obtaining copyright permission for the source images their models train on. Adobe recently recruited a lobbyist in Canada to influence policy and legislation around AI.

Passing off AI-generated images as authentic

We expect the government to publish content we can trust, free of misinformation. I understand this image was used as a celebratory concept for Canada’s birthday, rather than as a news image, but it still costs Global Affairs Canada trust and credibility.

In a world where images posted online are increasingly dubious, skeptics like Twitter co-founder Jack Dorsey warn that life online will soon feel like a simulation:

“This is going to be so critical in the next 5-10 years because the way that images are created—deepfakes, videos—you will literally not know what is real and what is fake. It will be almost impossible to tell.”

If this is indeed where society is headed, it is vital that government institutions remain vigilant when it comes to messaging around AI—and that includes how they use it in their own material. The public is increasingly concerned about malfeasance and misinformation in our media. The best we can hope for is that the same authorities responsible for regulation also grasp how to govern themselves.
