
There’s been a constant media drumbeat over the last several years that some sort of all-powerful, all-seeing, all-knowing conglomeration of algorithms is just over the horizon, poised to sweep away everything in its path. Either artificial intelligence (AI) will take over and render designers useless (if not enslaved), or AI will be the greatest thing since sliced silicon wafers.

Here’s a handy way to demystify AI: it is a system of automatic decision-making based on data. Ask Siri a question, and you are using AI. Use Waze to find the shortest route through crosstown traffic, and you are using AI. Ride on a jet plane, and you are flying with AI. So if you are worried about AI coming, relax. It’s already here.

Penny is an AI created by San Francisco–based data visualization studio Stamen Design and researchers at Carnegie Mellon University’s Center for the Arts in Society. Built using an analytics platform by satellite imagery provider DigitalGlobe, Penny was trained to predict the average household income level of any area of a city by looking at a satellite image. Stamen designed an interface (above) to enable users to explore how Penny “thinks.” Users drop urban features, such as trees or a high-rise apartment building, wherever they want in high-resolution satellite imagery of New York City or Saint Louis, seeing how this affects Penny’s prediction about the household income of that area. Interestingly, it’s easier to convince Penny that a rich neighborhood is poor than vice versa. The project investigates how we might discover and discuss the hidden and latent biases in AI systems.

When it comes to disruptive technology, Eric Rodenbeck, founder of San Francisco–based data visualization studio Stamen Design, takes the long view. “Every technology revolution carries with it an attendant anxiety,” he says. “I got my start when the Mac was new. Everyone freaked out about digital typography, about the death of print, and the rise of magazines like Ray Gun and Emigre.” And yet, he notes, “There’s more printed material out there than ever.”

Today, Rodenbeck is positioning Stamen at the intersection of AI and design. Sources of new data are all around us, he says, and they are only going to increase. A new project for a nonprofit working on preventing wildfires in California provides a case in point. “It’s really expensive to do on-the-ground surveys of canopy conditions at scale.” By training an AI system, Rodenbeck explains, “Stamen can compare satellite imagery. If you take samples from a couple of different places, match them up, then run [the imagery through] a machine learning program, you can model the entire forest canopy at any moment.” This enables stakeholders to “build better tools, make better predictions, and prepare better emergency medical services systems now, in a way that doesn’t break the bank,” Rodenbeck says.
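The sample-and-match idea Rodenbeck describes can be sketched in a few lines. This is a hypothetical illustration, not Stamen’s actual pipeline: the feature values (stand-ins for something like greenness and texture), the condition labels and the nearest-neighbor classifier are all invented for the example; real work would use multispectral imagery and a trained machine-learning model.

```python
# Hypothetical sketch: label a few sampled patches from on-the-ground
# surveys, then classify every remaining image patch by nearest neighbor
# in feature space. Features and labels here are invented stand-ins.

LABELED = [  # (greenness, texture) -> canopy condition from a field survey
    ((0.82, 0.31), "healthy"),
    ((0.45, 0.62), "stressed"),
    ((0.12, 0.20), "cleared"),
]

def classify(patch):
    """Assign the label of the nearest surveyed sample (1-NN, squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(LABELED, key=lambda item: dist2(item[0], patch))[1]

# "Model the entire forest canopy" over a (tiny) mock scene of patches.
scene = [(0.80, 0.35), (0.40, 0.60), (0.15, 0.25), (0.78, 0.30)]
canopy_map = [classify(p) for p in scene]
print(canopy_map)
```

The point of the sketch is the economics Rodenbeck highlights: a handful of expensive ground-truth samples, matched against cheap satellite imagery, lets a model extrapolate to every patch in the scene.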

Design visionary John Maeda, who cut his teeth on AI at MIT’s AI Lab in the 1980s, and, until recently, led computational design and inclusion at Automattic, the company behind WordPress.com, WooCommerce and Jetpack, thinks some designers are right to be worried. Maeda uses this example: “If you are making a poster in Adobe Illustrator, there are an infinite amount of posters you can make. But they all typically have a graphic, some copy, dates, times, places and they will typically fall into just a few families of styles.”

How does that play into AI? “Before, when a designer created a poster,” Maeda explains, “all we had was a finished image. But now, we have patterns of usage that can be interpreted by a computer. All that pattern data, churned by a gigantic machine-learning algorithm, is combined in a software system that helps you design a poster that’s ‘knowable’ from looking at styles of patterns.” In other words, “AI reduces design to a recipe book. Designers who are good at original thought have nothing to worry about,” Maeda says. “People who are not as good... well, that’s not going to be a winning formula.”


One designer clearly capable of original thought is Jeff Ong, a design director at Automattic. Ong, who learned how to use the tools of the web to make art at New York University’s Interactive Telecommunications Program and then went on to apply emerging technology to design problems as lead technologist at frog design, calls himself a “computational designer.”

Ong explains that the work of a computational designer means taking a problem—such as translating a design intention into a user interface—and using software to generate solutions to automate aspects of the creation process. A computational designer, Ong says, “defines the constraints of the system and writes code that can generate design outputs.”

To help implement a refreshed brand language at WordPress.com, Ong wrote a piece of software that generates images in the style of this visual language. The software analyzes a batch of portraits, selects one, places copy, and composes and renders an image in a few seconds. Instead of having to cut many similar images “by hand,” a computational designer creates templates—essentially, raw text files—that describe what the image should look like, including details like dimension, color and copy. An open-source tool called DrawBot is used to render the image. “The software can generate a wide variation of the kinds of assets needed for marketing, including images for social media, HTML5 banners and content marketing,” Ong says.
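A minimal sketch of that template idea might look like the following. Everything here is hypothetical: the template keys, the copy and the colors are invented, and the output is plain SVG rather than a DrawBot rendering, since DrawBot itself is a macOS tool. The shape of the workflow is what matters: a raw text file describes the asset, and code composes the image.

```python
# Hypothetical sketch of template-driven asset generation: a plain-text
# template (dimensions, color, copy) is parsed and expanded into an image.
# Output is SVG markup here; a real pipeline might render via DrawBot.

TEMPLATE = """\
width: 1200
height: 628
background: #0675C4
copy: Meet the new WordPress.com
copy_color: #FFFFFF
"""

def parse_template(text):
    """Parse 'key: value' lines into a dict (keys lowercased)."""
    spec = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        spec[key.strip().lower()] = value.strip()
    return spec

def render_svg(spec):
    """Compose a minimal SVG banner from the parsed spec."""
    w, h = int(spec["width"]), int(spec["height"])
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">'
        f'<rect width="{w}" height="{h}" fill="{spec["background"]}"/>'
        f'<text x="{w // 2}" y="{h // 2}" fill="{spec["copy_color"]}" '
        f'text-anchor="middle" font-size="48">{spec["copy"]}</text>'
        "</svg>"
    )

banner = render_svg(parse_template(TEMPLATE))
print(banner)
```

Swap in a different template file and the same code emits a different asset, which is exactly why one script can cut hundreds of similar images that would otherwise be made “by hand.”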

In other words, Ong explains, “Designers don’t always have to push the pixels around; they can be thinking of what’s next. There are more-interesting creative problems for designers to solve than cutting banner ads.”

Characterizer, a feature in Adobe Character Animator that is powered by Adobe Sensei AI technology, enables users to quickly create puppets based on images taken from a webcam. Users stylize the puppet by choosing one of the built-in art styles or creating their own style from any piece of reference art.

That’s exactly what Tom Ollerton, founder of Automated Creative, does with the popular I’ll be Back meet-ups he runs in London. Compelled by the provocative headline “Will a Robot Take My Job?” bright minds from agencies, universities and startups pack a room each month to discuss the future of AI.

What AI is good at, Ollerton says, “is looking at what’s happened before and predicting what will happen next. The key thing is that it learns. This gets interesting when you can teach the same machine to learn whether an advert is a good advert—or not.” Ollerton believes this type of tech is creating new advertising jobs, not putting people out of them.

Ollerton’s company, Automated Creative, is a tech platform that experiments with AI to make ads. The way it works, Ollerton explains, is “the creative agency comes up with the strategy, the platform and the idea, then shoots the video. We use AI to analyze the content of the ads and break them down into different emotional triggers. We then test those emotional triggers at an incredible speed and scale to work out in real time what matters to the audience. Then we generate hundreds of variants of the images and written words, and explode that out on Facebook and Google.”
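The variant-explosion step Ollerton describes can be sketched as a simple cross product of creative elements. This is a hypothetical illustration only: the trigger phrases, headlines and calls to action are invented, and a real platform would score each variant against live click data rather than just enumerate them.

```python
# Hypothetical sketch: cross every emotional trigger, headline and
# call to action into a flat list of ad-copy variants for testing.
from itertools import product

triggers = {
    "urgency": "Only a few left",
    "belonging": "Join thousands of happy customers",
    "curiosity": "You won't guess what's inside",
}
headlines = ["Fresh roast, delivered", "Coffee that works as hard as you"]
calls_to_action = ["Shop now", "Try it free"]

def generate_variants():
    """Cross triggers x headlines x CTAs into labeled ad-copy variants."""
    variants = []
    for (name, trigger), headline, cta in product(
        triggers.items(), headlines, calls_to_action
    ):
        variants.append({
            "trigger": name,
            "copy": f"{headline}. {trigger}. {cta}.",
        })
    return variants

ads = generate_variants()
print(len(ads))  # 3 triggers x 2 headlines x 2 CTAs = 12 variants
```

Scale the input lists up and the combinatorics do the rest, which is how a platform gets to “hundreds of variants” from one shoot, each one tagged with the emotional trigger it is testing.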

“Each advertisement becomes a tiny data point,” Ollerton says. “We collect that, centralize it and the AI tells us what to create next based on what the audience responds to. Advertising doesn’t need to be perfect, it just has to work. And by ‘work,’ I mean it just has to get a click. AI doesn’t care about awards. It just gets the job done.”

Adobe, meanwhile, has positioned its AI as a time-saving initiative it calls Sensei. Behind the scenes, Sensei-powered tools streamline photo editing for Lightroom, implement Face-Aware Liquify in Photoshop and automate color matching in Premiere Pro. According to Silka Miesnieks, head of Emerging Design at Adobe, it’s all part of an effort designed to “eliminate the drudgery associated with those jobs. People in the design community worry that AI is eliminating craft,” she says. “But then when you point out all the time they spend tweaking images, they say, ‘Oh yeah, I don’t want to do that.’”

While drudgery may disappear, Miesnieks thinks design jobs aren’t going away anytime soon. “We’ll need more people, not fewer, because there’s only more production coming: more messaging, more visuals and more content, which demands more creativity and more problem-solving,” she explains.

Instead of worrying about AI, Miesnieks thinks designers should lean in. To start, she suggests taking online courses in AI and machine learning. “You don’t have to write an algorithm,” she says. “But you have to understand what an algorithm is, what a model is, what a data set means, so you can have the right discussion when working with engineers, policy makers and C-suite executives.”

Meet Lucy, an interactive character created by virtual beings company Fable. Audiences were first introduced to Lucy in the initial chapter of Wolves in the Walls, an animated VR experience based on the book by writer Neil Gaiman and illustrator Dave McKean. By powering Lucy with AI, Fable has enabled her to interact naturally with viewers. In the upcoming experience Whispers in the Night, which builds off the world of Wolves in the Walls, viewers will be able to have a unique conversation with Lucy.

For millennia, people have gathered to tell each other stories. The power and pleasure of story is one of the things that makes us human. Surely there’s no room for AI in that.

Edward Saatchi would like to disagree. “The next great art form is artificial intelligence,” he says. His new company, Fable, which he cofounded with director Pete Billington, is busy combining AI with storytelling. By deploying computer vision, image recognition, and natural language processing and generation, Fable is connecting subfields of AI, which will serve as the eyes, ears, mouth, heart, memory, creativity and will of something very nearly human: a “virtual being.” And Fable’s team of engineers and storytellers has already created one.

Fable’s animated VR production Wolves in the Walls, based on the book by writer Neil Gaiman and illustrator Dave McKean, introduced us to Lucy, a virtual being who draws you into her world. “She can hand you things, follow your eyes to make eye contact, have a conversation with you, and respond and react to what you are doing in the space,” Saatchi explains. The more you share, the more she shares.

Lucy is only the beginning. Magic Leap recently unveiled a virtual being named Mica, who may one day become an AI-powered assistant. Talespin introduced Barry to human resources departments in soft skills VR training sessions. And Lil Miquela has gathered 1.5 million followers on Instagram, appeared at Coachella and kissed Bella Hadid in an advertisement for Calvin Klein. As AI matures and merges with AR and VR, the future of virtual assistants imagined by Blade Runner 2049 is poised to become real. And we’ll see it long before 2049.

Sam McMillan is a San Francisco Bay Area–based writer, teacher and producer of interactive multimedia projects for a number of Bay Area production houses, and can be reached at sam@wordstrong.com.
