AI Won’t Replace Your Designer, But It May Give Them Superpowers
You may have heard: don’t send your kids to art school, because the AIs have taken over the creative professions. No sooner had Midjourney and DALL-E made the news than we started to hear about Google and Facebook’s entrants in not only image generation, but full-motion video and audio. The news is coming faster than career counselors can keep up.
So let’s compare the hype to the practical applications. How can an AI add value to an initiative that needs creative expression? How can we use AI to make a greater impact and provide better service to a human audience? We’ll look at some specific uses of AI image generation in design exploration.
Greater breadth of visual content exploration
Lab Zero is primarily a product design company, but six months ago we worked with a client to build a new brand and identity. Shared moodboards, brand attributes, and journey maps informed a process that produced candidate directions for discussion with the client. A hotspot on a moodboard, for example, would become the starting point of a Figma or Photoshop exercise to create a visualization we could discuss together.
A real-life example: to connote family living in a high-rise apartment, we wanted to show a golden retriever looking out from the balcony of a 20th-floor apartment. To make the idea concrete enough to test with real people, we searched for such an image, and didn't quite nail it. The dog isn't clearly in a high-rise, and it looks like it's about to jump off the balcony.
After the discussion, we agree on a direction, and the client gives feedback such as: the dog looks in danger of falling off the balcony; can the dog be safely inside the railing? Yes, it can. Maybe we should just stage our own photography? But then the season is wrong, the dog (Lucia) is too cute, and the city (here in Sofia) is obviously not New York.
With AI, a good designer would never get stuck on the dog for more than a few minutes. They would type a prompt to get ten images of a dog on the balcony of a high-rise and pick the best one. Then they would be exploring more options from the moodboard, imagining more prompts that line up with brand attributes, finding illustrations of critical points on the user journey, typing in text from client testimonials…
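That iteration loop can be as simple as templating prompt variations before handing them to an image model. The sketch below is illustrative only: the subject, settings, and styles are made-up examples, not a real client workflow.

```python
from itertools import product

def build_prompts(subject: str, settings: list[str], styles: list[str]) -> list[str]:
    """Combine one subject with every setting/style pair into candidate prompts."""
    return [f"{subject}, {setting}, {style}"
            for setting, style in product(settings, styles)]

# Hypothetical exploration of the dog-on-a-balcony concept.
prompts = build_prompts(
    "a golden retriever looking out from a high-rise balcony",
    settings=["New York skyline at dusk", "autumn morning light"],
    styles=["editorial photography", "warm lifestyle photo"],
)
```

Each resulting prompt goes to the generator; the designer's job shifts to curating the batch that comes back.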
…and getting new ideas.
Unique content … and new ideas
On a recent project to brand a creative agency, we had a few points of information to start discovery. We had already built a logo and rudimentary design system around the brand name ‘Indaba’, which means ‘business matter’ in the language of the Zulu people. We wanted to explore linking the brand to a specific visual tradition. And we wanted to anchor the home page with something physical and tangible, something you could grab with your hand. Prompts materialized that included the following terms:
- Indaba (the brand)
Sorting through all the results and refining prompts led us through wave after wave of discovery. Color schemes emerged: bright earth tones, and textures of sandstone and clay in triangular shapes that mirrored our logo construction. Refinement led to a mockup of a home page that we could validate with real users.
Using an AI causes us as designers to shift gears, from execution to evaluation. For every generated image that harmonizes with our goal, there are perhaps ten or a hundred images that are really cool but that distract us from our goal. We invest more time in forming the concept, and leverage the AI to generate examples of the concept that allow us to validate it.
Extend & Refine
In a traditional design environment, you depend on style guides and individual aesthetics for stylistic continuity. An AI can help with that. In the previous example, we were able to launch a one-page site very quickly for a client to test an agency concept. As the site and concept matured, we became more confident of our broad thematic direction, and we also needed to build out more of the offering.
As we add services to the site, we find that we want them to be consistent with the theme overall, but we don’t want to put the same ‘amulet’ on every page. So instead of trying to find the designer who inked the original amulet with $23 markers on rice paper (for example), we can go back to the AI and just start where we left off.
Challenges Arise from Use of AI
With these new superpowers come a few new concerns.
You can’t argue with the AI
The AI can’t have a conversation with a person to understand what they want to show, so it can deviate substantially from your idea. You can’t explain to an AI what it did wrong. This makes some ‘minor’ corrections very difficult to implement, except by brute force over multiple iterations.
Reproducibility demands versioning info
The AI models are revised as you go, so you have to keep some configuration information to be able to return to the baseline where you created the original images. You need to record things like:
- precise prompts
- versions of the libraries used
- initial seeds
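One lightweight way to capture that baseline is a small metadata record saved alongside each generated image. This is a sketch under assumed names, not a prescribed format; the model string, seed, and parameters shown are invented for illustration.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GenerationRecord:
    """What you need to reproduce (or at least approximate) a generated image."""
    prompt: str       # the precise prompt text
    model: str        # generator name and version string
    seed: int         # initial seed, if the tool exposes one
    parameters: dict  # any extra knobs: aspect ratio, guidance scale, etc.

record = GenerationRecord(
    prompt="golden retriever on a 20th-floor balcony, New York, dusk",
    model="image-model-v5.2",  # illustrative version string
    seed=1234,
    parameters={"aspect_ratio": "16:9"},
)

# Persist next to the image so the baseline can be restored later.
with open("dog_balcony_v1.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```

Even if a future model version can't replay the seed exactly, the record tells you which baseline to reinstall or approximate.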
Beware getting ‘done’ too quickly
The speed with which you reach a technically well-executed concept can lead you and your client to become confident too early. It used to be that a significant part of your time was spent executing on the concept, through illustration, layout, and content creation. With the help of an AI, you get to a technically ‘complete’ design much more quickly, but you still have to hold time and space for validating and testing your concepts.
Staying on the edge of innovation
At Lab Zero, we like to keep on top of new developments, and use just enough of the latest thing to give our clients an edge. After all, our objective is to put the AI in service to the humans who will use our applications. Think automated image, text, audio or video generation would help you deliver more value to your users? Then we should talk.
Continue the conversation.
Lab Zero is a San Francisco-based product team helping startups and Fortune 100 companies build flexible, modern, and secure solutions.