Senior UX/UI Designer at OZ Global



Synthetic Humans and Brand Stories

By Yoav Sondak, 25/12/2025

AI can create faces in seconds, but trust, character, and good storytelling still take human judgment.


Synthetic humans are quickly becoming the newest wildcard in the marketing world. They promise instant faces, infinite diversity, zero scheduling conflicts, and a level of visual flexibility that traditional photoshoots could never match. At the same time, they introduce new questions about realism, accuracy, and what it truly means to build trust through images. The result is a strange mix of creative freedom and creative chaos, efficiency and unpredictability, impressive breakthroughs and very real failures. In other words, exactly the kind of shift that forces brands and designers to rethink how they tell visual stories.

If you work with visuals, you cannot ignore this technology. And if you use it, you need to understand both its possibilities and its pitfalls. Here is what synthetic humans actually make easier, what they complicate, and how smart teams are learning to use them without losing their focus or their brand integrity.

What AI really makes easier in client visuals

Let us start with the fun part. AI lets us create very specific personas in minutes. If a campaign needs a clinic nurse in her early 40s with a calm presence and warm eyes, an irrigation engineer with sun-tanned skin and slightly dusty trousers, or a farmer in his fields who looks like he actually knows how to repair a dripline by hand, we can dial that in. No casting, no schedules, no “our best model canceled at the last minute” drama.

It also helps when we need groups that stock libraries do not always represent well. A mixed team of engineers with balanced ethnic diversity. A group of middle-aged professionals for a B2B product. A family that does not look like a toothpaste commercial from 1992. AI gives us a bigger playground and fewer excuses.

And yes, budget-wise it saves time. Generating a range of visual directions early in a project helps a client choose a tone before any real production happens. This is especially valuable in B2B, where you often want a mix of realism and polish without spending a fortune on photoshoots for niche equipment.

But let us be clear: it is faster than a real shoot, not magic. Getting the right facial expression, personality, lighting, emotional tone, and cultural nuance still takes iteration and artistry. Someone still needs to do the work and make creative decisions. AI is a sketchbook, not a photographer.

Representation and diversity: easier to specify, easier to mess up

AI absolutely makes it easier to design for diversity on purpose. We can ask for specific mixes of age, gender, skin tone, and cultural background that reflect a brand’s real audience rather than a generic Western template.

But here comes the cynical part. If you do not direct the AI explicitly, it tends to give you the same person over and over. A smiling young white male doctor. A slender white woman in a “corporate success” pose. A suspiciously symmetrical engineer who looks more like an NBA star than someone who works in a metal factory.

AI does not magically fix representation. It mirrors the biases of its training data unless we intervene with clear prompts and visual QA. So yes, AI is powerful, but it also needs supervision. Otherwise you get a multicultural team that all somehow look… the same.

The continuity problem (the serious one and the ridiculous one)

This is where things get interesting.

Keeping a synthetic person the same

If you generate a character once, they will look great. If you generate the same character again in a different pose or setting, there is a good chance the face will be slightly off. Or very off. Or suddenly look like their cousin. This is a known limitation: generic text-to-image models do not remember a specific face across multiple scenes. To fix that, you need a consistency-focused tool, a custom-trained character, or a very precise workflow.

The real model dilemma: stock faces, influencers, and the question of trust

At some point in almost every campaign, a very human question appears: should this face be real?

Sometimes a client selects a specific real person from a stock library. A nurse who looks exactly right for a clinic campaign. An engineer who feels credible on the factory floor. A farmer whose face tells a story of experience without saying a word. The problem begins when the campaign grows. One image is no longer enough. The client needs variations, new scenes, seasonal updates, and continuity over time. Stock libraries rarely deliver that level of flexibility.

This is where AI enters the conversation. With the right tools and workflows, it is now possible to generate synthetic variations of an existing stock model: the same person, placed in new environments, wearing different outfits, interacting with new products. When done carefully, this can extend the life of a chosen face without repeated licensing fees or logistical constraints. When done carelessly, the result is uncanny. The face looks familiar but not identical. Subtle features drift. Expressions change personality. The person becomes almost themselves, which is often worse than being clearly fictional.

This tension becomes even sharper when the conversation moves from models to influencers. Influencers are not just faces. They are identities built over time. Their value is not only how they look, but the perception of authenticity, lived experience, and continuity across platforms. An influencer has a history, opinions, imperfections, and a relationship with their audience. These are things AI can simulate visually, but not fully embody.

For this reason, real influencers still cannot be replaced in many contexts. When trust is personal, when credibility depends on lived experience, or when a brand relies on long-term emotional connection, a synthetic figure falls short. An AI-generated wellness coach or skincare expert may look convincing, but it does not age, struggle, contradict itself, or evolve the way people do. And audiences notice.

That said, artificial models are already stepping into influencer-like roles in limited and carefully framed scenarios. Virtual brand ambassadors, synthetic characters, and fictional personas can work when transparency is clear and expectations are managed. They can represent ideals, explain complex products, or act as consistent brand guides. But they are closer to mascots than humans, even when they look realistic.

The real question is not whether an artificial model can gain attention. It already can. The question is whether it can earn trust. And trust, at least for now, still depends on the belief that there is a real person behind the voice, the choices, and the imperfections. In practice, the smartest campaigns treat AI models as extensions, not replacements. They support real people, fill visual gaps, and offer flexibility where human logistics fail. But when a brand needs genuine influence rather than visual presence, the human factor remains difficult to fake.

Product continuity: a reality check

Trying to show a specific irrigation valve, cosmetic device, sensor probe, or medical connector in an AI-generated image is its own adventure. AI tends to simplify or distort product details, change proportions, or invent buttons that do not exist. So for product accuracy we still rely on photography, vector illustration, or 3D renders. AI is usually used around the product, not instead of it.

The legal, ethical and trust layer

Clients trust visuals, and as AI grows, so do the expectations for transparency. There is still legal uncertainty around how some models were trained, how copyrights apply, and how synthetic humans should be disclosed in sensitive industries. The EU is moving toward stricter transparency rules, and customers are becoming more aware of what looks AI-generated. So we treat AI the same way we treat any production tool: we choose platforms with clearer IP practices, we avoid misleading imagery, and we recommend disclosure when it matters for trust. Our philosophy is simple: a great brand story should feel honest even when the people in the picture are not.

So where does this leave us?

AI models are not replacing real photography. They are not replacing human creativity. But they have become a natural part of the workflow in branding and marketing. They help us iterate faster, visualize concepts earlier, explore diversity with more intention, and tell better stories without waiting for permissions, flights, or makeup.

They also require discipline, artistic guidance, and common sense. And sometimes they require a designer to say, “No, we are not showing the irrigation engineer as a flawless fashion model. Let us try again.”

As for the future, it is moving fast. Character consistency tools are improving. Product-level accuracy is improving. Ethical guidelines are being written. The hybrid workflow of real photography, 3D, and AI will probably become the default rather than the exception.

The conclusion is simple. AI models are not the answer to everything, but they have opened a creative door that is not going to close. The brands that benefit most will be the ones that use this tool with intelligence, taste, humor, and responsibility.

Will AI Replace Designers or Serve as Their Creative Allies?

By Yoav Sondak, 21/02/2024

Will AI replace designers, or will it become their indispensable assistant? Can anyone become a designer with AI's help? Discover Yoav's insights after extensive experimentation with AI tools, and learn what AI itself has to say about the matter.

AI is having a significant impact on the way we designers work. Designers have been using the AI tools available in widely used design software such as Adobe Photoshop and Illustrator for some time. In OZ’s design department, we’ve also experimented with a variety of dedicated AI image generators, such as Midjourney, DALL·E 2, and eluna.ai. New applications are popping up all the time, so it’s worth trying them out and comparing them to see what works best.

Since this article is about AI, let’s get some input from ChatGPT about the benefits of integrating AI capabilities into software design tools: “It’s not only enhanced the capabilities of designers but has also redefined the creative process, ushering in a new era of efficiency and innovation.” So far, so good. There are lots of advantages to using AI in design work; here are some key features:
  • Automated image editing — analyzes images to intelligently suggest enhancements and automatically applies adjustments such as color neutralization and detail enhancement. This speeds up the editing process and is a valuable resource if you’re looking for inspiration or need to make quick refinements.
  • Content-aware fill and object removal — by understanding the content of the image, these tools let you easily remove unwanted objects or fill in gaps within a scene. This allows greater flexibility and streamlines the editing workflow.
  • Accelerated ideation and iteration — lets you explore diverse concepts efficiently.
  • Intelligent recommendations — suggests shapes, layouts, and color schemes based on the context of your project. This makes the conceptualization phase faster and may even help you overcome a creative block.
  • Organization and management of design assets — automated tagging and content categorization make asset management easier and help you quickly navigate vast libraries of resources. AI can analyze vast datasets and identify patterns that we might miss.
  • Rapid prototyping — enables quick experimentation with lots of variations.

Traditional graphic tools have also made huge strides

In recent years, the software and applications we’ve been using for years have incorporated a variety of tools that shorten processes and streamline design work. Because designers can give online feedback, the tools’ capabilities keep getting better. During image processing, design tools can complete missing details, increase resolution, carry out color corrections, and produce more controlled and accurate results. There are also lots of options for image manipulations and variations. These advances have made the time between planning and the finished product much shorter, and have saved me and many others from a tiring technical struggle with traditional design tools. On top of that, AI has added new superpowers that can create impressive artwork at warp speed.

If AI can do all that, why do we need humans?

While AI provides a broad range of design options and there is a wealth of tools to choose from, the act of selection and focus during the design process has become more challenging. The need to differentiate the customer’s brand, memorably present their messages, and tell their unique story means that every designer needs a clear understanding of what they want to create using the tools. Formulating the design prompt and selecting results, when every result seems to be ‘beautiful and impressive’, means that designers need to carefully examine what is ‘right and appropriate’ — and ask themselves what will yield the desired results. It’s the same question that we’ve been asking for years when approaching a design task, long before AI tools entered our lives.

The human touch

“While AI excels at generating designs based on existing patterns, it lacks the intrinsic understanding of cultural nuances, emotional subtleties, and context that human designers bring to the table,” responds ChatGPT, adding that design is not only about aesthetics but also about storytelling, conveying emotions, and understanding the audience – elements that are deeply embedded in human experience. It goes on to point out that the human mind possesses the ability to empathize, interpret abstract concepts, and infuse designs with cultural relevance. Designers are adept at translating complex ideas into visual narratives that resonate with people on a profound level. “The intangible aspects of creativity, intuition, and emotional intelligence are quintessentially human and crucial to the design process,” it notes.

Don’t fire your designers!

Bearing these unique human attributes in mind, ChatGPT’s conclusion is almost inevitable: while AI image generators are valuable tools, amplifying designers’ creative capacities and streamlining workflows, they are best seen as collaborators rather than replacements. The future of design lies in a harmonious collaboration between human designers and AI, where each contributes its strengths. While AI enhances the design process by offering efficiency and inspiration, according to ChatGPT, “the unique qualities of human creativity, ingenuity, and understanding will continue to be an indispensable force in the world of design.”

As with any AI application, there are ethical considerations. Designers must “be mindful of potential biases in AI algorithms and exercise discretion in their application. Striking a balance between automation and human intuition remains crucial to preserving the authenticity and creative vision of the designer.” ChatGPT predicts that “As technology continues to advance, the symbiotic relationship between AI and design is poised to reshape the industry, unlocking new realms of creative possibilities.”

On a personal level, I use AI design tools daily to carry out various tasks and to fuel my creative process. I also find ChatGPT useful for formulating prompts and finding the right wording for the industry or application; the more accurate the prompt, the better the results. However, personal experience has shown that AI can’t replace the invaluable interactions between our design team and our clients. We’re able to dig deep into their story and translate that into unique and powerful visuals that communicate their brand promise. It’s the ‘secret sauce’ that makes the difference.

In short, AI tools will become our creative allies, but they’re not going to replace us any time soon.