
Design and the rise of A.I. - a perspective

A.I. tools are everywhere - as a product designer how worried should I be?

May 14, 2023 • 🍿🍿 12 min. read posted in design

Introduction - why am I writing about this?

I'm a product designer and sometimes developer, and as a card-carrying geek, I have been watching with interest as A.I. has slowly, then very rapidly, started to weave its way into pretty much every sphere of technology online and on our devices.

I wanted to get some of my thoughts down on what A.I. is, what it isn't, where it is now, and where it's going, and attempt to answer the question: as User Interface/Product Designers, should we be worried about its impact on our day-to-day work and employment prospects, and how should we use it going forward?

This is such a fast-moving area of technology that I realise as soon as I publish this post it's going to be out of date, so I will try to be as high-level as I can, but will certainly talk about certain A.I. services that will have no doubt changed by next week 🙂

Each of the areas covered really deserves its own blog post, as there are so many deeper nuances, and all this is just my opinion (and written by Al (me), not A.I.) to be clear!

What is A.I?

For those who haven't been reading the news, or living under a rock, A.I. is everywhere, and companies are racing to add it to their products on a seemingly daily basis.

IBM say A.I.[1] (or Artificial Intelligence) is:

In its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving. It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence.

In this post when I talk about A.I. I'm referring to things like OpenAI's ChatGPT, a large language model that has been built on a huge dataset and can understand natural language input (i.e. human prompts) and respond naturally.

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
Open AI - a free research preview you can try out

A.I. simulates intelligence in a machine by responding to inputs in as 'correct' a fashion as it can figure out from its data model, and in more advanced cases can learn from your responses.

There are also a lot of other types of A.I. out there, like image-generating models such as Midjourney and Stable Diffusion, both of which are getting better by the day.

The dangers and challenges of A.I.

A.I. is opening up a whole host of ethical and moral challenges as it's being integrated into many aspects of our modern world.

Speaking purely from the perspective of text-based A.I., one challenge is helping people judge the truthfulness of A.I. responses, partly due to the lack of cited sources, but equally due to the way results are presented conversationally as fact.

If you were to search for something in Google pre-A.I., you could review the list of results to see which site you thought was most useful and trustworthy.

Once A.I. search results start coming back above regular results, it's going to be harder to see where answers come from because language models are made up of a huge amount of different sources.

The A.I. is just trying to come up with what it thinks the most likely answer to your question is, based on what it knows.

Some experts are beginning to question the legality of training data used to create language models. This data is usually scraped from the internet and often includes copyright-protected text and pirated ebooks. Tech companies creating these models have generally responded by refusing to answer questions about where they source their training data from. James Vincent - reporting in the Verge

There's no element of truthfulness or fact-checking or even 'right' - it's parsing what you write and trying to come up with what it thinks is a suitable response from its Large Language Model.

Understanding LLM responses

To generate responses, LLMs use a technique called natural language generation (NLG). This involves examining the input and using the patterns learned from its data repository to try to generate a contextually correct and relevant response.

This is the reason they're often factually wrong - LLMs don't actually "know" anything - there's no sense of what the information means, only what sounds plausible based on the training data, and what words it should output next.
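To make that "what words should come next" idea concrete, here's a deliberately toy sketch in Python. It's a simple bigram model (not how production LLMs actually work - they use neural networks trained on vast corpora), but it illustrates the same principle: the model only knows which words tend to follow which, with no notion of truth or meaning.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in some training text,
# then predict the most frequently seen follower. No understanding, no facts -
# just pattern frequency, which is the core intuition behind next-token prediction.
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    counts[current][following] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the training data."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("sat"))  # "on" - it always follows "sat" in the data
```

Scale this idea up by billions of parameters and you get something that sounds fluent and confident, yet is still fundamentally guessing the most plausible continuation.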

Accuracy can never be guaranteed, as LLMs are only as good as the quality of training data, which may contain bias, lack common sense, or be factually incorrect or out of date.

Due to their conversational nature, A.I. responses feel very human and can be very convincing, especially when you don't know the answer to something or are not familiar with the subject matter.

While a lot of folks are more likely to fact-check a random website, if a search engine responds in a conversationally convincing way without citing sources, it can be hard to know how accurate an answer is.

Ethicists warn A.I. runs the risk of biased answers, increased plagiarism, and the spread of misinformation.

There are also problems inherent to the output of language models like “hallucinations,” i.e. the tendency to simply make up information.

My advice: treat A.I. as a useful tool or assistant, but fact-check or learn the source of something before using it elsewhere.

The 6 stages of A.I. maturity

A.I. has appeared in movies for a long time, from Terminator to Her.

To place where we are now, I find it helpful to think of a maturity model for A.I.[2] - a rough timeline encompassing the past, the present, and the possible future (which may or may not happen).

To understand where we are now I'll try and break it down.

  1. PAST to NOW - A simple rules-based system - we've had this for a while. Think auto-pilots in aircraft.

  2. PAST to NOW - Contextually aware systems - think chatbots that are trained on experience in a specific domain, and are updated manually. You often see this on websites, where companies try to take the load off customer support to guide people to the most common answers.

    I feel Apple's Siri or Google's Assistant fit in here, as voice-linked tools hooked up to search indexes. Again, nothing new.

  3. PAST to NOW - Tools with Domain-specific mastery - here we have more advanced systems in particular domains. Think Chess computers or cancer diagnosis systems.
    A nice example was Google's DeepMind, which defeated the world Go champion using its AlphaGo model, and which has since been enhanced further with A.I. Again, nothing new - we've had these for a while, albeit in more specific areas.

  4. NOW to NEAR-FUTURE - Integrated A.I. systems - these are the current generation of A.I.: systems that can interact and deal with other machines and humans. Think the tools we're beta-testing as co-pilots, and search starting to make heavy use of A.I. Shortly after this, we become more like co-pilots of the tools as they advance.

  5. THE FUTURE - Artificial General Intelligence - a main aim of A.I. scientists is to develop a machine with human-like intelligence. This is what we see in SciFi movies now - I feel we are a way off, but the recent acceleration in research makes me think it's something we will see in my lifetime.

  6. FAR FUTURE - Artificial Super Intelligence - Again, you only see this in SciFi movies and it's not remotely possible currently, but this is where algorithms exceed human capabilities, the robots take over and we start to have real problems 😅

A.I. enters the SaaS product mainstream

For me, A.I. entered the SaaS Product mainstream when OpenAI opened up its APIs and reduced the cost of using/creating LLMs (Large language models).

We can see a race of people building products with A.I. as the backend - image generators, document editors and more.

A.I. in products - the early adopters

While there are a few early adopters making money from their efforts, as soon as some of the big products/tech firms add in A.I. as a baseline feature, the value of said products diminishes hugely, as once a technology is everywhere, it becomes more of a baseline than a differentiator.

The real winners are the A.I. API System owners - A.I. becomes a utility that we will subscribe to as product creators, so there's a real advantage to the big players who get there first and make things easy to use.

For those thinking "Should I add A.I. to my product?", I'd say consider it, but make sure that your product provides clear value without it. I think A.I. will soon be integrated at an operating-system level, so if this is all you do, you will eventually be surpassed.

I think we are past the point where you can say it's a feature that differentiates you, as so many people have it now.

I use Notion at work, and Craft at home for writing, and both have really good A.I. integration. I think the real differentiator is how products use the prompts.

Some examples of how Notion has integrated prompts within its tool

As a test, I ran this post through the 'Fix spelling and grammar' option in Notion, and it did a pretty good job, though I had to check it line by line as it swapped a few words around in ways that changed the meaning. So I switched to my preferred tool, Grammarly, which guides you through grammar changes more clearly and precisely.

The more control A.I. tools give the user the better.

A.I. goes mainstream in Google, Microsoft and Apple's products

At their most recent developer conference (Google I/O 2023), Google launched several new previews of how they are integrating A.I. into their products, along with several different-sized language models, some of which are small enough to be bundled in mobile phones.

The PaLM 2 LLM looks especially promising, particularly when it comes to the more customised models.

There’s a version of PaLM trained on health data (Med-PaLM 2), which Google says can answer questions similar to those found on the US Medical Licensing Examination to an “expert” level and another version trained on cybersecurity data (Sec-PaLM 2) that can “explain the behavior of potential malicious scripts and help detect threats in code,” said Petrov. Both of these models will be available via Google Cloud, initially to select customers. James Vincent - reporting in The Verge.

Google has been using A.I. for some time: 'Smart Reply' came out in 2017, then 'Smart Compose', which was rudimentary in suggesting canned responses, but now with generative A.I. it has entered the mainstream.

How Google will be generating images in Google Slides
How Google will be generating text in Google Docs

Microsoft hasn't been resting on its laurels - they write often about what they are up to in the A.I. space and have been working on some impressive products, in the developer space with GitHub Copilot for A.I.-driven coding, as well as in the search space with Bing.

Apple is a harder one to predict - while they have been using A.I. and machine learning successfully in their products for a while, with a real focus on in-device AI (like the Neural engine inside iPhones, that powers Face ID, camera features and Augmented Reality), they keep their cards very close to their chest.

Apple hasn't been particularly vocal about what they are working on, especially since there have been controversies with early iterations of Microsoft Bing AI and Google Bard, so I suspect that they will release when they feel they are ready.

WWDC, Apple's annual developer conference, is right around the corner in June, and if they don't double down on A.I., or at least show they are doing so, then I feel they will be missing a huge opportunity and be seen as lagging behind their competitors.

A.I., design and the creative arts

So what does A.I. mean for design and designers? Well, there are several areas of design that I feel are being impacted, to greater and lesser degrees.

The real change is coming as A.I. gets integrated more tightly into the existing design tools we use and new domain-specific ones come onto the market, like Studio AI, UI Wizard and Genius.

Check this great overview of how A.I. impacts design from DesignCode (they have some great playlists also)

While A.I. generates some quite impressive results, it's still a way off from integrating with, say, an existing design system for a company, but all the signs are there that this will happen as tooling matures.

A.I. and digital illustration

If I was a commercial illustrator I'd be concerned, much like photographers were at the rise of stock photography sites.

I think there will always be a market for high-quality unique illustrations, especially when it comes to keeping with a company's brand; however, there's a huge mid-section of the market out there where people are looking for 'good enough', and that's just going to be replaced by things like Midjourney and Stable Diffusion.

I've found it great for simple images, like a photo or something abstract like a painting, but trying to be nuanced and describing something complex and getting a good output has eluded me so far.

My attempt at generating a blog post image in Stable Diffusion before doodling one myself

Illustrators are safe for now, but the real step-change will be when companies master prompts, and we see integrations in products with more domain-specific LLMs - the quality is just going to increase.

Even Adobe is making huge strides with project Firefly, which is already showing promising results with generative AI models, and soon you will just be able to tell Photoshop what you want to create and it will give you a decent starting point.

A.I. and photography

I think things like Product photography and Stock photography will become a thing of the past, and will all be generated going forward.

While you can tell a photo is generated if you look closely, at a distance A.I.-generated photos are fooling people.

Photographers still have a strong role in cases where humans are involved, like documentary, reportage, wedding or portrait photography.

As a photographer myself, A.I. has been in the tools we use for some time - think content aware fill and erasing things, replacing skies and so on, but recent changes in generative photography take it to the next level.

For more, Sean Tucker has a nuanced take on A.I. and photography.

A.I. and art

This is an interesting one - while a lot of the early A.I. demos centred around art, where you might generate a picture in a certain artistic style, A.I. just lacks the human element that is fundamental to art.

For me, art (whether digital or physical) is really about the process the artist has gone through, and I think there will always be a space for physical art, like having something painted hanging on your wall.

A.I. and copywriting

Here things get more challenging. I work with some immensely talented copywriters, and I don't think we are quite at the point where a computer can write with the style or nuance that they do, with all their domain expertise and brand tonal awareness.

However, this is mainly down to the large language models in play that are trained to be all things to all people.

Once training language models become more accessible on a per-company and domain-specific basis (something Google announced as a future service at Google IO) I see the job of copywriter becoming part writer, part expert prompt wrangler.

In the future, companies will feed in all their writing and documentation, and train a language model to write copy in their voice and tone.

As a designer will A.I. take over my job?

Not yet, but it's on the horizon for some jobs.

I think if your job is to churn out the same kind of content again and again (i.e. a repeatable process), then there's a good chance that eventually, tools will replace you.

People are already using A.I. to write job adverts, or marketing copy.

The more value you add day to day (in communicating, understanding business and customer needs, and working strategically) the harder you will be to replace.

There are already design tools that you can ask for a design and they create one. The thing that's missing however is the human touch and attention to detail.

It's a bit like looking at some of the beautiful designs up on Dribbble. Sure they look nice, but a lot of them would fail accessibility checks, or not work responsively.

The more layers of obfuscation there are in the creative process the harder things are to change, so currently I feel that a design A.I. tool would have a hard time doing what I do day to day.

In a nutshell, things are getting better and better.

As a designer how can I take advantage of A.I?

While I and a lot of other folks are sceptical of the ethics and morals of A.I.-generated imagery (models are often trained on other artists' work), it's a technological step-change that I think we can't really avoid. It's coming whether we like it or not.

While I'm only using A.I. to generate things like text headlines, I'm keeping an eye on where A.I. tools are heading.

I've seen a lot of designers, artists and illustrators use A.I. as a starting point or time saver, and use the ideas generated from this to fuel their work.

If you want to give these tools a try, use them for inspiration and as a starting point, and (in the case of text) fact-check things are correct before quoting them, and cite your sources.

Get used to writing good prompts. Try out Google Bard and ChatGPT's free research preview.

Some think that we are safe for now... 😂

To replace UX/UI designers with AI, clients will need to accurately describe what they want. We're safe

...however A.I. is not going away, and as creative people and designers, I think we'd be foolish to ignore it.

Those who will succeed in future are those who learn to embrace it and work with it, not against it.


  1. https://www.ibm.com/topics/artificial-intelligence ↩︎

  2. My Friend Luke gave a great UX A.I. talk at UXCampBrighton from which I've derived this. ↩︎

