UX Writing Weekly #214

Will the robots take our jobs and destroy civilization?

And the winner of "word of the year" for 2022, as chosen by writers in tech (and not by robots), is … gaslighting!

Permacrisis came in a close second, with goblin mode not too far behind. Homer struck out, and there was even a write-in vote for "Huh? What? Was this really a thing?", which likely captured how many felt.

OK, enough WOTY already—bring on the AI! This special issue is a deep dive into AI, so grab a snack and get comfy.


Issue #214 (Dec 14th, 2022)

  • AI: the good, the bad, & the terrifying 🤖

  • AI meets UX 🦾

  • AI: where are we now? 🤷

  • Articles, microcopy, and conversations 💬

For the past few years, we’ve been following the development of AI, particularly AI writing tools and how they might affect our field. Are they a fad, a gimmick, a fancy toy we’ll soon grow tired of? That’s almost certainly not the case.

But how will they impact professional writers? And is AI itself a threat to human civilization?

We read through tons of AI articles so you don’t have to. In this issue, we’ll let the headlines and bullet points tell the story, and give you our take on the matter.

(Created using Midjourney)


Even if you have been living in a cave, you’ve probably heard of OpenAI’s new chatbot, ChatGPT. Built on the new text-davinci-003 model, ChatGPT can understand complex instructions and produce remarkably high-quality long-form text, rhyming poetry, and even functional software code.

It is game-changing technology that, in an instant, became available to all, and people from all fields are now scrambling to process the implications.

The robo-cat is out of the bag, and there’s no putting it back in. AI is now part of our lives in a direct way.


Artificial intelligence dates back to the 1950s but has recently seen exponential growth in its abilities. And while it’s been a boon to fields like science and medicine, the potential for misuse looms large with consequences ranging from troublesome to downright terrifying. 

AI in science and medicine

‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures

  • Google’s deep-learning program for determining the 3D shapes of proteins stands to transform biology.

  • The ability to accurately predict protein structures from their amino-acid sequence would vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery.

How AI found the words to kill cancer cells

  • Using new machine learning techniques, researchers have developed a virtual molecular library of thousands of "command sentences" for cells, based on combinations of "words" that guided engineered immune cells to seek out and tirelessly kill cancer cells.

Google Creates AI that Detects Lung Cancer Better than Doctors

  • The AI model detected lung cancers 5 percent more often than the experts and was 11 percent more likely to decrease the rate of false positives.

Using Machine Learning AI to Detect Schizophrenia

Stanford algorithm can diagnose pneumonia better than radiologists

The potential benefits of AI are limitless. Yet many warn of its dangers, including deepfakes and synthetic media that could increase disinformation, erode trust in institutions, and hinder our ability to distinguish between what’s real and what isn’t.

And, not to alarm you, but some predict AI will cause massive economic disruptions and even threaten human civilization. So there’s that.

AI: problems and dangers

OpenAI's new ChatGPT bot: 10 dangerous things it's capable of

  • It can write phishing emails. It can write software... and malware.

  • It's capable of being sexist, racist, and immoral.

  • It could redefine supply, demand, and the economy.

  • It's convincing even when it's wrong.

AI is finally good at stuff, and that’s a problem

  • Students [are] using all sorts of artificial intelligence … to complete their assignments.

  • The system can say biased and offensive things. 

  • Answers provided by the AI were recently banned from the coding feedback platform Stack Overflow because they were very likely to be incorrect.

Artificially (un)intelligent: An AI search engine for science spits out climate denialism and covid misinformation

  • A new tool says it can pull out "consensus" scientific findings from across peer-reviewed literature.

  • But experts say that in its current beta version, this AI provides answers that can range from wrong to incoherent.

Is ChatGPT a ‘virus that has been released into the wild’?

  • Some critics fear that [OpenAI’s tech] could be our undoing, especially with more sophisticated tech reportedly coming soon.

  • "Shame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society … ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions."

  • "The most disruptive change the U.S. economy has seen in 100 years," and not in a good way.

  • People are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’

  • The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose.

OpenAI’s attempts to watermark AI text hit limits

  • It's proving tough to rein in systems like ChatGPT.

  • OpenAI is developing a tool for "statistically watermarking the outputs of a text [AI system]."

  • "I think it would be fairly easy to get around it by rewording, using synonyms, etc. … This is a bit of a tug of war."
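To make the idea concrete, here’s a toy sketch of how statistical watermarking can work. This is an illustration, not OpenAI’s actual (unpublished) scheme: a secret key deterministically marks roughly half the vocabulary as "green" for each context, generation prefers green tokens, and anyone holding the key can later measure the green fraction to detect the watermark.

```python
import hashlib

KEY = b"secret-watermark-key"  # hypothetical shared secret

def is_green(prev_token: str, token: str) -> bool:
    """Keyed pseudorandom coin flip: marks roughly half the vocabulary
    'green' for any given preceding token."""
    digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0

def pick_token(prev_token: str, candidates: list[str]) -> str:
    """Generator side: prefer a green candidate when one exists."""
    for token in candidates:
        if is_green(prev_token, token):
            return token
    return candidates[0]  # fall back to the model's top choice

def green_fraction(tokens: list[str]) -> float:
    """Detector side: fraction of adjacent token pairs that are green.
    Watermarked text scores well above the ~0.5 chance level."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)
```

The weakness quoted above falls out immediately: rewording with synonyms changes the token pairs and washes the statistical signal out, which is exactly the "tug of war" the researcher describes.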

Deepfakes and synthetic media

How ‘synthetic media’ will transform business forever

  • Synthetic media is any kind of video, picture, virtual object, sound, or text produced by, or with the help of, artificial intelligence (AI).

  • Over the next 10 years, AI will improve, accelerate and transform the creation of video, sound, and words.

  • In the book "Deepfakes: The Coming Infocalypse," author and synthetic media analyst Nina Schick estimates that some 90% of all online content may be synthetic media within four years.

  • From a disinformation perspective, this is a game-changer.

Why deepfake phishing is a disaster waiting to happen

  • In 2021, cybercriminals used AI voice cloning to impersonate the CEO of a large company and tricked the organization’s bank manager into transferring $35 million to another account to complete an "acquisition."

  • Threat actors can not only mimic an individual’s physical attributes to fool human users via social engineering, they can also defeat biometric authentication solutions.

Skynet: science fiction or science fact?

Artificial Intelligence Will ‘Likely’ Destroy Humans, Researchers Say

  • AI can eliminate humanity, according to scientists at Google and the University of Oxford.

  • Researchers argue that humanity could face its doom in the form of super-advanced "misaligned agents" that perceive humankind as standing in the way of a reward.

(Jeopardy GOAT Ken Jennings after being defeated by Watson, IBM’s question-answering computer, in 2011)


Assuming that civilization survives, let’s look at AI’s impact on UX. New developments could impact the entire product team: designers, writers, and developers. 

AI-generated art

Programs like Midjourney and DALL·E 2 let users create stunning digital art simply from a text prompt. The use cases are vast, but (surprise, surprise) there are concerns.

An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.

  • An AI-generated piece of art won the Colorado State Fair’s contest for emerging digital artists.

The Wes Anderson artbot craze is a fun trend, but it clarifies AI art’s ethical issues

  • ‘What if Wes Anderson directed Star Wars’ is a great prompt, but it highlights how far AI goes in stealing styles.

  • No artbot is going to actually replace Wes Anderson: His work comes from a distinctive voice and artistic mindset.

  • It’s easy to … see how readily AI art generators can devalue an individual artist’s style and voice, by making endless creative variations easily available at the push of a button. 

  • Future AI bots may guard against exactly this kind of specific stylistic imitation.

Nightmarish cereal boxes show the limits of AI image generators

  • While AI image generation technology has advanced at a terrifying speed, it still faces challenges when confronted with things like text and logos in its training data. 

Lensa AI and the Trap of Otherworldly Beauty

  • The app, which prompts users to upload photos of themselves and then spits out 50+ AI-generated portraits, has been decried by artists as predatory to real, human-made artwork.

  • Lensa AI is able to transform innocent photos of fully-clothed women or children into AI-generated nudes.

Generative art tools can be lots of fun—we’ve been using them at the HUB for a while now. And the rate at which they’ve improved is crazy! In a few short months, we went from this avocado chair (Jan ‘21):

To this (Dec ‘22):

Yet the technology still has limitations. For instance, it’s very difficult to reproduce the look of a custom character or create a series of consistent images. For one-off images, it certainly threatens artists. But when it comes to things like illustrating a book with consistent characters and style, it falls flat.

As for its impact on UX design, so far it’s minimal. Check out the discussion around this AI-generated gym app.

(Gym app created using Midjourney)

AI-generated texts

GPT-3 is a computer program that uses advanced algorithms and a vast amount of data to understand and generate human language. At least, that’s what ChatGPT said when asked to explain GPT-3 at a 7th-grade reading level.

There are now many AI writing tools on the market, and we at the UX Writing Hub have used them in the past. But does this new text-davinci-003 model represent a sea change in how writers will work?
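For a sense of what’s under the hood of these tools: the sketch below builds (but doesn’t send) a completion request against OpenAI’s 2022-era REST API using the text-davinci-003 model. The endpoint, field names, and model name follow OpenAI’s public API docs of the time; the prompt, key placeholder, and parameter values are purely illustrative.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, api_key: str,
                             model: str = "text-davinci-003",
                             max_tokens: int = 256) -> urllib.request.Request:
    """Build (but don't send) an HTTP request for a text completion."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,  # some randomness; 0 would be more deterministic
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_completion_request(
    "Explain GPT-3 at a 7th-grade reading level.",
    api_key="sk-...",  # placeholder; a real key is required to send it
)
# urllib.request.urlopen(req) would actually send it; omitted here.
```

Every "AI writing tool" built on GPT-3 is, at bottom, wrapping a call like this with its own prompts and UI.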

Why We're All Obsessed With ChatGPT, A Mind-Blowing AI Chatbot

  • This artificial intelligence bot can converse, write poetry and program computers. Be careful how much you trust it, though.

The 5 Best Uses (So Far) for ChatGPT's AI Chatbot

  • A Chrome extension … to negotiate lower bills from internet providers, hospitals and more.

  • Make a diet and workout plan.

  • Generate the next week's meals with a grocery list.

  • Create a bedtime story for kids.

  • Prep for an interview.

Five Remarkable Chats That Will Help You Understand ChatGPT

  • The powerful new chatbot could make all sorts of trouble. But for now, it’s mostly a meme machine.

OpenAI's ChatGPT is scary good at my job, but it can't replace me (yet)

  • Letting an AI write product reviews (for an iPhone 14).

  • The flow of the review is textbook; ChatGPT sets the tone with what it thinks about the iPhone 14 Pro, talks about the design, performance, cameras, and battery life, and even concludes with a definitive recommendation.

  • The main problem with the AI-generated product review is that it's inaccurate.

We asked Artificial Intelligence to review Master Of Puppets and now we may be out of a job

  • Review Metallica's album Master Of Puppets favorably, emphasizing the bass playing of Cliff Burton.

  • Review Metallica's album Master Of Puppets unfavorably, emphasizing the drumming of Lars Ulrich.

  • Both reviews were quite good.


The technology is impressive—staggeringly so. AI can also write software and film scripts (complete with character descriptions and dialogue), and it’s getting better at complex games.

There’s no doubt that AI writing tools can help us improve our workflow. Yet, AI-generated texts still have a big problem—they are, far too often, full of shit. They lie, they B.S., they make stuff up. And what’s more, they do so without knowing that they’re lying. 

This is because, as impressive as the technology is, AI cannot yet truly think, at least, not in the way humans can.

And since writing is, above all, thinking, AI cannot yet replace human writers; it is not currently a threat to most writing jobs. And whether or not AI will ever be able to think like humans do is still up for debate.

In the meantime, one thing AI can do is write blog posts that are, in some cases, good enough for SEO purposes. So SEO-focused content writing is probably the most threatened writing job for now.

As for misinformation, B.S. on the internet is nothing new—people have been dealing with mis- and dis-information for as long as we’ve had written language. And although AI-generated texts will likely bring an increase in misinformation, perhaps they will force us all to become better judges of information, vigilant and hyper-aware of the need to carefully discern fact from fiction, truth from lies, news from fake news, human from robot.

Perhaps ChatGPT will give our collective B.S. detector a much-needed fine-tuning.


Though most of our UX jobs are safe for now, we must start preparing for the future. UX writer/content designer Sophie Strosberg shares what she’s learned about putting a human touch on your work.

Here are five ways to show your humanity in your work and stay ahead of the AI curve.

We. Are. Not. The. Robots. (Working humanely in the age of AI)



Community answers to trending topics. Join the conversations below.


AI writing tools:


OpenAI’s playground

Other AI writing tools

AI image tools:



Other AI image tools


So what’s your take on all of this? Join the communities below and have your say:

Enjoying UX Writing Weekly? Share it with your UX besties.

Next week, back to normal.

Join our FREE UX writing course

In this FREE industry-leading course, you’ll learn about:

  • UX writing processes 
  • Testing
  • Research
  • Best practices