Your Creativity Won't Save Your Job From AI

Robots were once considered capable only of unimaginative, routine work. Today, they can write articles and create award-winning art.

In 2013, Oxford researchers published an analysis of jobs most likely to be threatened by automation and artificial intelligence. At the top of the list were occupations such as telemarketing, hand sewing, and brokerage clerking. These and other at-risk jobs involved doing repetitive and unimaginative work, which seemed to make them easy pickings for AI. In contrast, the jobs deemed most resilient to disruption included many artistic professions, such as illustrating and writing.

This assumption was always a bit dubious. We built machines that mastered chess before we built a floor-cleaning robot that wouldn't get stuck under a couch. But in 2022, technologists took the conventional wisdom about AI and creativity, set it on fire, and threw its ashes into the waste bin.

This year, we've seen a flurry of AI products that seem to do precisely what the Oxford researchers considered nearly impossible: mimic creativity. Large language models such as GPT-4 now answer questions and write articles with astonishingly human-like precision and flair. Image generators such as Stable Diffusion, Midjourney, and DALL-E 2 transform text prompts into gorgeous—or, if you'd prefer, hideously tacky—images. This summer, a piece created with Midjourney won first place in a digital-art category at the Colorado State Fair; artists were furious.

AI already plays a crucial, if often invisible, role in our digital lives. It powers Google search, structures our experience of Facebook and TikTok, and talks back to us in the name of Alexa or Siri. But this new crop of generative AI technologies seems to possess qualities that are more indelibly human. Call it creative synthesis—the uncanny ability to channel ideas, information, and artistic influences to produce original work.

Articles and visual art are just the beginning. Google's AI offshoot, DeepMind, has developed a program, AlphaFold, that can determine a protein's shape from its amino-acid sequence. In the past two years, the number of drugs in clinical trials developed using an AI-first approach has increased from zero to almost 20. "This will change medicine," a scientist at the Max Planck Institute for Developmental Biology told Nature. "It will change research. It will change bioengineering. It will change everything."

In the past few months, I've been experimenting with various generative AI apps and programs to learn more about the technology that I've said could represent the next great mountain of digital invention. I've been drawn to playing around with apps that summarize large amounts of information. For years, I've imagined a kind of disembodied brain that could give me plain-language answers to research-based questions. Not links to articles, which Google already provides, or lists of research papers, of which Google Scholar has millions. I've wanted to type questions into a search bar and, in milliseconds, read the consensus from decades of scientific research.

Consensus, a start-up building exactly that kind of research tool, is part of a constellation of generative AI start-ups that promise to automate an array of tasks we've historically considered for humans only: reading, writing, summarizing, drawing, painting, image editing, audio editing, music writing, video-game designing, blueprinting, and more. After speaking with the Consensus founders, I felt thrilled by the technology's potential, fascinated by the possibility that we could train computers to be extensions of our own minds, and a bit overwhelmed by the scale of the implications.

Let's consider two such implications—one commercial and the other moral. Online search today is one of the most profitable businesses ever conceived. But it seems vulnerable to this new wave of invention. When I type "best presents for dads on Christmas" or look up a simple red-velvet-cupcake recipe, what I'm looking for is an answer, not a menu of hyperlinks and headlines. An AI that has gorged on the internet and can recite answers and synthesize new ideas in response to my queries seems like something more valuable than a search engine. It seems like an answer engine. One of the most interesting questions in all of online advertising—and, therefore, in all of digital commerce—might be what happens when answer engines replace search engines.

On the more philosophical front, I was obsessed with what the Consensus founders were actually doing: using AI to learn how experts work, so that the AI could perform the same work with greater speed. I came away from our conversation fixated on the idea that AI can master certain cognitive tasks by surveilling workers to mimic their taste, style, and output. Why couldn't some app of the near future consume millions of advertisements that have been marked by a paid team of experts as effective or ineffective, and over time master the art of generating high-quality advertising concepts?

If you frame this particular skill of generative AI as "think like an X," the moral questions get pretty weird pretty fast. Founders and engineers may over time learn to train AI models to think like a scientist, to counsel like a therapist, or to world-build like a video-game designer. But we can also train them to think like a madman, to reason like a psychopath, or to plot like a terrorist.

We may be in a "golden age" of AI, as many have claimed. But we are also in a golden age of grifters and Potemkin inventions and aphoristic nincompoops posing as techno-oracles. The dawn of generative AI that I envision will not necessarily come to pass. So far, this technology hasn't replaced any journalists, or created any best-selling books or video games, or designed some sparkling-water advertisement, much less invented a horrible new form of cancer. But you don't need a wild imagination to see that the future cracked open by these technologies is full of awful and awesome possibilities.

James Huang, August 11, 2024