I recently attended a mind-bending presentation by futurist Samantha Jordan. Her insights painted a stunning picture of how emerging technologies, particularly AI, will likely revolutionize how we live and work in the years ahead.

Futurists aren’t clairvoyants. They’re not fortune-tellers who see the future through crystal balls. Rather, think of futurists as savvy chess players, capable of analyzing all possibilities and thinking several moves ahead. Think preparation, not prediction.

A futurist’s way of thinking is what Jordan calls strategic foresight, which involves identifying present signals—observable facts or trends such as research papers, expert interviews, patent filings, and emerging job titles. Once you’ve gathered enough signals, the key is to distinguish between fleeting trends and significant developments, combining known trends with uncertainties to explore potential future scenarios.

For example, Jordan highlighted the surge of new prompt engineer roles, some offering salaries above $300,000 and requiring neither advanced coding skills nor a college degree, as a clear signal of future trends.

In other words, a futurist’s job is to “know what is plausible, so you can know what is desirable.” Ultimately, the goal is to help us make informed decisions today so we can influence or adapt to these potential outcomes.

2022: An Inflection Point In AI Accessibility and Capability

We hit an inflection point in 2022 when OpenAI released ChatGPT. The disruption wasn’t so much the technology itself (as somewhat similar services like Jasper were already on the market) but how such powerful technology was made available to everyone.

For free.

With no expertise in coding or technology required. 

My 73-year-old parents could use it easily; indeed, they have.

Generative AI’s ability to emulate human communication has made it a groundbreaking tool for communications practitioners and accelerated the shift toward more efficient communication strategies.

While this technology holds great potential for positive impact in the world, there are obvious dangers to privacy, raising many ethical and moral dilemmas. And we should confront these issues head-on, says Jordan. Halting the progress of technology is unlikely, if not impossible. Rather than shying away from harmful ideas and uses, we should challenge them and counterbalance them with positive, constructive alternatives.

With this approach in mind, let’s explore some of the signals Jordan flagged that indicate how our world as communicators is poised for disruption, both in how we work and where we work.

You Can Dynamically Engage and Deeply Personalize With AI

Jordan explained that we’re all starting to “dynamically engage through deep personalization.”

Up to this point, when we send a message (an email, an image, a PDF, or text on a website), that message generally does not change after it is published based on who is consuming it. Even with A/B testing, content does not change dynamically at the level of the individual consumer. While technically possible, historically it would have been foolish to invest the time or money to create a unique ad or message for each recipient when your audience could number in the thousands or millions.

This is beginning to change: messaging is set to become highly interactive, deeply personalized, and cost- and resource-efficient at scale. Jordan pointed to one example signal, Meta’s new chatbots that let users converse with virtual representations of celebrities like Snoop Dogg and Kendall Jenner.

This technology marks a shift towards providing experiences tailored to individual preferences while potentially gathering more detailed personal information. Although there’s a risk of these interactions feeling manipulative, they also present opportunities for more tailored and relevant communication. 

For example, chatbots could transform internal corporate communications by giving employees personalized responses to announcements based on their specific roles and departments. If your company issues a news release, employees could be notified not only of the news but also of how it affects their specific roles. After all, what’s interesting to someone in marketing or sales in the U.S. may be completely different from what matters to someone working in HR or accounting in Brazil. AI is beginning to make it possible to engage different audiences dynamically in this manner.
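To make the idea concrete, here is a minimal sketch, in Python, of what role-aware tailoring of an announcement could look like. The Employee fields, the prompt wording, and the llm_complete() placeholder are illustrative assumptions on my part, not a description of any specific product Jordan mentioned; in practice, the placeholder would be replaced by whatever LLM service a team actually uses.

```python
# Minimal sketch: tailoring one company announcement per employee role.
# All names here (Employee, tailor_release, llm_complete) are hypothetical.

from dataclasses import dataclass


@dataclass
class Employee:
    name: str
    role: str        # e.g., "Account Executive"
    department: str  # e.g., "Sales"
    region: str      # e.g., "Brazil"


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to whatever LLM API the team uses."""
    return f"[model output would appear here for a {len(prompt)}-character prompt]"


def tailor_release(release_text: str, employee: Employee) -> str:
    """Ask the model to restate a news release in terms of one employee's role."""
    prompt = (
        "Summarize the following company announcement in three bullet points, "
        f"focusing on what it means for a {employee.role} in {employee.department} "
        f"based in {employee.region}:\n\n{release_text}"
    )
    return llm_complete(prompt)


if __name__ == "__main__":
    release = "Acme Corp is acquiring ExampleSoft to expand its analytics offering."
    print(tailor_release(release, Employee("Ana", "HR Manager", "HR", "Brazil")))
```

The point of the sketch is simply that the same source announcement fans out into many individually generated variants, which is exactly the kind of per-recipient messaging that was impractical before generative AI.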

The convergence of deep personalization and facial or emotion recognition technologies opens up even more applications, like overcoming language barriers. Take, for example, HeyGen, a new tool that allows speakers to make videos appear as if they’re speaking a language they don’t speak, using their actual voice and facial gestures—effectively, a useful deepfake.

These advancements point to a future where communication can adapt to its audience, offering a more inclusive and diverse range of interactions. “With AI, we’re no longer limited to a one-size-fits-all approach, and we’re no longer limited to a one-language-fits-all approach,” explained Jordan.

Verifying Authenticity Will Be a Challenge

With the increasing ability to personalize comes the simultaneous challenge of deepfakes becoming almost indiscernible from reality. Verifying authenticity is about to become far more difficult. As communicators, we must figure out how to manage this tension effectively.

AI can mimic someone’s appearance and voice with just a 30-second recording. With the abundance of publicly available videos, from teenagers streaming online to CEOs in interviews, everyone is at risk of being imitated.

And we’re not just talking about individual people here. Entire organizations are at risk of being deepfaked. A recent example highlighted by Jordan involved a scandal where Southwest and American Airlines found that nearly 100 of their planes contained fake parts from a fake company with fake employees (complete with LinkedIn profiles) that disappeared overnight.

Despite the expected increase in deepfakes targeting individuals and companies, some fabrications could be beneficial in engaging with key audiences.

For example, imagine employing AI to develop digital personas, each tailored to communicate with specific audiences more effectively. Suppose you’re doing PR for a martial arts business. When the business shares martial arts techniques through social media videos, a deepfake of a fit 28-year-old could be perceived as more appealing to some audiences than a real-life middle-aged instructor with a less athletic build. Like it or not, perception influences credibility.

Of course, there are obvious ethical dilemmas involved in this example. But if our goal is to communicate for maximum effectiveness, we must start looking at how to engage dynamically and personalize our messages. This means adjusting for geography or language and incorporating traits that may be preferred or idealized by the audience, possibly using digitally simulated speakers or influencers indiscernible from reality.

Disruption In Where and How We Work

As Jordan pointed out, our work environment will likely look much different, not only geographically but also as technology interfaces evolve to become more human-centric.

Although many workers have become accustomed to working remotely since the pandemic, it’s not for everyone. Data shows that workers increasingly value the ability to work a hybrid schedule as much as, if not more than, a pay raise. “Americans are embracing flexible work—and they want more of it,” reads the headline of the recent McKinsey American Opportunity Survey.

A major driver for that is the desire for human interaction. For many of us, the promise of fully remote work and flexibility comes at the cost of direct human engagement. We may choose to go to the office less frequently, but when we do, we yearn to socialize.

We’re seeing businesses adjust to this new reason for coming to work in person by embracing more open layouts and creating more spaces for conversation and group activities, rather than cubicles or offices designed solely for focused, individual work.

But we also need to be aware of the draining effect of staring at a computer screen all day when we choose to work from home. The future of work, says Jordan, is one where technology, including our computers, will adapt to meet us on our terms, with interfaces that will likely reduce screen fatigue.

Think: computers without screens or keyboards. Computers where the individual is the form factor. Emails sent via voice recording, with sentiment embedded in each message. This could carry over from individual messages into corporate communications; imagine news releases with embedded sentiment to convey excitement, remorse, or urgency.

The technology is closer to reality than you may think; researchers have already used functional MRI data to reconstruct approximations of what a person is thinking about. While these models have biases to overcome, the technology could become critically valuable in helping teams brainstorm remotely.

Peer Reactions

During my discussions with other conference attendees following Samantha Jordan’s eye-opening session, several questions emerged:

  • How will agency billing models need to change? As AI takes on more routine tasks, agencies might need to move away from traditional hourly billing to value-based pricing, focusing on human professionals’ strategic and creative contributions. Additionally, the emergence of AI-centric agencies could disrupt the market, offering similar services at a fraction of the cost. This could lead to a competitive landscape where the emphasis is on unique creative insights and strategic depth rather than task execution.
  • How can we do a better job of being prepared to manage disinformation in a viral age? Effective crisis management in the future will need to go beyond quick-response strategies. It will require predictive analytics to identify and mitigate potential crises before they blow up. Expect to see more companies like Alethea emerge that specialize in proactively mitigating disinformation and social manipulation, and expect them to cater further down-market.
  • How culturally sensitive is AI capable of being? While it can create a digital version of a person speaking another language, it’s crucial to know if and when it can identify and flag potentially offensive content, especially when the creator does not speak the language of the output. Assuming the intention is not to offend, we’ll need AI not only to produce multilingual content but also to understand and respect nuanced cultural sensitivities.

Optimistic or Pessimistic About the Future?

It’s clear that our industry is on the cusp of a transformative era. The challenges are as daunting as the opportunities are exciting. Our role will be more critical than ever as these technologies become mainstream.

It’s easy to get carried away thinking about how all this new technology could be used for harm. But as Jordan explained, we can’t let that overshadow our pursuit of optimism, because “while it’s important to have something to run from, it’s much more important to have something to run towards.”

What is certain is that, as communications professionals, we have to become futurists ourselves: look at the signals around us and consider the potential scenarios for how the future might unfold. Then we can know what is plausible in order to build what is desirable.

Because, after all, it’s on us: whether the future is one to be optimistic or pessimistic about depends on how we apply this technology.

Will we navigate these changes with agility, creativity, and a commitment to ethical principles? Or with fear, timidity, and a desire to maintain the status quo?

I, for one, choose to be optimistic about that.

Mike Tomlinson

Mike Tomlinson is the Global PR & Communications Lead at Certinia, whose software seamlessly integrates all aspects of service operations, including service estimation, delivery, customer success management, financial planning and accounting.
