It’s hard to believe that it’s been just over a year since OpenAI’s ChatGPT was released, taking the world by storm. Not to be outdone, this past February Google announced the launch of Bard (now known as Gemini). And in March, AI safety and research company Anthropic released Claude.
So it’s no surprise that terms like “prompts” and “hallucinations” have become part of the popular lexicon.
At this point everyone is excited about the vast economic opportunities that generative AI brings to the table. Global management consulting firm McKinsey noted that, “Generative AI’s impact on productivity could add trillions of dollars in value to the global economy” and “will have a significant impact across all industry sectors.”
On the other hand, global market research company Forrester believes that “Automation and AI overall will replace 4.9% of US jobs by 2030.” And if you want to get a sense of how things can go wrong, the AI Incident Database has been gathering data about mishaps since 1983.
If you’re curious about what all of this means for PR and marketing professionals, then check out some of the highlights from a recent livestream I had with communication and management professional Andrea Weckerle, who teaches in the Master’s in Integrated Marketing Communications program at Georgetown University’s School of Continuing Studies.
Everyone Needs to Get Comfortable with AI Because Things Are Moving Fast
Pointing to the urgency of learning how to use AI tools, Andrea says, “We need to be conversant in this immediately and learn the tools and use them on a daily basis – business, personally.” This is all the more important considering its ubiquitous nature. “It’s literally baked into everything we do,” she notes.
But trying to keep up with AI while staying on top of important issues surrounding privacy, copyright, and other things can be challenging. My recommendation is that “We have to move really quickly, but contemplatively.”
Trying to find the right balance between innovation and caution isn’t easy. Thankfully the Biden administration is trying to do that with the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It outlines eight guiding principles and priorities, among them the need for AI to be safe and secure; promoting responsible innovation, competition, and collaboration; creating AI policies that advance equity and civil rights, while also protecting privacy and civil liberties; and ensuring that responsible development and use of AI supports workers.
Practical Guidance for Communications Professionals
As someone who earned my Accreditation in Public Relations many years ago, I was excited to see the Public Relations Society of America come out with the guidance document Promise & Pitfalls: The Ethical Use of AI for Public Relations Practitioners.
Andrea agreed, saying, “I was actually really, really impressed with what they did,” noting, “it was really comprehensive and really insightful.” Its practical value lies in serving as a guiding framework for best practices in preventing and managing potential ethical challenges that may arise from the improper use of AI.
“What I liked about the framework is that it really nicely balances the exciting opportunities that AI presents communications professionals with the very real dangers that irresponsible or self-serving use can lead to,” she added.
We talked about the document’s long list of examples featuring both improper AI use and guidance on proper use. It’s well worth every communicator’s time to read.
Generative AI in the Workplace
We also covered the Wall Street Journal’s recent article The Do’s and Don'ts of Using Generative AI in the Workplace. These include:
• Being aware of the inherent bias of generative-AI models trained on publicly available data sets
• Making sure you don’t share sensitive business information with public programs
• Not blindly believing AI-generated content without verifying what it’s putting out
• Not using AI-generated content without disclosure
• Always being wary of copyright infringement
For a broader discussion about AI and copyright law, I recommend people look at the Congressional Research Service’s report Generative Artificial Intelligence and Copyright Law that came out in September. It explains the circumstances under which works containing AI-generated material may be copyrighted, noting that authors can claim copyright protection only for their own contributions to such works.
The bottom line is that right now things are still rather unsettled, so creators and communicators of all types need to keep their finger on the pulse.
Ethics and Critical Thinking Skills Are Important
In addition to demanding that tech companies and governments set parameters around responsible AI use, as individuals we also play an important role.
Andrea drives this point home with her comment that, “You can have the best tools out there…but it doesn't abdicate our responsibility to hone our own critical thinking muscles.” At every single point we should “look at things critically and say, well, wait a minute, is this something I can stand behind? So critical thinking skills even in an AI world – or possibly even especially in an AI world – are going to be more important than ever.”
Building and safeguarding trust is vital too. “The last decade has seen a huge degradation of trust across all spectrums and that's really really bad because it creates even greater division and pretty soon you don't know who to believe,” she says. “So building trust is going to be super super important.”
Trust is important on both a personal and professional level. “Those organizations and brands and companies that are building trust now in the AI world, they're going to have such a competitive advantage over others that are footloose with that sort of thing,” she adds.
To hear more insights from me and Andrea, watch the full livestream.