AI content marketing do’s and don’ts
Tools for AI-generated content can provide a valuable resource for financial advisors. But advisors need to understand how to create AI content effectively and responsibly.
Let’s face it. It takes time and talent for advisors, or their support staff, to create effective marketing and educational content.
Alternatively, advisors could either hire a writer to create this content or subscribe to third-party, ready-made content providers.
But now an increasing number of advisors are bypassing these often expensive “human” resources to use the free or low-cost content-creation capabilities of generative AI chatbots like ChatGPT and Copilot.
For advisors who are looking for low-cost ways to produce content, AI chatbots and image creators can generate competent verbiage and visuals for use on websites, blog posts, email marketing messages, social media posts, print and online ads, and marketing materials.
In many cases, AI-generated content may deliver higher levels of engagement than human-written content. For example, one email marketing platform claims that chatbot-written messages sent through its system delivered higher click rates than manually written ones.
But just because AI can make it easier and cheaper to get marketing content out there, should you be using it?
The answer may be a qualified “Yes”—but only if you fully understand the reputational and legal risks of using AI ineffectively or irresponsibly.
Going beyond generic
Raw chatbot-generated content isn’t known for originality or style.
But you can help the bot inject a little more personality into its responses if, when asking it to generate content, you direct it to craft the output in a way that reflects factors like the following (a sample prompt sketch follows the list):
- Your level of professional experience and knowledge.
- The demographic factors of your target audience (their age, educational level, professional status, financial investment knowledge).
- The tone the response should have (conversational, formal, technical).
- The desired length of the response.
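To make this concrete, here is a minimal sketch of how those directives might be bundled into a single request using the OpenAI Python SDK. The model name, audience details, and directive wording are all illustrative assumptions, not a recommendation of any particular tool or settings.

```python
# A minimal sketch: wrapping the directives above into one chatbot request.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and directive values are illustrative.
from openai import OpenAI

client = OpenAI()

directives = """
You are drafting a blog post for a financial advisor with 20 years of
retirement-planning experience. The audience is pre-retirees aged 55-65
with moderate investment knowledge. Use a conversational but professional
tone. Keep the post to roughly 600 words.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whatever model you use
    messages=[
        {"role": "system", "content": directives},
        {"role": "user", "content": "Write a post on why market timing rarely works."},
    ],
)

print(response.choices[0].message.content)
```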
A more productive use of AI is to improve the quality of content advisors create themselves.
For example, at Virginia-based Monument Wealth Management, AI is often used to improve original content written by members of its team, according to Chief Marketing Officer Emilie Totten.
“In most cases, we’re asking AI to optimize our content for things like readability, conciseness, and length. We can even maintain our brand ‘voice’ by prompting the bot to deliver its result using the same tone of voice as the original. At the end of the day, it’s about the bot improving the content our people create, not the other way around,” she says.
AI can also be used to provide ideas that can help advisors shape their thoughts for articles they write themselves.
For example, Ivan Illan, founder and chief investment officer at California-based Aligne Wealth Advisors Investment Management, uses Vega Minds, an AI platform built for financial advisors, to help flesh out ideas for articles he writes for Forbes.
He doesn’t rely on the platform to provide market or economic research for his queries. Instead, he feeds data he’s collected on his own into his query along with the overall theme of the article and asks the bot to provide insights on the subject based on its interpretation of the research.
“Using AI this way is valuable because it can often give you context or a framework, based on its perception and collection and ingestion of information, that you hadn’t necessarily thought about,” he says.
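Vega Minds is a proprietary platform, so the sketch below instead shows the general pattern (supply your own research, then ask the bot only to interpret it) using the same generic chat API as the earlier example. The theme, research notes, and prompt wording are placeholders.

```python
# A hypothetical sketch of the data-grounded pattern: the advisor supplies
# the research; the bot is asked only to interpret it. All data below is
# placeholder text, not real research.
from openai import OpenAI

client = OpenAI()

my_research = """
[Paste your own collected data here: yield figures, fund flows,
valuation metrics, etc. Placeholder only.]
"""

theme = "Why dividend strategies may matter more in a sideways market"

prompt = (
    f"Article theme: {theme}\n\n"
    f"My research notes:\n{my_research}\n\n"
    "Based only on the research above, suggest three angles or frameworks "
    "I could use to structure the article. Do not introduce outside data."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```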
Questionable accuracy
Just because a chatbot’s response seems accurate doesn’t mean it is. After all, chatbots operate on a “garbage in, garbage out” basis: the quality and reliability of their responses are only as good as the information they’re trained on, which can come from anywhere on the web, including sources that are incorrect or out of date.
Chatbots can’t reliably fact-check their responses, which makes them just as likely to provide incorrect or false information as fake news sites and social media platforms.
Don’t believe me? Here’s an example.
Let’s say I wanted to use ChatGPT to write an article that provided historical perspective for investors concerned about market volatility. I started by asking, “Since 1900, how many times has the S&P 500 posted annual negative returns at least two years in a row?”
This was its unedited answer:
“As of August 2024, the S&P 500 has posted negative returns in consecutive years a total of four times since 1900. Here’s a detailed breakdown:
- 1939–1940: The market faced declines due to the early impacts of World War II.
- 1973–1974: This period was marked by the oil crisis and a severe recession.
- 2000–2002: The burst of the dot-com bubble and the subsequent recession led to negative returns over these three years.
- 2007–2008: The financial crisis of 2007–2008, triggered by the subprime mortgage crisis, caused the market to decline significantly.”
If you didn’t have S&P 500 historical data at your fingertips, you might be willing to accept this answer at face value.
But you shouldn’t, because it’s wrong.
ChatGPT left out the four consecutive years of negative S&P 500 returns at the height of the Great Depression, from 1929 through 1932. The S&P 500 also posted a negative year in 1941. And it incorrectly stated that the index posted negative returns in 2007, when in reality the year ended with a small gain, according to Slickcharts and Macrotrends. (Note: While the current format of the S&P 500 was introduced in 1957, comparable historical data has been developed for prior years.)
Imagine if you had posted an article about historical market downturns based on this answer and one of your best clients, or a member of your professional network, or (heaven forbid) a competitor pointed out your error. If you use a chatbot to deliver this kind of market data or other statistical information, make sure you ask it to provide a link to the source of the data. Then double-check that the bot’s answer actually reflects what the source says. Even better, footnote the answer in your article with a link to that source.
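One practical way to guard against this particular kind of error is to keep your own table of annual returns, taken from a source you trust, and let a few lines of code find the runs for you. A minimal sketch, with placeholder values you would replace with verified figures:

```python
# A sketch for double-checking the chatbot's claim yourself. Fill the table
# with annual S&P 500 returns from a source you trust (e.g., Slickcharts);
# the values below are placeholders, not verified figures, and the logic
# assumes the table covers every year with no gaps.
annual_returns = {
    1929: -0.08, 1930: -0.25, 1931: -0.44, 1932: -0.08,  # placeholders
    1933: 0.50,
    # ... fill in every year through the present from your source ...
    2007: 0.05, 2008: -0.37, 2009: 0.26,
}

def negative_runs(returns, min_length=2):
    """Yield (first_year, last_year) spans of min_length+ consecutive negative years."""
    run = []
    for year in sorted(returns):
        if returns[year] < 0:
            run.append(year)
        else:
            if len(run) >= min_length:
                yield run[0], run[-1]
            run = []
    if len(run) >= min_length:
        yield run[0], run[-1]

for first, last in negative_runs(annual_returns):
    print(f"{first}-{last}")  # prints 1929-1932 with the placeholder data
```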
Does using AI-generated content verbatim online hinder search engine optimization (SEO) results?
It’s not easy for most advisors’ websites to appear on the coveted first pages of search results.
That’s why many have embraced various search engine optimization (SEO) techniques that load website content with targeted keywords and phrases that increase the chances that these pages will appear closer to the top of search results.
Chatbots make it easy to create and publish SEO-heavy content verbatim with relatively little effort. And while Google doesn’t necessarily penalize websites that use AI-generated content, it is trying to train its legions of web-crawling bots to give higher search result positions to online content created by humans.
At the heart of Google’s approach is its Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) framework.
Without getting into the weeds, Google is more likely to give higher E-E-A-T ratings (and thus higher placement in search results) for online content it can “prove” was written by a human being whose subject matter expertise makes them a trustworthy and reliable source for this information.
Ironically, it’s humans, rather than AI bots, that make these E-E-A-T judgment calls.
Google claims it can identify unaltered or slightly tweaked AI-generated content, which might lower a website’s E-E-A-T score.
Wordsmithing—it’s the right thing to do
If you use AI to create content, go through the output and tweak it to reflect your firm’s voice (or your own) while you’re fact-checking its accuracy. If you don’t have the time or desire to do it, hire a freelance writer to do it for you.
What about AI images?
The great thing about AI image generators is that they often produce very compelling photographs and illustrations you might not be able to find on your own (or at least not for free).
For example, I recently co-wrote an educational blog post providing answers to common questions terminated employees have about their unvested stock options. For the banner image, I used Microsoft’s AI image generator to produce one based on my prompt, “Create a photograph of people leaving their office after being laid off from a company and worrying about their unvested stock options.”
Here is an example of one image the bot created:
[Image created by Microsoft Designer]
I personally have fewer objections to the use of AI-generated images in marketing materials than I do with AI-generated text. As long as the information shown in the image isn’t factually false, it should be generally OK to use.
However, one ethical question that has been raised about AI image generators is whether they’re trained on existing copyrighted images or on images used without authorization, such as those of children.
For example, Human Rights Watch has reported that one popular AI image dataset “scraped” images of children from websites and YouTube videos even when these platforms prohibited scraping and parents used privacy settings. Advisors who are concerned about this might want to limit their use to reputable AI image creators like Microsoft Designer and Google’s Gemini platform.
To disclose or not to disclose?
It’s tempting to simply take an AI-generated article or image, toss it on your website without alterations, and put your byline on it.
But this is risky. While purely AI-generated content generally isn’t protected by copyright (it’s created by software rather than a human author), savvy readers can identify AI-generated work and could publicly call you out for taking credit for content you didn’t create yourself.
If you don’t have the time or skills to thoroughly rewrite content created by a chatbot, then it’s important to give AI authorship credit where it’s due if you use it verbatim. This can be as simple as adding a line at the end that says something like, “This article was written by ChatGPT.”
If you slightly tweaked the chatbot’s content, you might use a co-authorship disclosure, such as, “This article was authored by [your name], using source content provided by ChatGPT.”
Using AI for client communications
Chatbots offer an easy way for advisors to create content for email and other client communications. While this may be fine for blast emails, such as invitations to client events, you should think twice about using AI to create emails addressing specific client situations.
Why? Because the chatbot captures everything that you enter and potentially stores it for later use.
So, let’s say you asked ChatGPT to write an email to a client summarizing items on their to-do list for initiating a backdoor Roth IRA conversion. That confidential information could be retained by the chatbot provider and repurposed in answers to questions such as, “What are the usual steps an investor must take when initiating a backdoor Roth IRA conversion?”
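If your team does draft with general-purpose chatbots, one common mitigation is to strip obvious client identifiers before anything leaves your systems. The sketch below is deliberately naive (real PII detection takes far more than a few regular expressions, and client names would require more sophisticated handling) and is no substitute for a firm-level policy:

```python
# A naive, illustrative sketch: scrub obvious client identifiers from a draft
# before it is sent to any external chatbot. This is a pattern demonstration,
# not production code; note that names (like "John" below) slip through,
# since catching them requires more than simple regexes.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # U.S. SSN format
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{8,17}\b"), "[ACCOUNT#]"),              # long digit runs
]

def scrub(text):
    """Replace obvious identifiers with placeholder tags."""
    for pattern, tag in REDACTIONS:
        text = pattern.sub(tag, text)
    return text

draft = "Reminder for John: fund account 4432187765 before the conversion. SSN 123-45-6789 on file."
print(scrub(draft))
# -> Reminder for John: fund account [ACCOUNT#] before the conversion. SSN [SSN] on file.
```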
Some firms have established ground rules on this issue.
For example, at Monument Wealth Management, the rules around using AI for client communications are clear, according to Vice President and Partner Jessica Gibbs.
“Our AI policy states that no one should feed ChatGPT or any AI bot any sensitive client information or ‘secret sauce’ around how we invest money. Only content made for public-facing marketing purposes can be optimized with AI, and it should never be used to create or optimize any personalized advice to clients,” she says.
Beyond content creation, a growing number of advisory firms are using AI-powered marketing tools to automatically generate and execute targeted email and social media campaigns based on demographic and account-level information stored in CRM systems.
For example, when certain clients and prospects reach age 65, the platform might automatically write and send them several messages with links to blog posts on Medicare enrollment and record their engagements.
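Under the hood, triggers like this are usually simple rules evaluated against CRM fields. A hypothetical sketch of the age-65 example, with invented field names and a stand-in send function (real platforms wire this up through their own APIs and campaign builders):

```python
# A hypothetical sketch of a CRM-driven trigger: when a contact turns 65,
# queue a Medicare-enrollment email series. Field names, the contact records,
# and send_email() are invented for illustration.
from datetime import date

def age(birth_date, today):
    """Whole years between birth_date and today."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def send_email(address, template):
    print(f"Queued '{template}' for {address}")  # stand-in for a real send

contacts = [  # stand-in for records pulled from your CRM
    {"email": "client@example.com", "birth_date": date(1960, 3, 15), "sent_medicare_series": False},
]

today = date.today()
for contact in contacts:
    if age(contact["birth_date"], today) >= 65 and not contact["sent_medicare_series"]:
        send_email(contact["email"], "medicare-enrollment-series")
        contact["sent_medicare_series"] = True  # record the engagement back to the CRM
```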
While these tools make it easier to deliver more personalized contacts with minimal effort, advisors need to be careful. The SEC’s and FINRA’s concerns about advisors’ use of AI mainly focus on curtailing potential conflicts of interest if and when advisors use predictive data analytics when interacting with clients and prospects. The SEC is also going after firms that make false claims in their sales and marketing materials about their use of AI tools in their investment processes.
The AI review process starts with your Compliance department
As of this writing, neither the SEC nor FINRA has yet laid down definitive ground rules for responsibly and ethically using chatbots or image creators to generate client-ready content.
So, for now, your firm’s advertising compliance department needs to have the first and last word on properly using AI for marketing purposes.
Make sure you document AI usage when submitting these pieces for advertising review. Then the ball is in their court to tell you what you can, can’t, and should do.
If they haven’t published AI usage policies and best practices (understandable, since this is a brave new world for just about everybody), ask them to get into gear.
In the meantime, if you’re thinking about using AI-generated text or images, consider adopting the mindset of an educational fiduciary. Ask yourself: Who will benefit most from this machine-made content, your clients and prospects or your practice? While both sides can benefit when AI content is used intelligently, it behooves you to always put the interests of those you serve first and foremost.
The opinions expressed in this article are those of the author and the sources cited and do not necessarily represent the views of Proactive Advisor Magazine. This material is presented for educational purposes only.
Jeffrey Briskin is a marketing director with a Boston-area financial-planning firm. He is also principal of Briskin Consulting, which provides strategic marketing and financial content development services to asset managers, TAMPs, and fintech firms. Mr. Briskin has more than 25 years’ experience serving as a marketing executive and financial writer for some of America’s largest mutual fund companies, DC plan record keepers, and wealth-management firms. His articles have appeared in Pensions & Investments, Advisor Perspectives, The Wealth Advisor, Rethinking65, and Alts.co.