The Tech Marketer’s Guide to ChatGPT and Bard: Are Generative AI Tools All They’re Cracked Up to Be?

About 25 years ago, I attended a conference hosted by then-industry leader Kodak. During the event, company reps boldly predicted that digital cameras would not diminish the demand for traditional photo printing. On the contrary, they boasted, traditional photography would only soar ever higher as a result.

Just a decade later, Kodak ended up filing for bankruptcy, proving that even the biggest companies are at risk if they can’t adapt to disruptive technological advancements.

I’ve been thinking about Kodak a lot lately as I observe the new breed of generative AI and large language models (LLMs) like ChatGPT, Bard, AutoGPT, and others. For me, the takeaway from Kodak’s story is a lesson in the importance of keeping up with technology and embracing change.

I’m as excited as anybody else about the promise of this technology to take away the hard parts of my job. Who wouldn’t be? But I’m also aware—based on experience—of the need to be cautious. The phrase “not quite ready for prime time” comes to mind, even as these tools are touching on more and more distinctly prime time aspects of our lives: customer service, investment advice, even medical diagnosis.

When it comes to tech research, blogging, and content creation, what I’ve found through extensive experimentation is that there are solid reasons not to abandon human content creators just yet. 

For the last three months, I’ve made it a rule to use AI every day, and to modify my own search and writing routine so that I turn to AI before other solutions. And while I’ve occasionally been impressed by the technology’s text-generation capabilities, the results often range from fluff to highly problematic, as we’ll see.

And yes, in case you’re wondering, this article was actually produced by me, one of our writers, and an editor—with a little help from ChatGPT and Bard.

The Promise of LLMs for Tech Content Creation

It’s just like magic. 

You sit down with a prompt and ask it to generate headlines, original content, blog post outlines—and a few seconds later, you have what you need, with perfect spelling and punctuation, saving hours or even days of research, writing, and editing.

Unlike older-generation content spinners that created barely readable text, LLMs produce fluent, polished copy that’s giving every high school teacher nightmares, even as it saves their students hours of sweat, tears, and Wikipedia-based research.

While these tools offer convenience, they also bring potential risks that can harm production efficiency and alienate professional target audiences, not to mention the risk of plagiarism.

For example, asking Bard for a “750-word blog about cloud resource optimization and cost savings (FinOps)” produced a post that was only 500 words long (Bard is notoriously bad at counting words) but was reported as 34% plagiarized by Quetext, a leading originality checker.

Obviously, when you’re discussing FinOps, certain phrases are going to come up again and again, but a more appropriate percentage might have been closer to 5%.

Beyond originality, what are some of the other pitfalls of relying on LLMs for content creation? And what exactly has led me to conclude that there are no shortcuts to great content—that you still need to do the legwork and thorough research to keep audiences nourished by your content? Let’s take a look…

The Pitfalls of LLMs for Tech Content Creation

While AI tools provide convenience, they fall short in a number of ways that I think you’ll agree are critical when it comes to tech content marketing:

Shallow

Today’s LLMs lack the in-depth industry knowledge, experience, and creative finesse required to produce highly accurate and compelling technical content. 

For instance, when I asked Bard for article topics dealing with vulnerability remediation, one of its suggestions was “What is vulnerability remediation?” ChatGPT suggested “The Role of Patch Management in Vulnerability Remediation,” among several other equally basic topics.

At a time when bottom-up adoption rules and it’s more important than ever to reach tech practitioners in an organization, alienating them with basic articles like “What is vulnerability remediation?” would be a serious misstep.

Oversimplified

Relying solely on AI-generated content can result in oversimplification, inaccuracies, and a disconnect from your intended target audience.

For example, when I asked ChatGPT for “the #1 thing stopping IT organizations from achieving an optimal mean time to remediation when it comes to vulnerabilities,” it responded that the biggest challenge was “the lack of a streamlined and efficient vulnerability management process,” which is a circular argument at best (in fairness, it did provide some details to explain why the process might be inefficient).

AI tools often make authoritative-seeming pronouncements and it’s tempting to take these as fact—forgetting that they were never vetted by a single industry expert.

Unreliable

LLMs, in this case both Bard and ChatGPT, tend to lie under pressure, generating total misinformation. As a new report by OpenAI’s researchers puts it, LLMs “exhibit a tendency to invent facts in moments of uncertainty.”

This phenomenon, known as “AI hallucination,” has become widely known recently through some well-publicized misinformation, like factual errors in Bard’s first public demonstration. The problem is that the LLMs are generally quite confident in the results they provide and often sound authoritative, as they have been trained to do, and this could easily mislead non-expert users.

Beyond plagiarism, which we discussed above, LLMs are not always great at looking up timely, up-to-the-minute data (for instance, lawsuits against marketing agencies using AI for content purposes), as we’ll see in the next section.

Sketchy Statistics 

Technical articles are different from opinion pieces because they usually rely on data, including statistics, to back them up. Finding the data to back up your claims is one of the toughest parts of any tech writer’s job. Yet current studies, articles, and statistics are essential because they lend authority to any content.

When I asked ChatGPT, “Please give me 5 current statistics on cloud resource optimization (FinOps)” (Hey, it’s hard to break the habit of saying please and thank you!), I received a long and impressive list of statistics, including:

Study: “FinOps: The Key to Unlocking Cloud Potential” by Flexera

  • On average, organizations waste 35% of their cloud spending due to inefficient resource utilization and poor cost management practices.
  • Only 31% of companies have a mature cloud strategy that includes financial management and optimization practices.

These statistics sound great! At last, I thought I’d found the perfect use for LLMs in content creation. Normally, it takes a long time to find such useful statistics.

But when I started poking around on Google, things immediately started looking fishy. For one thing, I couldn’t find any reference to a Flexera report called “FinOps: The Key to Unlocking Cloud Potential.” Eventually, I asked ChatGPT if it actually existed.

ChatGPT said, and I quote:

 

…please note that I generated that report title as an example and it may not exist in reality. I apologize for any confusion caused. I recommend conducting your own research to find relevant reports and studies on cloud resource optimization to access the most up-to-date and accurate information.

(!)

 

All apologies aside, if a human writer tried something like that, they’d be out of a job.

When I gave Bard a similar request for statistics on wasted cloud resources and cloud overspend, it did a little better.

 

  • 49% of cloud-based businesses struggle to control cloud costs. – Source: Anodot 2022 State of Cloud Spend Report


It came very close with this one. The report is actually called the 2022 Anodot State of Cloud Cost Report, and indeed, 49% of surveyed execs said it was “fairly difficult” (44%) or “very difficult” (5%) to get cloud costs under control.

Then again, Bard also said:

  • Nearly half of respondents said managing multicloud environments is challenging. – Source: CloudZero 2022 State of Cloud Cost Management Report 

When I investigated this statistic, it was indeed listed at CloudZero’s site—in fact, I found the exact phrase on this page. However, CloudZero isn’t citing their own report; they actually give credit for the statistic to Anodot.

So if LLMs can’t be trusted to source meaningful statistics, what are they good for? Trying a different avenue, I decided to ask Bard to create a blog post outline on the topic of cloud cost control:

 

  • Here is a blog outline for the top 3 tips for cloud cost control (aka finops):

Introduction

  • Cloud computing is a popular choice for businesses of all sizes.
  • However, cloud costs can quickly add up.
  • This blog post will discuss the top 3 tips for cloud cost control.

Tip 1: Implement a cloud cost management solution

  • A cloud cost management solution can help you track your cloud spending and identify areas where you can save money.
  • There are a number of different cloud cost management solutions available, so you should be able to find one that meets your needs.
  • Some popular cloud cost management solutions include CloudHealth, CloudCheckr, and RightScale.

Tip 2: Right-size your cloud resources

  • You should regularly review your cloud usage and right-size your resources to match your needs.
  • This will help you avoid overprovisioning and paying for unused resources.
  • To right-size your cloud resources, you can use a cloud cost management solution or manually review your usage data.

Tip 3: Enable cloud governance

  • You should implement cloud governance policies to ensure that cloud resources are used efficiently and securely.
  • This will help you avoid security breaches and data loss.
  • Some common cloud governance policies include:
    • Resource tagging
    • Access control
    • Cost budgeting

Conclusion

  • By following these tips, you can reduce your cloud costs and improve your cloud financial performance.
  • Cloud cost control is an ongoing process.
  • You should regularly review your cloud costs and make adjustments as needed.

 

While this level of content might blow away a time traveler from 1990, it’s not going to do much to win over IT and ops professionals in this century.

The Research Writer Is Here to Stay

Relying solely on LLMs without verifying any information you receive can not only harm production efficiency but also damage brand credibility and erode trust. While the results of today’s LLMs can range from serviceable to downright hilarious, the quality will likely start improving fast.

That said, with improvement will come new sources of unreliability. For instance, monetization may begin to skew results. In the previous cloud cost management example, Bard (currently experimental and non-monetized) listed three current solutions. However, future results may well be skewed by financial considerations such as paid promotions (for instance, citing one company’s “authoritative” study as a way of generating leads). 

Results may also drift toward inaccuracy or bias over time; as with any AI platform, the quality of the output depends heavily on the data ingested, the training, and the models.

There are also growing privacy concerns keeping more than a few businesses awake at night: employees and third parties such as freelancers may be handing over sensitive data that the tool can then use for its own purposes.

For now, based on these preliminary investigations, it’s clear that producing high-quality tech content still demands skilled writing, in consultation with subject matter experts (SMEs), along with diligent research—which may well include ChatGPT and Bard these days, but still includes Google search and other tools.

Any veteran tech marketing writer or blogger familiar with the topic, the ecosystem, and the relevant players will usually know right away whether a source is reliable. They can also assess the quality of a particular resource and fact-check by cross-referencing multiple sources and validating claims against authoritative industry publications, academic papers, and reputable experts. With any type of research, this is the only way to ensure that you’re delivering content that’s accurate, up-to-date, and aligned with the expectations of a professional target audience.

Another benefit of using human writers? Your technology is unique, not generic; to get an AI to generate accurate, detailed content about your product or service, you would need to sit down and explain it to the AI first. That’s not very efficient. When a human writer sits down with your subject matter experts, who can walk them through how the product works, they’ll be better equipped to share your unique selling points with your most important audiences.

Let’s see an AI try to do that. (Good luck!)

How to Use LLMs Responsibly for Tech Content Creation

Realistically speaking, most tech content creators are going to embrace LLMs to some extent. As I mentioned earlier, who can resist any technology that promises to make our work easier?

So since you’re probably going to embrace LLMs anyway, why not use them for what they’re best at? Based on our experience, here are three of the best ways we’ve found to work with LLMs, tasks where they can generally do pretty good work.

1. Generating headlines

Headlines are often an afterthought, but they are super important. Here’s an example using Bard: “Give me three spunky headlines for an article about creating B2B tech marketing content using large language models (LLMs)”:

 

  • How to Use Large Language Models to Create High-Converting B2B Tech Content (SEO Value: 95)
  • LLMs: The Future of B2B Tech Marketing Content (SEO Value: 90)
  • 5 Ways to Use LLMs to Improve Your B2B Tech Marketing (SEO Value: 85)


2. Generating outlines

If you’re not sure where to begin with a topic, LLMs can do a serviceable job of breaking it down and letting you know where to start. Here’s an example using ChatGPT: 

“Give me a point-form outline for a short blog post about regulatory standards in healthcare IT, including compliance, fines & penalties, and finding suitable tools to manage compliance.” [Initial attempts that listed only “regulatory standards in healthcare IT” were far too vague and uninteresting.]

 

I. Introduction

  • Briefly explain the significance of regulatory standards in healthcare IT
  • Set the context for the challenges and consequences of non-compliance

II. Common Regulatory Standards in Healthcare IT

  • Overview of key regulatory bodies and standards (e.g., HIPAA, GDPR, HITECH Act)
  • Highlight the specific requirements and focus areas of each standard

III. Importance of Compliance

  • Discuss the benefits of compliance in terms of patient privacy and data security
  • Explain the role of compliance in building trust and reputation in the healthcare industry

IV. Fines & Penalties for Non-Compliance

  • Provide examples of notable fines and penalties imposed on non-compliant healthcare organizations
  • Highlight the potential financial and reputational consequences of non-compliance

V. Challenges in Managing Compliance

  • Discuss the complexities of navigating and interpreting regulatory standards
  • Address the resource and expertise constraints faced by healthcare organizations

VI. Finding Suitable Compliance Management Tools

  • Explore the importance of leveraging technology and tools for compliance management
  • Discuss key features and considerations when selecting compliance management solutions

VII. Benefits of Compliance Management Tools

  • Highlight how compliance management tools can streamline processes and reduce manual effort
  • Discuss the benefits of automation, reporting capabilities, and centralized control

VIII. Conclusion

  • Summarize the importance of regulatory compliance in healthcare IT
  • Emphasize the need for organizations to prioritize compliance and leverage suitable tools

3. Summarizing text

Sometimes you need to digest a large amount of text in order to create something smaller—for instance, a short explainer video script that distills the key points of a lengthy white paper. Take this 11-page paper from Google called “Unlocking the value of cloud FinOps with a new operating model.” Thrilling reading, but if all you want is a list of 3 do’s and 3 don’ts, Bard did a really good job (we pasted in the entire paper along with the request):

Do’s

  • Establish a cross-functional team. FinOps is not just a finance problem, it’s an organization-wide problem. To be successful, you need a team that includes representatives from finance, IT, and the business units.
  • Get buy-in from leadership. FinOps requires a cultural shift, and that won’t happen without the support of leadership. Make sure your CEO and CFO are on board with the FinOps initiative and are willing to champion it throughout the organization.
  • Start small. Don’t try to do too much too soon. Start by focusing on a few key areas, such as cost visibility and cost optimization. Once you’ve had some success in these areas, you can expand your efforts to other areas of your cloud environment.

Don’ts

  • Don’t treat FinOps as a one-time project. FinOps is an ongoing process. Once you’ve established a FinOps team and implemented some initial changes, you need to continue to monitor your cloud costs and make adjustments as needed.
  • Don’t expect to save money overnight. It takes time to implement FinOps best practices and see results. Don’t get discouraged if you don’t see immediate savings. Just keep working at it, and you will eventually start to see the benefits of FinOps.
  • Don’t give up. FinOps is not easy, but it’s worth it. By following the do’s and don’ts listed above, you can implement FinOps successfully and achieve your cloud cost management goals.
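
This kind of distillation can also be scripted when you have many long documents to work through. Below is a minimal sketch along the same lines as the headline example above, again using the OpenAI Python SDK purely for illustration: the experiment here pasted the paper into Bard’s web interface, so the model name, prompt wording, and file name are hypothetical. Keep in mind the privacy caveat raised earlier, since whatever you send goes to a third-party service.

# Minimal sketch: distilling a long white paper into a few do's and don'ts.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
# The article's experiment used Bard's web UI; this scripted variant, the model
# name, the prompt wording, and the file name are illustrative assumptions only.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def dos_and_donts(paper_path: str, items: int = 3) -> str:
    """Return `items` do's and `items` don'ts distilled from the paper text."""
    paper_text = Path(paper_path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model with a large enough context window
        messages=[
            {
                "role": "user",
                "content": (
                    f"Distill the following white paper into {items} do's and "
                    f"{items} don'ts for practitioners. Be concrete.\n\n{paper_text}"
                ),
            },
        ],
    )
    return response.choices[0].message.content or ""

if __name__ == "__main__":
    # Hypothetical local text copy of the 11-page Google FinOps paper.
    print(dos_and_donts("finops_whitepaper.txt"))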

But for all their strengths, keep in mind the key weaknesses of today’s crop of LLMs, and fact-check all information provided by AI-powered writing tools, especially when it comes to citing sources. Originality checking is also a must; even when you ask for original content, LLMs may include long fragments of text from a variety of sites in their response.

Overall, AI-powered writing tools should be seen as a starting point, not a finished product. Make sure all content is finalized by someone who knows the field and checked by a professional editor.

Final Notes

I keep coming back to the Kodak story I began with, because we’re confronted with the same question every single day: Which of today’s emerging technologies will have the most influence on how we do business five, ten, twenty, or more years into the future? Which can we laugh off, and which are here to stay?

There’s no doubt at this point that AI is here to stay, despite alarmist headlines (in Time Magazine, The New Yorker, and others) calling on the makers of ChatGPT and its peers to put the brakes on generative AI.

As tech marketers and content creators, we should constantly be seeking efficient ways to enhance our productivity. But while AI-powered tools offer convenience and speed, we have to navigate their present and future limitations with caution and mindfulness.

I’ve heard numerous stories from our clients about their attempts to generate content themselves using ChatGPT and other AI tools; in the end, they have almost always gone back to “traditional” content creation. When I asked one of our startup founders about their experience and what made them come back, they noted that AI tools weren’t a replacement for the “creative human mind.” 

 

At IOD, we have also encouraged our writers to experiment with tools like ChatGPT. One of our senior tech marketing writers sent an article back with the comment, “I tried using ChatGPT to simplify and summarize the article, but ended up rewriting 90% of it. It just didn’t have any pizzazz.”

By understanding the role of AI tools as additional resources and placing the emphasis on trusted references and human expertise, we can keep delivering exceptional content that upholds the highest standards of integrity, originality, and quality while providing value to audiences at all levels.
