
AI Marketing: Where Are We Now, Where Are We Going?

The current buzzword of buzzwords: AI. “It’s going to replace the laptop class.” “It’s overrated.” “It’s going to rearrange the social contract.”

Opinions run the gamut, with the founders of Google and Anthropic claiming it will shortly usher in a new era while Apple’s Tim Cook says, “Not so fast.”

What exactly is going on? How can we use AI to improve results? We’ll start this article by answering broader questions about the capabilities and limits of AI overall, then narrow down to more specific questions about using AI in marketing.

AI: The Limits of Data, Truth, and Energy

Despite grandiose claims, AI is fundamentally limited by several factors; actually, the same ones as humans. Right now, when it comes to innovative, creative, or complex thinking, the human brain is the best organic “computer” we know. As we’ll see, in certain respects, it’s still orders of magnitude beyond what computers can do.

Are humans perfect? Do people ever make mistakes? If you put a problem in front of five people at your company, will they all arrive at the same solution?

We can answer “no” to each of those questions for people and also for AI. The reason that the best organic computer ever created over millions of years of evolution has to answer “no” to the above questions is the same reason AI answers “no” as well.

The challenges people face in solving complex problems are not necessarily that they aren’t smart enough or they’re not as good as computers at analyzing massive amounts of data quickly. It’s that there are fundamental limitations on the acquisition and application of knowledge. 

Believe It or Not, Data Doesn’t Have All the Answers

One of the advantages of AI is that it can scour and analyze massive data sets much more quickly than humans. The human brain is an excellent pattern recognition machine and AI builds models of the world in a similar pattern-recognizing fashion. AI simply does better with large data sets and processes them much faster than humans.

But our decisions can only be as good as the data available to us. More importantly, sometimes there is no relevant data.

We’ve already seen this arise with AI. Examples abound, from chatbots being surprisingly bad at basic math to Microsoft’s Tay chatbot developing Nazi sympathies within a day of its release.

Like humans, AI can only build models based on the data it has available. And that data can be confusing, contradictory, or just plain junk. No matter how big the data sets or how fast AI can process that information to find patterns, data might not contain the answer or might lead to false conclusions. The old adage: garbage-in, garbage-out.

Take it a step further into the unknown. Ask yourself and your executive team how they can grow the company by 20% year over year for the next five years. What’s the answer? What data could you possibly use to figure out that answer?

Like you and your team, AI can’t predict the future because the future is uncertain and full of millions of variables. Could AI have predicted the iPhone? What data existed that said, “what’s missing from our world is smartphones”? AI is never going to be any better than a human at leading company strategy because an unknowable future is equally unknowable to both intelligences.

A simple fact so often missed in today’s corporate world obsessed with data is that data only validates past actions. It can point to opportunities as identified in past data, but doesn’t guarantee future results. It’s the same reason there’s no AI out there dominating stock market returns: past results do not guarantee future returns. Put another way, data compiled from previous actions or events does not predict future outcomes.

In an example closer to home, let’s say we can see in hospital utilization data that Latino residents of some city don’t often use services. But this doesn’t tell us why. And it also doesn’t tell us that, if we build something else, they’ll use that either.

To determine why Latinos aren’t utilizing the hospital, we’ll need to conduct interviews and tests. In short, we need to gather relevant data because the data we need doesn’t currently exist. Maybe we hear from community members that they prefer Spanish-speaking doctors. Spanish-speaking doctors are hired and, still, utilization stays low. We found some data, we acted on it, and we did not get the result we were hoping for. AI wouldn’t be able to do anything different with that data set than the human executives did.

The point here is that the data needed to solve this problem or push into a supposed growth area doesn’t exist. Even after we collect more data, the solution that both humans and AI would have recognized could turn out to be ineffective.

What about weighting and prioritizing data? Let’s say, in the field interviews, Latino residents mention two issues: a lack of Spanish-speaking doctors and a preference for small clinics closer to home. The hospital has enough money to do one or the other, hire more doctors or build a small clinic, but not enough for both. We have two data points, but need to prioritize one over the other. AI is not any better equipped than humans to figure out which direction to try first because there is no historical data upon which to base the decision. And, unlike humans, AI cannot yet use deductive reasoning, so it’s actually less qualified to help with these types of questions.

The Elusive and Ephemeral Truth

Ok, let’s say we do have data available to answer a given question. How do we determine something is true? Let’s take a look at the intense intellectual debates which arose out of covid: lock-downs, masks, and social distancing. Whatever side of the fence you were on, there were a large number of extremely well-educated scientists and researchers on the other side of that fence.

There was data everywhere. Yet, scientists and researchers educated at Stanford, Harvard, and Oxford had opposing views even within medical and epidemiology departments. Stanford just held one of its famous debate conferences on Pandemic Policy in November of last year with extremely well-educated, experienced, and credentialed scientists lining up on both sides.

Perspectives vary based on the data being looked at and the weighting of various values, principles, or even specific demographics of people. All the data in the world plus several years of hindsight, and still the smartest among us are sharply divided.

When we try to arrive at Truth or just “the right answer,” we have to make decisions and judgments as to what data to trust and how to prioritize it, as we discussed above.

We can look at someone’s experience, their credentials, where they were educated, their track record of success, the amount of consensus others have with their views, and so on. But none of these provides us a guarantee of accuracy.

Herein lies another fundamental limitation in knowledge and decision-making which AI can’t overcome.

How is it that some people believe the Earth is flat or that the moon landing never happened? Ultimately, they are using certain pieces of information (data) and prioritizing some information and perspectives over others, just as those who believe the Earth is round are. A core belief such as, “science can’t be trusted” causes non-scientific sources to be of greater weight in the decision-making process.

AI comes to conclusions and develops core beliefs in the same way. Depending on the information it uses and how it prioritizes the validity of various sources, its conclusions may be as far-fetched as any human’s. Some people are touting a future of super intelligent AI making all of these really smart decisions. However, a future in which AI flat earthers are arguing against AI round earthers is just as likely. It all depends on the information they access and how they prioritize the relevance and authority of each new piece of information.

A bigger problem with today’s AI is that it hallucinates, which is a nice way of saying that it makes things up. Tell it to read a locked PDF, then summarize it for you. Instead of admitting it can’t access the PDF, it’ll simply make up a summary based on the title. When asked if it really read the text (because you know it didn’t), it may or may not admit to not having read it. This is a large issue that remains unsolved in today’s models.

There Isn’t Enough Energy to Go Around

The final fundamental limitation of AI is energy consumption, or “compute” in AI speak. Did you know it cost an estimated $10 million to train Deep Blue to become a chess champion? $10 million! Just to teach a computer to become a chess master and nothing else!

AI is primarily trained as either a Large Language Model (LLM) or a Reinforcement Learning (RL) model. Both require massive amounts of energy to train.

This is one reason ChatGPT often fails to perform tasks well or accurately. Ask ChatGPT to scan a 200-page website to identify opportunities for SEO improvement and it’ll get back to you in less than a minute with an answer. Most web pages take 3-6 seconds to load. Assuming optimal load times, it would take 10 minutes just to load the pages, much less scan and analyze them. 

The compute time required to perform tasks is large and ChatGPT limits itself (although it will refuse to tell you this). We’ve tested this extensively, and the most pages that ChatGPT can scan and analyze at one time is six. Give it any more than that and it’ll lie (hallucinate in AI speak) that it scanned them all, but won’t be able to give you accurate answers to queries based on content on the pages it supposedly read.

While computers and AI can perform certain calculations in a blazing fast manner with minimal energy consumption, once you start to get into larger data sets, model training, and complex questions, compute time begins to skyrocket. This is why supercomputers have to be supercooled. They expend so much energy that the heat has to be managed or they’ll overload. That’s a large energy cost.

We don’t know about you, but most organizations don’t have $10 million to spend on training an AI model. Costs will certainly come down as the technology improves, but there’s still a long way to go. You could pay for a staff member’s entire college degree and still not come close to the cost of teaching AI to become a chess grandmaster.

Eventually, the use of AI versus human talent will come down to not just ability, but cost. For some tasks, such as big data analysis, math problems, and standardized coding logic, AI will likely be more cost-effective. For innovative, creative, or more complex decisions and strategy, humans will most likely be the better bet for quite some time.

One thing that organic life on Earth has evolved to do incredibly, incredibly well is the processing and utilization of energy. Just think that, after a single day of meals, a human can survive for up to 30 days before running out of energy and dying! A single sandwich can power our bodies and minds through hours of demanding work. That’s some highly efficient and cost-effective energy use.

On the flip side, humans need to sleep, they get crabby, and life events can upset them and lead to low productivity. In these cases, an indefatigable AI bot without emotions can prove valuable. It just depends on how much it costs for that bot to continuously do the job.

AI General Use Cases

While we all eagerly await a future where AI models are battling each other over whether or not the Earth is flat or if the moon landing happened or not, what can we use AI for today?

The answer is mostly routine tasks with limited variation. 

Can AI perform Google searches and succinctly produce a summary of results? Absolutely.

Can AI have a basic conversation with someone to take an order at a fast food restaurant? Not quite yet, but should get there soon enough.

Can AI code simple programs, apps, or sections of larger code? Yes.

Can AI perform rules-based automations? Yes.

Can AI determine strategy or predict the future? No.

Can AI innovate? No.

A point to be aware of here is that the ability of AI at today’s level of development is use-case specific. Claude is fairly solid at coding, ChatGPT at basic searches and summaries, Grok at more research-oriented queries. 

This is because of how the models were trained. Just like humans, who are good at what they study and practice, AI needs a lot of study and practice. Humans also need a lot of feedback: “That’s good information; that’s bad.” This is how we try to identify valid versus invalid data points. You got the correct answer or you didn’t. AI has to learn through feedback loops just like humans do.

In AI-lingo, this is called pretraining. Pretraining requires feeding specific data to the AI model. Again, garbage in, garbage out. If the human(s) doing the pretraining feed it bad information, it might just become a flat earther. Ultimately, AI is fundamentally limited by the expertise of the humans building it because the humans building it are the ones that determine the data sets to train it on.

The way AI is talked about today, those who aren’t as knowledgeable simply equate AI with “smart.” This is incorrect. What actually matters is what data the company training the AI bots has access to and what information was chosen to feed it. It also depends on what stage of the learning process the AI bot is in. Early on, it hasn’t gotten much feedback, so it will make many more mistakes than a more experienced bot.

Once pretraining is complete, the AI model is a generalist, like someone who just completed university. While they have a lot of general information and maybe some applied knowledge, they still need training specific to your company. This is called post-training and consists of refining the AI model’s knowledge and processes to more specific use cases.

Here again though, it is garbage in, garbage out. Just like you need someone who knows how to deliver trainings at your company, you need someone who understands prompt engineering and post-training in order to train an AI bot to work for you. Train it poorly, and you’ll end up with an ineffective or inaccurate bot, no different than an employee who was poorly trained in onboarding.

We can see here how the use of AI is not simply plug and play. Either your company needs to know how to appropriately train and develop bots, or your company needs to pay another entity to create and train the bots for your use cases. This all comes down to time, internal resources/capabilities, and cost. It might (and, currently, often will) be the case that it’s easier and cheaper to hire people.

AI Marketing Use Cases

As we’ve seen, AI isn’t going to solve creative or strategy challenges. It’s not going to push your organization in new directions. What AI is good at is copying if it’s been fed enough accurate examples with solid feedback. It’s also good at following rules-based processes or patterns. 

Here are the best uses of AI in marketing.

Automations and Repurposed Content

AI can easily manage routine workflows and, as we’ve seen, it can easily make derivative content. Whenever you write a new blog post, ideally, that blog post goes out on Facebook, X, Pinterest, LinkedIn, and any other platform you might be on. 

That’s actually a lot of work. Facebook needs a conversational tone, while LinkedIn needs a professional one. Pinterest needs an eye-catching image to go with it. All of the posts actually need images of various dimensions and each platform has its own character limits. It can take a human up to an hour to repurpose all that content and then post it. Even using a social media management tool, each post iteration still needs to be loaded in.

All of this can be done by AI. The posting of a new blog can trigger an AI bot to generate a post tailored to each platform with a correctly sized image. Will you end up with the odd grammatical construction or spelling error? Yes, but humans have an error rate of 3-5% when typing content at scale as well. Read every article in the Wall Street Journal or Washington Post and you’ll catch spelling and grammar errors every single day. That was the case even when they had full-time copy editors whose job it was to catch such errors. In that regard, AI isn’t going to make mistakes at a higher rate than the average person and it saves a ton of time. For some clients, we’re pushing out a blog post a day. That’s 30 hours of time saved per month, 30 hours where that employee can be working on something more productive. 
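As a rough illustration, the rules-based half of that workflow (prompt construction and per-platform character limits) can be sketched in a few lines of Python. The platform specs and prompt wording below are hypothetical, and the resulting prompts would be handed to whatever LLM API you use:

```python
# Hypothetical per-platform specs; real limits and tones come from your
# own channel guidelines.
PLATFORMS = {
    "Facebook": {"tone": "conversational", "char_limit": 63206},
    "LinkedIn": {"tone": "professional", "char_limit": 3000},
    "X": {"tone": "punchy", "char_limit": 280},
}

def build_prompts(title: str, summary: str) -> dict:
    """One tailored prompt per platform, ready to send to the LLM."""
    return {
        name: (
            f"Rewrite this blog summary as a {spec['tone']} {name} post "
            f"under {spec['char_limit']} characters.\n"
            f"Title: {title}\nSummary: {summary}"
        )
        for name, spec in PLATFORMS.items()
    }

def enforce_limit(post: str, platform: str) -> str:
    """Hard truncation as a safety net after generation."""
    limit = PLATFORMS[platform]["char_limit"]
    return post if len(post) <= limit else post[: limit - 1] + "…"
```

The trigger itself (new post published, run the bot) is just a webhook or scheduled job; the value is that none of this per-platform tailoring needs a human once it’s set up.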

Take a listen to this podcast. It’s incredible and 100% AI-generated. But AI can’t make something like that on its own. It’s derivative. Our human team had to create the original high-quality content we fed to AI, including the curated research and analysis, and a team trained in prompt engineering had to manage the prompts to ensure it output the right result. While the podcast is AI-produced, it required over 30 hours of human research and development before AI ever entered the picture.

Copyright-free Graphic Iterations

Sourcing stock photos or drawing original illustrations is a pain. It can be very hard to find just the right stock image. When we build a website using stock photos, we can spend up to 10 hours just sourcing and editing them. With the right prompt engineering, AI can produce decent images in less time.

Illustrations can also take a graphic designer a long time. Rather than pay a designer for 5 hours of drawing something that matches your brand, AI can build iterations quickly and cheaply.

Here’s an example from some internal generation software we use, prompted with “woman speaking with her therapist.” As we can see, this could be a counseling setting, but the woman holding the pen seems a little off.

[Image: AI-generated photo of a woman speaking with her therapist, pen in hand]

Next, we told the AI to remove the pen. It couldn’t quite figure that out. The woman staring off into the distance is also still a bit odd.

[Image: second iteration; the pen remains and the woman stares into the distance]

One last iteration instructed the AI to include “no pen” and have the patient “looking at the therapist.”

[Image: third iteration; the pen is gone, but a pencil appears at the woman’s wrist]

Close, but still not quite there. It removed the pen this time, but added a pencil that seems to be coming out of the woman’s wrist. The gaze is also slightly off.

Here, we did one asking for an illustration rather than a realistic photo.

[Image: AI-generated illustration of the therapy scene]

Passable.

As we can see, we can get some decent copyright-free images, but it’s no walk in the park. Our third image is usable. No one is going to notice the pencil coming out of the wrist unless they look really closely, and the gaze is only slightly off. For a small provider on a small budget, this might be an acceptable tradeoff versus paying a graphic designer. But most organizations are going to want to dot their “i’s” and cross their “t’s” by having real people do the graphics work so they get the right image for the placement.

If you want to create more complex images such as organizational flyers or infographics, you’ll definitely want an actual designer.

Copy Editing, Including Brand Alignment

Microsoft Word has had built-in spelling and grammar checks for decades. New tools, like Grammarly, offer even more advanced versions that provide tone or style suggestions. AI isn’t doing anything new in the grammar and spell check department, but what it can do is check brand voice.

By feeding AI your brand guidelines, then training it on examples of good brand voice, AI can create a QA checklist that you can run all content through. Having a Brand Strategist or Marketing Director check everything that goes out is both impractical and expensive. Especially as an organization gets larger or if staff are encouraged to post company-related content, an AI bot trained on your brand can serve as a simple and cheap filter that employees or vendors run all content through.
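As a sketch of the deterministic layer such a filter might include, here is a hypothetical checklist in Python. The banned words and required closing line are invented for illustration; in practice the LLM handles the fuzzier tone and voice judgments on top of rules like these:

```python
# Invented brand rules for illustration; real ones come from your brand
# guidelines document.
BANNED_WORDS = {"cheap", "crazy", "guys"}
REQUIRED_CLOSING = "Learn more at our website."

def brand_check(content: str) -> list:
    """Return a list of rule violations (empty list = passes the checklist)."""
    issues = []
    # Crude tokenization: lowercase and strip common punctuation.
    tokens = content.lower().replace(".", " ").replace(",", " ").split()
    for word in sorted(BANNED_WORDS):
        if word in tokens:
            issues.append(f"banned word: {word}")
    if not content.strip().endswith(REQUIRED_CLOSING):
        issues.append("missing standard closing line")
    return issues
```

Anything that fails the checklist gets kicked back to the author before it ever reaches a Brand Strategist.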

QA Checklists

Like a bot to check brand content for brand standards, quality assurance in general is a simple process that can be outsourced to AI. Take your checklists and upload them to AI, then have AI run scans on any electronic deliverables. 

In order to catch errors and ensure accountability, most orgs have built up bureaucracy around checks and balances. These simple, rules-based processes can be handled by AI.

At Circle Social, every piece of content we produce has to go through an SEO check, a spelling and grammar check, and a brand check. Outsourcing this quality assurance to AI ensures it always gets done and saves the team a lot of time.

Research

Want a data point or statistic for a topic? Need a citation list of relevant research on a topic? Need a fact check? AI models such as Grok are great for this. Instead of manually scanning every site, which can take a lot of time, AI can do a solid job of both finding specific data points and summarizing research conclusions. This can be especially helpful for writers not well-versed in academic research lingo.

The only note of caution here is that there is a ridiculous amount of bad research published online. As we’ve discussed, AI is not great at evaluating the quality of a research paper, so a human should review source material for any final inclusions.

Ideation

Sometimes, even the most creative individuals can get stuck or hit a wall. While AI will not come up with anything new or overly creative, it can quickly “brainstorm” based on examples it finds on the web. This could be slogans or topics. While AI’s content will all be derivative, its ideas can help get juices flowing to land on something new and creative.

Rewording

When you’re typing 5,000 words a day as a content writer, things can begin to blur. Writers get stuck or fall into repetitive patterns. It can be very helpful to feed a paragraph into AI and ask it to reword it. This is often much faster than staring at the screen, typing and deleting over and over, to come up with new phrasing.

Simple Optimization Scripts

There has been a lot of talk about using AI to optimize marketing campaigns. The truth is, machine learning has long been built into Facebook, Google, TikTok, and other marketing platforms. Many agencies also use scripts to help manage large campaigns. Scripts consist of simple if/then statements such as “If cost per conversion exceeds $50, then turn off the ad and re-allocate budget to other campaigns.”
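The if/then rule quoted above can be sketched in Python. The campaign fields and the $50 cap are illustrative; real platforms expose comparable metrics through their reporting APIs or script environments:

```python
COST_CAP = 50.0  # maximum acceptable cost per conversion, in dollars (illustrative)

def apply_rule(campaigns: list) -> list:
    """Pause over-cost campaigns and pool their budget into the survivors."""
    freed = 0.0
    for c in campaigns:
        # Treat zero conversions as infinitely expensive.
        cost_per_conv = c["spend"] / c["conversions"] if c["conversions"] else float("inf")
        if cost_per_conv > COST_CAP:
            c["active"] = False
            freed += c["budget"]
            c["budget"] = 0.0
    survivors = [c for c in campaigns if c["active"]]
    for c in survivors:
        c["budget"] += freed / len(survivors)
    return campaigns
```

Note how mechanical this is: the script never asks whether the paused campaign was about to turn a corner, or whether the surviving keyword has enough search volume to absorb the extra budget.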

However, marketing success is not merely campaign optimization; conflating the two is a common mistake. There is a somewhat pervasive belief that, if one has enough data in the campaigns, somehow they can be optimized to “get sales/admissions.” This couldn’t be further from the truth. Many factors determine campaign success:

  • First and foremost, the reputation of the organization doing the advertising. Americans are far more likely to engage with advertisements from the Mayo Clinic or Johns Hopkins than they are with some lesser known provider. Part of marketing is strategically building reputation so that familiarity and trust work in favor of your marketing campaigns. In such scenarios, conversions aren’t even the appropriate goal. Instead, we’re thinking about messaging, audience, and timing, which are still beyond AI’s ability to “reason” through.

  • Volume is limited and nothing scales to the moon. Let’s say keyword X+ad+landing page combo is delivering great conversion results. You can optimize budget to push into campaigns using those items, but, if there are only 2 searches a day using that keyword, re-allocating budget there won’t be too helpful. The same applies to a channel strategy. There are only so many people using a particular channel in a given day and different people are more likely to engage with ads on one channel over another based on personal proclivities. Simple if/then statements won’t enable effective campaign management within a given channel, much less across multiple channels. AI isn’t smart enough to figure out what to do next in these situations. We need to keep in mind that AI builds decision-making off of rulesets and patterns. If there’s no pattern from past data, then it’s stumped. Scripts can be a great assist for large campaigns with thousands of variations. They can be set to run a scan every hour and do so much more quickly and cost-effectively than a human, but they’re only able to perform simple if/then commands. 

Big Data Analysis

Where AI can be really useful is big data analysis. It has the potential to quickly scan larger data sets that might take humans ages to work through. The “working memory” of computers is also larger than a human’s, so they can hold and recognize big-data patterns more easily. This is why AI has been effective in some healthcare and scientific discovery settings: it finds patterns in large data sets that humans missed.

With that said, marketing is far more dynamic than something like a healthcare diagnosis. Let’s say AI notices that your marketing campaigns have 10% higher conversions on Mondays within some specific campaign and this was something your human team hadn’t noticed. Does that mean you’re going to re-allocate all of your budget to only run campaigns on Mondays because that’s the highest converting day? Of course not.

Part of the reason campaigns convert on Mondays is because people have been seeing campaigns throughout the week. And no business could survive by only being open for new business on Mondays. 

What might make sense here is to reallocate some budget from other days of the week to spend more on Mondays. But, again, it may be that Mondays only see an uptick if there is high enough frequency on the other days of the week. By pulling money from other days, you may actually reduce conversions on Mondays. Maybe you leave the existing weekly budget alone and add additional budget to Mondays. AI can identify the opportunity in past data, but it can’t tell you what to do with that information.
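Spotting a day-of-week pattern like that Monday uptick is exactly the kind of mechanical scan AI or a simple script handles well. A minimal Python sketch, using invented sample data:

```python
from collections import defaultdict
from datetime import date

def conversions_by_weekday(rows):
    """Average conversions per weekday across (date, conversions) rows."""
    totals, counts = defaultdict(float), defaultdict(int)
    for day, conv in rows:
        name = day.strftime("%A")
        totals[name] += conv
        counts[name] += 1
    return {name: totals[name] / counts[name] for name in totals}

# Invented sample data for illustration.
rows = [
    (date(2024, 1, 1), 22),  # a Monday
    (date(2024, 1, 2), 18),  # a Tuesday
    (date(2024, 1, 8), 26),  # a Monday
    (date(2024, 1, 9), 16),  # a Tuesday
]
averages = conversions_by_weekday(rows)
```

The scan surfaces the pattern; deciding whether and how to shift budget in response remains a human judgment call.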

The point here is that AI should be used for big data analysis to find patterns missed by humans, but these insights may or may not be effectively actionable. 

SEO and AI Search

As we wrap up this section, an important note on AI search. Many providers come to us and ask what to do about AI search. How can they show up? The answer to that is, “Good SEO.” Like voice search, AI is not a different method of search; it’s simply a different way to display results. Voice search provides spoken results, but it still relies on the same underlying search engine. If you want your organization to get mentioned in a voice search, you need to rank in the top 3 positions in Google search results, because that’s where the information is being pulled from.

AI search is no different. Query your AI tool as to where it got its answer and it’ll inevitably direct you to a page or website that would have shown up at the top of Google search results had you typed the search there instead. The index is still the index and the ranking algorithm is still the ranking algorithm. SEO hasn’t changed; only the method by which results are displayed has.

This does mean you will probably see your traffic dropping. Rather than people performing a Google search and looking for the answer on your site, AI is performing the search for them and displaying a summary of results. Your website is still getting the same number of “visits”; it’s just that more and more of them come from AI bots, so they aren’t recorded as actual site visits.

This also means that you miss out on the brand recognition you would have gotten had they visited your website. Top and mid funnel SEO strategies are still important as the second and third tier content created for those strategies ultimately prop up your high-intent, first tier content. But you won't see the boost in traffic from them like you used to.

The bottom line is that your SEO efforts won’t change. When making a purchase decision, especially a larger one like a health issue, people are still visiting websites. No one is deciding on a dentist, a surgeon, or a therapist based off an AI summary. But they will build the summary with AI, then peruse the recommended websites. People also continue to perform Google searches when at the final stage of the decision-making process to evaluate options. 

While SEO efforts shouldn’t change, providers need to be aware that the way their impact is measured will since website traffic will be less of a leading indicator. Instead, inquiries and rankings matter more. 

Finally, since providers are losing out on the brand recognition from second and third tier content, outbound paid media strategies are more important than ever. They’ve always been important, but now there is even more reason to engage in them. Sitting on one’s hands and waiting for someone to search for a solution to their problem has always been a weak strategy, and that is essentially all SEO and PPC are. As a provider, you absolutely want to show up on those channels to connect with those patients, but just sitting around and waiting for people to find you is not going to enable sustained growth. That requires actually going out and making people aware of you, presenting your solution before they’ve decided to search for it. That’s what paid media does best as part of any robust multichannel growth strategy.

What AI Can’t Do

You’ve probably got a clear sense by now of what AI can and can’t do, but let’s reiterate a couple of points for clarity’s sake.

Strategy

This should be clear from our analysis above. AI is no better able to predict an uncertain future than humans. In fact, it’s currently much, much worse at doing so.

Innovative Creative

AI merely copies or reworks existing examples. It’s derivative and can’t combine ideas or information creatively. While AI can be used to create things like copyright-free images, getting it to create a branded campaign image that sends just the right message to your target audience at the right time is not something it’s capable of. If your organization is not overly concerned with building brand identity and any old image will do, AI might be cheaper than stock photos. But it won’t be able to create unique or innovative campaigns.

Wholesale Content

There is an explosion of AI content on the web, something that Google and other search engines are trying to tackle. Google optimizes for “high-quality, original content.” While Google states it doesn’t mind some forms of content being automated, such as the routine posting of sports scores, in a sea of content, it wants to highlight the best, most helpful pieces. This means sticking to Google’s Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) guidelines. Being derivative, fully AI-produced content does not meet those criteria, which is why such content usually ranks lower.

This doesn’t mean Google doesn’t make mistakes and won’t occasionally rank low-quality content. But we always advise against exploiting temporary hacks since Google literally makes updates to its algorithm daily. It’ll eventually find a way to catch the hack and derank it. In our view, it’s always better to engage in a long-term strategy where you don’t have to worry about the bottom falling out a day, a week, or a month later.

Think Google can’t tell your content is written by AI? Below is a scan of a recent piece of content posted to a provider site. We can see it’s rated as 95% AI-generated. We run all of our team’s content through AI scanners to ensure our content is written by people, because we know AI content won’t rank. We can guarantee you our scan tools are nowhere near as sophisticated as Google’s own. Google absolutely knows when your content is written by AI, and it won’t rank as a result.

[Image: AI-detection scan rating the content 95% AI-generated]

To produce content, AI simply samples the current top-ranking pieces, then rewrites them with different words. How will that help your organization win customers? If your content is more or less the same as that found on your competitors’ websites, they have no reason to choose you over them. However, if your content stands out by being original or more helpful than what they find on other sites, then it is far more likely they’ll choose you.

Let’s say you are the only dentist in a small town. You could absolutely have AI generate content based on websites from dentists in other towns and it would help you rank because Google has nothing else to rank in that area.

But if you are a new dentist surrounded by 30 existing dentists, Google will not see your derivative content as worth ranking nor will potential patients who somehow stumble across your site have any reason to see you as a better fit than your competition. 

You don’t need to create world-first unique content, but you do need content that sets you apart from the competition in your market. That kind of content still requires humans to write it.

The Future Isn’t Quite Here Yet

Despite those touting the revolution to come, today’s AI is extremely limited in its use cases. Its work is derivative, narrow in scope, and relegated to pattern-based processes. It is also fundamentally limited in its ability to arrive at Truth or predict the future for the same reasons humans are.

On the other hand, its ability to quickly and cost-effectively perform some tasks puts it at an advantage compared to humans performing similar tasks. AI is already helping with automation and operational efficiency. Just as robots took over repetitive physical tasks within a limited range, AI is taking over repetitive cognitive tasks within a limited range.

What’s not happening, and may never happen, is AI fully replacing humans in the areas of strategy, creativity, and future-oriented decision-making. If you’re like us, you’re getting daily emails from the latest “AI-powered” company touting its ability to solve complex tasks. This is, quite simply, a common example of why marketing can get a bad rap. These claims are hyperbolic and inaccurate, playing upon a customer base that doesn’t have the domain expertise to understand the true capabilities and limitations of the technology. It’s often the use of buzzwords to sell smoke and mirrors.

Here at Circle Social, we’ve been incorporating AI into our workflows for over a year now, but to improve operational efficiency. Besides assistance with data analysis and some simple if/then routines, AI is nowhere near the point where it can create, much less manage, marketing campaigns and strategy.

Glossary of Terms

  • Black box: when AI makes a decision but we don’t know why, because it’s following an internal rule set of its own devising.

  • Compute: The processing time, or energy, consumed by AI in training or the execution of a task.

  • Hallucination: An error made by AI. Instead of admitting it’s wrong or unable to perform a task, today’s AI will simply make up an answer instead. This is called a hallucination.

  • Large Language Model (LLM): An AI model trained by analyzing massive amounts of data, basically learning off of what humans have previously created.

  • Pretraining: Initial training of an AI model to be a generalist for a specific kind of task, akin to a student going to university and getting a degree in a particular subject.

  • Post-training: Following pretraining, when an AI model is fine-tuned to work on specific use cases related to its general training, akin to on-the-job training post graduation.

  • Reinforcement Learning (RL): A model where the AI learns through trial and error or feedback loops in the same way that the human brain learns and develops.