When to use AI tools, how to use them responsibly and when to avoid them at all costs
➕Disclaimer 1: I started writing this article long ago and completed it in batches. Work and life got in the way, and I guess I could have used AI to generate a complete article in 10 minutes, but then it wouldn’t be me, would it? 🙂 So, since I began writing this article I have learned of at least 10 new AI tools that can do all kinds of magic things… maybe more. But the bottom line remains the same, and I stand by every word I wrote here.
➕Disclaimer 2: I did not dive deep into specific AI tools, and I did not cover image creation tools. However, the principle is the same.
The rise and fall of Generative AI
From the first time I saw outputs from ChatGPT, even before I tried it myself, I thought it was hype. I still do, despite its evolution and the many additional AI tools that have popped up since. What triggered me back then was seeing bad content produced by people I know to be unprofessional and inexperienced. It was clear that the output is only as good as the prompt, and if the prompt is poorly crafted because the person writing it has no idea what they are doing, the result reflects that.
But as hype goes, that did not stop people from using ChatGPT for anything and everything, from simple LinkedIn posts to legal documents and depositions, scientific articles, and even planning mega marketing campaigns. It quickly became painfully clear that this was not real “AI”: it was not intelligent, and the outputs were inaccurate and incomplete at best, if not completely false. The model learns from the data on which it is trained. The more tainted that data, and the lower the quality of the prompts in grammar, vocabulary, and overall coherence, the worse the outputs. And they got worse and worse instead of better.
It was funny at first, and then scary, to see complete “ChatGPT courses” and books pop up within a month or two of its appearance. How could people become such experts in such a brief time? They could not, but the hype was so strong that people fell for it left and right and spent lots of time and money learning how to perfect their prompts, only to produce the same mediocre results repeatedly. To me this was a sign that these tools may be automated and have some ML algorithms behind them, but they were also limited in their pool of potential answers and outputs.
Most people nowadays can spot an AI-generated piece of content from miles away just because of the way it is written: the words used, the heavy use of emojis, the irrelevant hashtags, and the mistakes. I have even seen GPT-generated content with spelling mistakes. That is how “intelligent” these machines are.
Moreover, a few things happened along the way that only reinforced my feeling about these tools and how they are not the innocent helpers they are portrayed to be. At the risk of being called a “conspiracy theorist,” I will just mention a few: AI “inclusiveness” algorithms preventing tools from accurately portraying historical facts (think black Vikings and European medieval royals, for example); ChatGPT going from open source to fully closed and controlled for profit; erasing a whole presidential term and claiming that a certain person was not president of a certain country (you know who I am talking about).
I even saw a marketing campaign for a credit card in which people had to publicly post a selfie holding the card, showing their name and details on it, and tag someone (security / fraud protection, anyone?). The media quickly caught on and shit hit the fan. I was sure the marketing person or agency that pitched it used ChatGPT to generate the idea, and sure enough, a couple of weeks after the media storm it was revealed in a small sidenote in an article that it was indeed a GPT-generated campaign idea. A few things immediately jumped out at me: first, did no one check the output? Did they just submit it as is? Second, even if they did submit it as is, was there no one at the credit card company to check this from a regulatory point of view? And third, why would the machine suggest something like this, unless it was trained to ignore regulation (intentionally or unintentionally)?
Not long after this case I started seeing news about Google detecting AI content and not prioritizing it in search results (this has of course changed with AI Engine Optimization – AEO), scientific journals putting together AI rules and regulations to limit or even prevent the use of AI in writing academic papers, the EU’s AI ethics regulation code, and lately a plethora of AI detector tools.
The final nail in the coffin for me was an article from Neil Patel in which he and his team presented the results of research they did on AI vs. human content. They ran an empirical experiment on 68 websites with 744 articles – half AI-generated and half human-written. The research found that human content takes longer to produce than AI content (69 minutes vs. 16 minutes, respectively), but at the same time human content outranks AI content 94.12% of the time.
You can read his excellent article about AI content being a waste of your time.
Here is what has happened since I first looked at this: privacy worries affect 49.5% of AI adopters amid hyper-personalization, and 56.4% more negative incidents such as deepfakes and misinformation were reported in 2025. EU AI regulations and Google’s detection tools have tightened, penalizing unchecked AI content. Security risks escalated with data leaks from public tools, reinforcing the need for private platforms such as Hugging Face, and 39% of marketers lack safe genAI knowledge, underscoring the need for human review for biases and IP protection. Misuse cases in mental health and crisis communications rose, highlighting AI’s emotional intelligence gaps. In addition, the latest LinkedIn 360Brew algorithm update specifically mentions penalties for AI-generated content that has not been humanly edited and does not have an authentic human point of view added to it.
I also see more and more prompt engineering tips, all basically amounting to the same thing: the more details you provide and the more instructions you give the generators, the better your chances of an acceptable outcome, which you then still must check, validate, and edit.
In other words, better just write your own 😊

What are you saying then? Are you telling me not to use AI to create content?
No, I am not. I am using AI tools myself, but I use them sparingly and responsibly, with a clear workflow.
What I am saying to you is: Invest the time to learn about each tool and focus on what they cannot do, or do badly, so you can map out the risks. Then assemble your own AI toolkit. Do not just use any tool simply because it is new, shiny, and hip. Use what works for your needs and your work. Then, create your own workflow for how to use the tools, in what order. Finally, always include a last step of human review and editing.
Will this save you time? Yes, it will, albeit not as much time as you hope (see Neil Patel’s article mentioned above). It will become easier and faster as you use your toolkit and workflow. Will it deliver results? Yes, it will. Especially if you insist on the last step of human review and editing.
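The toolkit-plus-workflow idea above can be sketched in code. This is a purely illustrative toy (the class and step names are my own, and the lambdas stand in for real AI tools); the point it demonstrates is structural: the human review step is a required argument of the pipeline, so it can never be skipped.

```python
# Illustrative sketch of the workflow described above: an ordered AI
# toolkit where human review is enforced as the mandatory final stage.
# All names here are hypothetical, not a real library.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Step = Callable[[str], str]  # each step transforms a draft string

@dataclass
class ContentWorkflow:
    steps: List[Tuple[str, Step]] = field(default_factory=list)

    def add_step(self, name: str, fn: Step) -> "ContentWorkflow":
        self.steps.append((name, fn))
        return self

    def run(self, draft: str, human_review: Step) -> str:
        # AI-assisted steps run in the order you defined them...
        for _name, fn in self.steps:
            draft = fn(draft)
        # ...but human review and editing is always the last step.
        return human_review(draft)

# Stand-in functions where real AI tools would plug in.
workflow = (
    ContentWorkflow()
    .add_step("research", lambda d: d + " [keywords added]")
    .add_step("structure", lambda d: d + " [outline applied]")
)
final = workflow.run("Draft post", human_review=lambda d: d + " [human edited]")
print(final)  # Draft post [keywords added] [outline applied] [human edited]
```

The design choice worth copying is that `human_review` is not an optional flag: leaving it out is a type error, mirroring the rule that the last step is never negotiable.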
First things first. Let us look at what you can and should do with generative AI tools and what you cannot and/or should not do.
What you can and should do with generative AI tools
Take a R.I.S.K.!
Generative AI tools like ChatGPT, Perplexity, and others can help you a lot in preparing and strategizing your content. This is more important than actually creating it; it is also the part that takes the longest to perform, and where AI can help you save time. So, I am all for taking a R.I.S.K. on AI:
- Research
- Inspiration
- Structure
- Know-how
When & How to Use AI Tools in Marketing
The RISK approach I coined above is an acronym meant to make it easier for you to remember that AI tools can be useful in marketing when used responsibly and with a clear understanding of their limitations.
- Research and Inspiration: AI tools can help you generate content ideas and research topics, which can save you time and effort. For example, you can use AI to generate a list of potential keywords for a blog post or to find relevant articles and news stories. You can also use it to do market research (but beware of how much proprietary information you put in the prompt) and more.
- Content Structure and Organization: AI tools can help you organize your content and create a structure for your blog posts, articles, or social media updates. This can be particularly useful for creating a consistent tone and style across your content. It can help prepare templates that human writers can later use to create a series of articles for a website or posts for a campaign.
- Know-how and Expertise: AI tools can provide you with information and insights on diverse topics, which can help you create more informed and authoritative content. For example, you can use AI to generate a summary of a complex topic or to provide statistics and data to support your arguments. This can save time and simplify scientific content, for example, so that you can then read it yourself, understand it, prioritize it, and decide how to use it.
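The “Structure” point above can be made concrete: use tooling to produce a reusable brief or template that human writers fill in, rather than letting AI write the final copy. Below is a minimal sketch in plain Python (the function name and format are my own invention, and no real AI API is called); a generator's output could populate the key points, which a human then verifies.

```python
# Illustrative sketch of the "Structure" step: build a content brief
# template for human writers instead of generating finished copy.
def content_brief(topic: str, sections: list, tone: str = "professional") -> str:
    lines = [f"# Brief: {topic}", f"Tone: {tone}", ""]
    for i, section in enumerate(sections, start=1):
        lines.append(f"{i}. {section}")
        lines.append("   - Key points: (AI-suggested, human-verified)")
        lines.append("   - Sources to check:")
    lines.append("")
    lines.append("Final step: human review and editing (mandatory).")
    return "\n".join(lines)

brief = content_brief(
    "Responsible AI in marketing",
    ["What AI does well", "Where AI fails", "A safe workflow"],
)
print(brief)
```

A template like this keeps tone and structure consistent across a series of articles or campaign posts, which is exactly where these tools earn their keep.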

What not to do with generative AI tools
Now that I have covered the benefits and responsible applications of AI tools in marketing, let us turn our attention to the crucial aspect of understanding what not to do with these tools. While AI offers exciting possibilities and new tools sprout like mushrooms after rain, it is imperative to recognize that these tools have limitations and avoid using them in situations where they could lead to negative consequences. For example:
- Blindly Trusting AI Outputs: One of the most common mistakes people make with AI tools is assuming that the output is always accurate and reliable. Remember, AI models are trained using data, and the quality of that data directly impacts the quality of the output. Part of that data comes from user prompts. In other words, the AI tool is as smart (or as stupid) as the people who feed it information. Therefore, always critically evaluate AI-generated content for accuracy, relevance, and potential biases. Remember the last step – human review and editing!
- Using AI for Sensitive Tasks Without Human Oversight: AI lacks the emotional intelligence and nuanced understanding of human interaction required for tasks that involve sensitive or critical communication. Avoid using AI for situations like crisis communication, customer complaints that require empathy, or crafting messages that deal with complex social issues. Human judgment and empathy are crucial in these areas. This is a good place to mention an example I recently saw in a mental health professionals’ group, where someone said they wanted ChatGPT to counsel people with PTSD and depression (answer questions and offer advice). They complained that ChatGPT only offers answers based on the prompts. In other words, they were expecting it to be real “artificial intelligence” and produce its own answers… to counsel mentally ill people. There are not enough face palms in the universe for this.
- Replacing Human Creativity and Critical Thinking: AI tools can be valuable assistants in the creative process, but they should never replace human creativity and critical thinking. AI can help with brainstorming, generating ideas, and even drafting initial content, but the final product should reflect human ingenuity and insight. Remember the credit card selfie campaign I mentioned above? That is what happens when critical thinking and human creativity are taken out of the equation and blind faith is put on these tools.
- Ignoring Ethical Implications: As I have already discussed, AI usage in marketing raises several ethical concerns. It is crucial to be mindful of these implications and avoid using AI in ways that could perpetuate bias, violate privacy, or manipulate consumers. Always consider the ethical ramifications of your AI-driven marketing strategies. But it is not just consumers who may be harmed: brands using AI tools irresponsibly can suffer severe consequences too, primarily around intellectual property and security. If you use any of the most common public tools out there to create business documents and reports, grant applications, or investor decks, keep in mind that all your information, patented data, and ideas are out in the world for all to see. This information is used to train the AI model and, in most cases, will come up as answers to other people’s prompts in the same space or on the same topic. There are solutions for that, but they require a deep understanding of not only AI tools but also security. See for example https://huggingface.co/ (you can build your own internal, secure “generative AI platform” to be used safely by you and your employees without letting the outside world access your data and prompts).
- Treating AI as a Set-it-and-Forget-it Solution: AI tools require ongoing monitoring and adjustment. Do not fall into the trap of assuming that once you have set up an AI workflow, it will run flawlessly without any human intervention. Regularly review AI outputs, assess performance, and make necessary adjustments to ensure alignment with your goals and ethical considerations. Once again – the last step of human review and editing is crucial!
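One cheap, partial mitigation for the IP and security risk described above is redacting obviously sensitive details from a prompt before it ever reaches a public tool. The sketch below is a toy (the patterns, labels, and function name are my own, and a real deployment such as a private Hugging Face setup needs far more than regexes), but it illustrates the principle of filtering at the boundary.

```python
# Hedged sketch: strip emails, long numbers (account/patent-like), and
# known confidential terms from a prompt before sending it to a public
# AI tool. Illustrative only; not a complete data-loss-prevention layer.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "LONG_NUMBER": re.compile(r"\b\d{6,}\b"),
}

def redact_prompt(prompt: str, confidential_terms: list) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    for term in confidential_terms:
        prompt = re.sub(re.escape(term), "[CONFIDENTIAL]", prompt,
                        flags=re.IGNORECASE)
    return prompt

safe = redact_prompt(
    "Summarize our grant 1234567 for jane@acme.com about Project Falcon",
    confidential_terms=["Project Falcon"],
)
print(safe)
```

Whatever tooling you end up with, the rule is the same as elsewhere in this article: the filter helps, but a human still decides what is safe to paste into a third-party prompt box.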
Final words
The hype surrounding GenAI made these tools look like a magic solution for all content, marketing, and even business development needs, but the reality is that AI output is only as good as the prompt it receives, and even then it requires careful scrutiny and editing, as I have said oh so many times before. It is evident to anyone with a working pair of eyes that these tools are not truly “intelligent” but rather rely on vast datasets and algorithms, often producing generic, repetitive, and sometimes even erroneous content. When wielded by good, professional, and smart people, they can be a powerful ally to any marketer (or business developer or CEO). But the sad truth is that they opened the door for even more mediocre, inexperienced, and sometimes even fraudulent people to portray themselves as experts and flood the internet with bad content and data.
The situation has grown more concerning as instances of AI-generated content causing real-world problems emerged (from security and privacy risks all the way to suicides). The credit card marketing campaign I mentioned earlier, where people were encouraged to share their personal information publicly to win a prize, exemplifies the dangers of unchecked AI. This incident highlighted the need for human oversight and critical evaluation of AI-generated ideas.
Then there are all the (often hysterical) steps and tools to combat the inaccuracy of these tools: the emergence of AI content detectors, policies from companies like Google, rules from scientific journals, and the EU’s AI ethics code, all of which demonstrate a growing awareness of the potential pitfalls associated with unchecked AI use. Neil Patel’s research, which found that human-generated content consistently outperforms AI-generated content in search rankings, further solidifies my belief that while AI can be a helpful tool, it should not be seen as a replacement for human expertise.
At the risk of sounding like a broken record, the key to successfully incorporating AI into your workflow is to understand its limitations and use it strategically and responsibly. Focus on tasks where AI can excel, such as research, generating inspiration, structuring content, and providing industry knowledge, but do not be lazy. Always fact-check AI-generated information and be wary of potential biases embedded in algorithms. Most importantly, never treat AI as a substitute for human creativity, critical thinking, or ethical considerations. Again, do not be lazy!
I am not saying do not use AI. I am encouraging you to take a R.I.S.K.!





