For our latest LinkedIn Live, SALT’s Head of Content, Andy Jessop, and Content Consultant, Natalya Stecko, spoke about the ethical application of AI in content marketing.

A hot topic in the digital marketing industry, the pair addressed some of the biggest questions and concerns surrounding AI ethics, including:

  • navigating the risk of false information
  • ensuring responsible AI use
  • best practices for deploying AI
  • how AI content influences trust
  • the impact of bias in AI algorithms.

Here’s a summary of their timely discussion.

How can we mitigate AI’s risk of generating convincing but false content and ensure its authenticity?

AI is advancing at an extraordinary rate. As we've seen, the technology is arriving faster than we can work out how best to use it or what regulations need to be implemented.

These rules will appear as AI develops. Just as the right to privacy and the right to be forgotten on search engines emerged in response to new technology, regulations will be put in place to mitigate the risks of AI.

For example, the EU is proposing new legal frameworks around AI use. The AI Act divides the risk of AI into four tiers, ranging from ‘unacceptable’ to ‘minimal’ risk. These assess how harmful the technology might be to people, infrastructure, and our decision-making.

The regulations will affect how software is developed, rolled out, and used. For example, the 'minimal risk' label means an application can be used without restriction.

At the other end of the scale, a higher-risk classification warns that the AI poses a threat to people and may be deemed unacceptable.

Rules like this will enable people to make more informed decisions about using AI in content marketing.

Misleading content on social media

Misleading content is one of the biggest concerns around AI, and it's scary for marketers and consumers alike.

Social media is one example. Many people instantly believe what they see on social platforms without questioning it, and AI will only inflate that risk.

We've seen influencers in China live streaming on TikTok for hours, sometimes days, selling and promoting products. However, they're AI-generated clones. The streamers give companies a short video of themselves, which AI firms turn into cloned footage to use as they wish.

Technology like this has contributed to the problem of deepfakes, where footage of people is used and manipulated without the actual person’s knowledge or consent.

Celebrities including Taylor Swift and YouTuber Mr Beast are among the high-profile people to have been targeted. This has brought the issue into the spotlight, drawing in governments and politicians.

Labelling AI content

When using AI content, marketers are responsible for understanding the risks and how consumers may perceive it.

Many leading tech companies are already starting to label their AI content. Meta has recently begun labelling AI-generated images on its platforms. This means users can distinguish between what is human-created and what is not.

Understanding this distinction can completely change how you perceive a photo or video. Disclosure isn’t just good practice anymore — it’s essential for maintaining audience trust.

Labelling will become even more crucial as we move towards significant political events, like elections. With voice and face cloning, bad actors can flood platforms with misinformation about candidates. AI risks disrupting elections if used manipulatively.

Guidelines for brands and platforms need to be put in place to mitigate the damage. This brings us to our next question.

What ethical guidelines are essential for using AI content?

As we've briefly mentioned, companies must establish standards and practices for AI content.

We recently saw Google's generative AI, Gemini, run into problems with racist and misogynistic images. Google responded by suspending Gemini's picture-generation capabilities while it made vital algorithm changes.

Many companies will likely face similar issues as they implement new AI technology, and the rules and guidelines they put out must change as the technology evolves.

IBM’s five pillars of trust

One company approaching AI in an ethical, considerate manner is IBM.

The organisation has implemented five pillars of AI trustworthiness: fairness, explainability, robustness, transparency, and privacy.

  • Fairness: This concerns the equitable treatment of individuals and groups.
  • Explainability: People using AI need to know how it reaches its results and where those results come from.
  • Robustness: AI must be defensible against attacks and built to withstand new technologies.
  • Transparency: Users can see how the service works and comprehend its strengths and weaknesses, including algorithm biases.
  • Privacy: AI must safeguard users' privacy and their rights over the data it collects.

IBM also shared its principles concerning AI. Most interestingly, it says it builds technologies to augment human intelligence, not replace it.

This is a massive fear in content marketing, where we've already seen companies attempt to replace specific roles with AI. Even in 2020, before the widespread rollout of generative AI, Microsoft let go of dozens of journalists in favour of AI content creation.

Organisational accountability

Companies must take ownership and accountability for the content AI produces.

It is crucial to have human review systems, policies, and fact-checking in place to ensure all AI content aligns with your core values and ethical standards. This makes sure whatever you put out isn't damaging. Building on your internal ethics and standards is an excellent place to start.

Ongoing training is another essential factor. AI systems are only ever as good as the people running them — as new features are released, companies need to grasp how to use them responsibly.

Being transparent is also key. Brands will want to know they are working with agencies they can trust, including understanding whether they're using AI and what they're doing with it.

How does AI in content creation affect brand trust, and what strategies can brands use to maintain trust while considering consumer demand and ethical AI use?

We know AI gives us the potential to improve engagement and produce content on a massive scale.

However, bulk generic posts that have nothing to do with your audience won’t provide meaningful value. These are created for quick engagement, which doesn’t offer long-term, sustainable results.

Using AI just to post something, without benefiting your readers, won't work. The key is to be transparent about AI in content creation to help set the right expectations for your business.

Quality control is non-negotiable. You need human input to ensure the content is accurate and safe. All content — AI or otherwise — represents your brand. If it doesn’t meet your brand values, you risk losing the sense of what you set out to be.

Holding brands accountable

Today's consumers are more informed than ever before. They're holding brands accountable, driving demand for transparency and ethical brand behaviour.

People voice their expectations — brands should listen to what they’re saying regarding AI and adapt their approach to maintain credibility.

Some brands in certain spheres may have audiences less concerned with their use of AI. Gambling audiences, for example, might not care as much as followers of an independent arts or music company. But this isn’t a hard rule — being open with your audience is vital.

Whatever your niche is, you should always consider the implications of your AI-assisted marketing campaigns.

Look at privacy, bias, and discrimination, and consider how your values and ethics extend to AI. Offer continued training to establish expectations within your teams.

If you’re an agency working for a brand, ensure you understand all their AI expectations and guidelines. Take care with their information, keeping everything segmented and separated.

Allocating the time

Preparing the data you put into AI to get unbiased results takes hours, as it should. Put the resources in to ensure that, when crafting content, you get something viable and credible.

You’ll then need to test, test, and test again. The answers won’t appear on their own — you need to put the effort in every time.

How can biases in AI algorithms affecting content creation and distribution be addressed to ensure fairness, and what is their impact on market competition and consumer perception?

Bias in AI algorithms can skew content creation and distribution, leading to unfair representation. It can perpetuate stereotypes and influence customer behaviour.

This can be very subtle. Say you ask AI to generate an image of a CEO, or to write about one: the result will most likely be a picture of a man, or text referring to the CEO as "he." Without contextual information, AI assumes a CEO is a man.

Content creators must consider biases like this when using AI to produce content. Take care to eliminate biases within your data input — this process is long but ensures a fairer output.

You should also regularly audit the AI’s algorithms. Analyse the training datasets, ensuring the data used is diverse and representative of human experiences and perspectives. Technology constantly evolves, meaning the data you input should, too.

Continuous monitoring by humans is also important. Check for biased outcomes and make regular updates based on your findings.

Teams have an ethical and social responsibility to highlight anything unfair or incorrect within their AI and make improvements.

Future investment in AI

We saw how the marketing industry funnelled resources into social media and SEO when these platforms and tools first appeared — AI is the ‘next big thing’ for companies to invest in.

How this develops will be incredibly interesting. We'll likely see new roles created, such as 'Head of AI,' alongside dedicated AI teams.

However, no one is sure exactly what direction this technology will take. AI is advancing quickly, and we haven't yet fully grasped its potential or how to integrate it safely into our everyday work.

Approaching AI ethically is vital. Human review will always be essential to maintain an element of control and to guide the responsible use of the technology.