
Thought Leadership and AI: Maintaining Authenticity and Credibility

April 15, 2024

The podcast that helps experts & consultants on the journey to becoming a recognized authority in your field, so you can increase your impact, command premium fees, work fewer hours, and never have to suffer a bad-fit client again!

In this episode of The Recognized Authority podcast, Alastair examines the impact of artificial intelligence (AI) on the world of thought leadership.

Alastair shares insights from his conversations with renowned thought leaders, including Mark Schaefer, Jonathan Stark, Debbie Jenkins, and Erin Austin.

The episode explores the complex balance between leveraging AI tools to enhance productivity and efficiency, while preserving the authenticity and credibility that are the hallmarks of true thought leadership. Alastair highlights the potential risks of AI-generated content, including issues of bias, discrimination, and the spread of misinformation, and provides practical strategies for navigating these challenges.

Key topics discussed include:

  • Defining thought leadership in the age of AI
  • Navigating the ethical considerations of using AI tools
  • Maintaining authenticity and personal branding in an AI-driven landscape
  • Leveraging AI for content creation, ideation, and repurposing
  • Understanding the evolving legal and intellectual property implications

Whether you’re an established thought leader or aspiring to become one, this episode offers invaluable insights to help you thrive in the rapidly evolving world of AI and thought leadership. Tune in to learn how you can embrace the opportunities presented by AI while safeguarding the integrity of your personal brand and thought leadership.

Key Insights:

  • Thought leaders must grapple with the tension between using AI tools to enhance productivity and maintaining authentic, credible content
  • Defining clear guidelines and policies for the use of AI is crucial to preserve personal brand and thought leadership integrity
  • AI can be leveraged for content ideation, generation, and repurposing, but care must be taken to avoid generic, inauthentic output
  • Ethical considerations around privacy, bias, and misinformation must be addressed when incorporating AI into thought leadership strategies
  • Developing AI literacy and staying up-to-date on the rapidly evolving technology is essential for thought leaders


Recommended Actions:

  • Establish a clear AI usage policy and be transparent with your audience about how you are leveraging AI tools
  • Utilize AI for tasks like research, data analysis, and content formatting, but maintain human oversight and approval for final outputs
  • Experiment with AI-powered tools for ideation, brainstorming, and repurposing content, but avoid over-reliance on AI for core thought leadership content
  • Continuously educate yourself on the capabilities and limitations of AI to make informed decisions about its integration into your workflows

Topics: thought leadership, artificial intelligence, AI, authenticity, credibility, content creation, intellectual property, ethics, personal branding


Show Notes


Alastair McDermott 0:00
It’s hard to talk about AI without sounding hyperbolic. For example, I think that AI is going to impact the entire future of our species, every aspect of our lives. And it sounds crazy, but that’s where it’s at. And I know that there are a lot of people really sick of hearing about AI right now, because there’s just so much about it that seems like hype. There are so many people talking about it that it seems like just a lot of noise. And that’s why I started a completely separate podcast called The AI Powered Thought Leader. The reason I did that was because I wanted to talk with a lot of experts and thought leaders about AI, to talk to them about how they’re using it, and I didn’t want to make it so that every episode of The Recognized Authority is about AI. But I do think it is super important that we acknowledge that this is something that is going to be probably as impactful, if not more so, than the development of the Internet. At this point, I have, I think, seven episodes recorded and published on The AI Powered Thought Leader. And at this stage, I’ve started to draw some conclusions and started to, if not figure out the answers, at least figure out what the problems might be. So today, what I want to do over here on The Recognized Authority is give you some of the key insights from the first seven episodes of The AI Powered Thought Leader. That will give you an idea of what we’re talking about over there, and if that’s interesting to you, then I encourage you to go and listen to the full discussions. If you go to the recognized and click on podcasts, you can click on The AI Powered Thought Leader there.
But what I want to do is talk about some of the conclusions, insights, and maybe questions that are coming up based on my discussions with people like Mark Schaefer, Jonathan Stark, Debbie Jenkins, and Erin Austin, who is an intellectual property lawyer, so I was able to talk to her about the intellectual property issues. I want to talk to you today about some of the patterns I’m seeing in the conversations, and maybe some of the issues that’s going to bring up for people like you and me, in terms of creating content, intellectual property, running a business, all of those things. So I’m going to keep it fairly short today; it’s just going to be a concise summary of the issues and patterns I’m seeing in the conversations so far. Welcome

Voiceover 2:44
to The Recognized Authority, a podcast that helps specialized consultants and domain experts on your journey to becoming known as an authority in your field. Here’s your host, Alastair McDermott.

Alastair McDermott 2:57
The first thing that we pretty much all agreed on is what defines a thought leader. Thought leaders are typically people who are recognized for expertise, unique perspectives, and valuable contributions to their respective fields. They usually inspire, educate, and influence others through their ideas and through sharing their knowledge. And this is where the question of bringing in AI assistants introduces complexity: how do we stay authentic and credible as thought leaders while we use AI tools to generate some of our content? This is the tension. Where is the line when we use these tools on our knowledge and our ideas, in creating the content that is supposed to inspire and educate and influence other people? That’s the real crux of the matter for me. And I know that thought leadership isn’t just about content creation; there are the human elements of expertise and perspective, and the connection that we make with each other. But we have to think about the content that we produce, because the way that I see it, this is the interface that we have as thought leaders to the world: our content, our speaking, things like podcasts, books, white papers, reports, in-person meetings, any time that we’re facing the audience in some way. That is the interface. And now we can use AI to help us with that interface, to help us produce a larger body of work, and to help us take thought leadership content that we have in one format and reformat and repurpose it into another, or into many others.
We can also use it for things like ideation and brainstorming. I find it really fascinating, all of the different perspectives that people have on this. I have spoken to people who are 100% certain that having the AI write for them is wrong, and that they don’t want to do that, but that using it for brainstorming and ideation is fine, just not for the writing part. And then others take completely the opposite view: that the ideas, the brainstorming, the ideation, are the key part of thought leadership, and that actually formatting the text is secondary, so they’re happy to use AI for that. And this is where things are really blurred. You can take any perspective on this that you want to; it’s very nuanced. AI is very powerful at making us more efficient and more productive, and at changing between different types of content. Generative AI is great at generating content from other content in particular. That can allow us to focus on our thinking, and it can even help us to express our ideas more eloquently and more effectively. But there’s also the risk that if we over-rely on it, it creates this generic, unoriginal content that has no personal touch and no unique voice. Where’s the line on that? It comes back to authenticity and credibility. If the people who follow thought leaders start to see them as lacking credibility, that has a massive negative impact on somebody who wants to be seen as an authentic leader. So it’s complex and nuanced.
One of the interesting things is the conversation I had with Jonathan Stark about how the entire notion of being an author now has to be reevaluated, because there are so many different levels of interaction that we can have with an AI that can help us to write: from simple things like the spelling and grammar checks we had back in Microsoft Word in 1997, all the way up to dictating something and having it turned into a chapter, or generating chapters of a book from previous writings that we’ve input. And then there’s the more generic end of the spectrum: generating chapters of a book from content that we did not create, just giving it some basic input and asking it to generate based on that. And that, for me, is where we see the really bland content. So here’s how I would approach this. I think we need to have a good understanding of what our own personal brand is, figure out what our comfort level is with AI, establish some guidelines for ourselves as to how we’re going to use it, and then be transparent about how we’re doing that. One thing that I created quite a while ago, and that I put in the front of every one of my books, is an AI usage policy and an AI usage statement. I would encourage you to do that: create some kind of AI policy for yourself and for your business. Also, if you use any subcontractors who are creating content for you, I think it’s really important that you have this conversation with them, and that you’re transparent with your audience about how you’re using it. You can listen to my conversation with Mark Schaefer; we talk a lot about the transparency and disclosure part. There are some other ethical considerations that we need to think about, and one of those is simply privacy and confidentiality. When we input data into these systems, those systems may then be trained on it.
So we need to be careful about what we put in there, particularly when it’s confidential data from our clients or from other people, for example transcripts of conversations, particularly from things like sales calls. We just need to be aware of what is going in there. Another ethical issue is that we need to be very aware of the potential for bias and discrimination, particularly because the AI is trained on content that is already biased, which a lot of what we produce is. For example, I know that if you asked an early AI image generator to generate an image of a beautiful woman, it almost always would have produced a blonde, white woman, based on the training data that was available to it, because that’s the way the media it was trained on was skewed. And I know that they’re trying to put things in place to actively stop perpetuating biases, because otherwise it becomes a self-fulfilling prophecy: content generated from biased training data becomes the training data for the next generation of AIs. So there is the possibility for AI to have bias and discrimination built in, and we need to be very mindful of that, and careful to use AI in a way that is not biased or discriminatory and that promotes inclusivity and fairness. The other obvious issue is the potential for AI to create misinformation, deepfakes, things like that. As thought leaders, we need to be very careful to check the work if we use these systems to create anything. We should be checking all the work anyway, but it is important to think about.

Alastair McDermott 11:47
Okay, so I think that brings me to one major point, which is that as somebody who is in a thought leadership position, you have to understand what the potential risks are. For me, this comes down to AI literacy: as business leaders and thought leaders, we need to become more literate about AI. That’s one of the reasons why I’m doing The AI Powered Thought Leader: to teach myself, to learn, and hopefully to spread that to people who want to listen as well. I think we have to learn more about how these systems work. That will help us to develop clear policies for how we’re using them, make sure they’re aligned with our values, and remember that we need to fact-check information before we share it, for example. A very simple thing, but these systems can have hallucinations, which is the technical term for when they make things up. On the positive side, there is the ability for us to use AI as a tool to make us more efficient, and even help us be more creative. It’s incredibly good at analyzing large datasets, finding trends and patterns, and pulling out insights. For example, I’ve used it with research data where I surveyed over 1,000 consultants and put all of that data into a spreadsheet. I used that as a little mini database, fed it into the AI, and was able to pull out a lot of different insights and trends. To be honest, if I had the time, there’s probably the outline of an entire book in that one dataset, and the AI would probably be able to help me create it. It can help with content generation: coming up with ideas, brainstorming, creating outlines, writing first drafts, and maybe even producing different content formats, like creating blog posts, articles, and LinkedIn posts from your content.
So it can be used for all of that. That’s the really obvious stuff. You can use it as a kind of creative sounding board, helping you to explore new ideas. If you have writer’s block or creative blocks, you can get it to help you with that. You can also ask it to look at problems from a completely different perspective that you might not have, and use it to provide a diverse perspective; that’s a really interesting way of using it as well. But the other thing is just coming back to efficiency and productivity. Ultimately, we’ll be able to use these tools to automate repetitive tasks: email management, probably social media posting, things like that. I’m a bit wary of using it for social media posting in particular, because I think that as a thought leader, or as an aspiring thought leader, that’s something you need to have a lot of control over. But I think you can certainly use it for a lot of the heavy lifting of drafting and rewording things. So there are a lot of tools that can really help us, and that’s why I think it’s really important to not just put our heads in the sand and hope that this is going to go away. It’s definitely not going to go away, and it is going to get better and better. It is possible to use these tools in a way that increases our efficiency, creativity, and productivity without compromising our credibility and authenticity; it’s just a question of finding where the line is. And we need to be aware of the potential for this to diminish our credibility and the value of our thought leadership if we’re seen to be using these tools in a way that lacks originality, that is really bland.
In today’s blog landscape, you see this kind of content that is obviously written by AI; you see certain telltale words, like “delve,” used over and over again, and that makes it really obvious, and that can hurt us. So we need to be aware of that. But AI can be used as an assistant in the creative process, not replacing our expertise but really helping us to generate and create great content. So I think it’s really important, and I’m glad that you’re listening this far, that we develop AI literacy and figure out how to use these tools. They are brand new, so there is no manual for them; even the people who are developing them don’t fully know how they work. They’re at the forefront of this, and if you’re listening to this, we are at the forefront of this too. We need to be the people who figure this out, so that we can then help the people who are looking to us for that leadership. So I think it’s important to explore and experiment with all of the various AI tools: figure out what you like using and what works best for your workflow, test different use cases, and see if you can use them to automate some of the really tedious tasks that you have and free up some of your time. See if you have some research data already in a spreadsheet or a database that you can export and use with something like AI Studio from Google, where you can upload a million tokens, which is about, I think, 30 business books’ worth of content. So you can upload quite a lot of data in there and ask it to look for trends, do some analysis, and pull out any key insights. Just experiment with it, and then experiment with some of the content creation tools.
But for that, I think that in the world that we’re in, which for me is the B2B, professional services, expert services space, we have to be a bit more critical about the quality of content than, for example, somebody in the B2C consumer space, because in a different market there is a different level of requirements for credibility and authenticity. In our space, we can still use these tools, and in particular we can use them to create first drafts; we just need to be wary of using them to create final drafts. One of the things I talked to Jonathan Stark and Erin Austin about is the writing process and ownership of the final products, and there are both legal and ethical questions here. US courts have held that purely AI-generated works are not protected by copyright, as copyright law is designed to incentivize and reward human creativity. But the lines become blurred when we, as humans, provide substantial input, direction, and editing to the AI-generated content. You should listen to both of those conversations, in particular for the legal perspective from Erin Austin, who is an intellectual property lawyer based in the US. I have to say I was reassured after talking with her about this. I had thought things were a lot more clear cut about not having ownership if the AI is used in certain ways, but it looks like it’s nuanced, particularly in the case that I mentioned, where we use something like dictation tools and then ask AI to tidy those up. It’s not clear cut; it’s still evolving, there are going to be new regulations and new rulings, and we haven’t figured it all out yet.
I think that if we, as the thought leader, stay as the driving force behind the content and the ideas, and we provide direction and approval, then we can justify claims to authorship. My friend Jocasta Mona, who was in episode two of this podcast, is coming from a different perspective: he thinks that saying we used AI to write a book is like saying we used a car to run a marathon. I think there’s a bit more nuance to it from my side, but I see where he’s coming from. So this is something we need to figure out, and it’s not straightforward. The level of human input and the specific use case of the content are crucial, because it’s not just about authorship; it’s about the intellectual property rights. That’s something Erin talked about: if we’re using this to create something like social media posts, it’s not such a big deal, but if we’re using it to create something that we want to earn money from, like a book, a framework, or a training course, then we need to be more wary of the use of AI. So again, listen to my conversation with Erin about that.

Alastair McDermott 21:43
Another conclusion that I’ve drawn from the conversations so far is the importance of a strong personal brand, because that is the only way we will differentiate ourselves and maintain credibility when there is so much AI-generated content everywhere. That personal brand is the human connection; it’s our personal story. Authenticity, human connection, and storytelling are what will make us credible and make us a personal brand. That will help us to retain our audience, build trust with people, and establish ourselves as authorities, which is what I’m really interested in. And we need to be wary of over-relying on AI-generated content, because that will come across as inauthentic and self-promotional. We want to build a strong personal brand that engages with our audience and provides true value. So I would think about your core values and your messaging as a thought leader, and develop a voice and a style that is consistent across all of your content. Then share your personal stories and your experiences. This is something I’m not so great at, and I need to get better at it: using storytelling in content to connect with people, human to human, at a deeper level. I think this is what will go beyond what an AI can do, because it can never create that personal brand. It will try to replicate it, and we have already seen AI-generated personas as online news readers and things like that. But on the personal brand, if you listen to my conversation with Mark Schaefer, and to some degree Christo, both of those episodes will be interesting from that perspective. Okay, so the future. What’s next? AI is evolving rapidly.
And what we’re seeing, particularly the bland, low-quality content that’s being created: just bear in mind, most of that has been created with AI tools that are about two years old. And a lot of it is being created with the free versions of these tools, not the premium versions, and certainly not the latest models. ChatGPT 5 will be coming out at some point soon, and that is going to be a very significant step forward. All of the AI tool providers are working on their new models, which are going to be a leap forward. So just bear in mind, this is going to get better and better. So what can we do? The first thing is that I think we absolutely have to learn about these tools, learn how they work, stay literate, and figure out how we can build them into our processes and into our day to day, because they’re not going to go away. We should also look at our personal brand, and build more of a personal brand if we don’t already have one. If you look through The Recognized Authority podcast and through my website, you’ll find a lot of content around personal branding and becoming known as an authority. I think that’s super important. I think live events and in-person interactions are going to become more important, because that’s the part that AI can’t do right now. Maybe in the future it will be able to, but people who do good live events and who have good presence are going to have the advantage here. I think that collaboration with AI assistants is going to become the norm, and figuring out how to use them and their data processing capabilities is going to lead to some really interesting things, probably some terrible things as well, but I am hopeful that it will lead to some very good things too.
It’s a tool; we need to be aware that it’s a tool. And like any tool, it can be used badly, incorrectly, or unethically. We just have to try to use it in the right way for ourselves and figure out what that is. We need to be adaptable. We need to figure out how this impacts ourselves, our clients, our families, our society. Having a growth mindset and continuously learning will be a crucial, key skill. So think about the things where AI can’t compete, like public speaking and building relationships, and then figure out where it is great, like increasing productivity and efficiency, and build that into our workflow. And try to stay up to date with where it’s going and what it’s capable of, because it is evolving rapidly. So I am hopeful. It is also scary; there’s a lot we don’t know, and we don’t know where it’s going to go. But the potential for positive impact is huge, so we need to embrace that part. By taking a leadership role in figuring this out, putting in place our own policies, having conversations with others about the ethics and about credibility, and then using it to spread positive messages, we can use this in a way where we can continue to inspire, educate, and share our knowledge, which is what we want to do as thought leaders and aspiring thought leaders. I think the future is unwritten, so we’ve got to write it. Embrace it; it’s not going to go away.

Voiceover 27:58
Thanks for listening to The Recognized Authority with Alastair McDermott. Subscribe today, and don’t miss an episode. Find out more at the recognized


🎙️+📺 SHOW: The Recognized Authority is the podcast & YouTube show that helps experts & consultants on the journey to becoming a recognized authority in your field, so you can increase your impact, command higher fees, and work with better clients.

📲 | SUBSCRIBE on YouTube:

👑 – BOOKS: Search “Alastair McDermott” on Amazon

Expert Authority Builder Series: 

👑 33 Ways Not to Screw Up Your Business Podcast: A Comprehensive Guide to Planning, Recording and Launching Your Business Podcast!

👑 Quick Win Content: How to Create a Single Piece of Engaging & Effective Content That Resonates with Potential Clients and Generates Quality Leads

👑 Efficient Content Creation: A Practical Guide to Consistently Creating High-Quality Content in a Busy Schedule

👑 How to Sound & Look Good on Zoom & Podcasts: Tips & Audio Video Recommendations for Consultants & Experts

🚨 – FOLLOW Alastair and The Recognized Authority ON SOCIAL MEDIA… 👇