
AI Beyond the Hype

May 13, 2024

The podcast that helps experts & consultants on the journey to becoming a recognized authority in your field, so you can increase your impact, command premium fees, work fewer hours, and never have to suffer a bad-fit client again!

Are you overwhelmed by the hype surrounding AI? Do you struggle when it comes to figuring out the practical applications of AI in business?

In this insightful solo episode of The Recognized Authority podcast, Alastair McDermott delivers a presentation to a group of business leaders, simplifying the principles of AI and providing actionable strategies for using this transformative technology in your business.

Drawing from his expertise, Alastair tackles the questions on every business leader’s mind:

  • How can we use AI to increase productivity and streamline repetitive tasks?
  • What are the ethical and legal considerations when using AI-generated content?
  • How can we stay ahead of the curve and future-proof our business with AI?

Through this compelling presentation, you’ll gain valuable insights:

  • AI is no longer a futuristic concept; it’s reshaping industries now, and businesses need to start implementing it to stay competitive.
  • AI can significantly increase productivity by automating repetitive tasks, analyzing research, reformatting content, and allowing experts to focus on their core expertise.
  • There are ethical and legal concerns around AI-generated content, such as potential biases, stolen training data, privacy issues, and questions of authorship and intellectual property.
  • Businesses should create an “AI-ready” culture by promoting AI literacy, exploring use cases, implementing policies, and continuously evaluating and experimenting with AI tools.


Alastair also shares practical action steps for business leaders:

  • Appoint dedicated “AI teams” within different departments to explore AI use cases for the business.
  • Invest in AI education and training for employees to promote AI literacy.
  • Develop clear policies and guidelines for responsible and ethical use of AI, particularly concerning privacy and GDPR.
  • Continuously evaluate and experiment with new AI tools and frameworks as the technology rapidly evolves.

Don’t miss this opportunity to separate hype from reality and unlock the potential of AI for your business. Tune in as Alastair McDermott shares his wisdom on navigating the age of artificial intelligence through this informative presentation.

➡️ Get your free guide to getting started with AI at

➡️ Book a call with Alastair at

Show Notes

Guest Bio


Alastair McDermott 0:00
I was asked to speak to a group of business leaders recently about AI and the practical implementation and going beyond the hype, which is where I think we’re at now. There is a lot of hype around AI, but I think that a lot of that hype is warranted. This is something that we genuinely need to pay attention to now.
AI is no longer a futuristic concept. It’s actually reshaping industries right now, and it’s incumbent upon us as business leaders to think about how we are going to use AI in our businesses and to start to implement, because we’re going to get left behind by our competitors if we don’t think about AI and don’t use AI. So this is something I think is very important to look at right now.
And I think the timing is important because the pace of acceleration is incredible. So this is my presentation to a local business group here in Ireland and I’ve got the slides available. If you visit, you can get a free guide to getting started with AI.
I’m going to be updating that guide and I’ll be sending that if you sign up for the email list there. I’ll also be sending the slides and a video of this presentation to that list as well, so if you visit LearnAI.Guide you’ll get all of that.
And now here is my presentation, AI Beyond the Hype.

So one of the things I've noticed about AI is there's a lot of hype. And I think that the hype is damaging, because the possibilities are genuine and real and can be implemented even today in our businesses. There was a quote that came out from Sam Altman, who is the CEO of OpenAI (they're the ones who make ChatGPT), and this has now been referred to as "the quote" by a lot of people, because it talks about what AI is capable of doing, in both a positive and a negative sense. He said it will mean that 95% of what marketers use agencies, strategists and creative professionals for today will easily, nearly instantly, and at almost no cost be handled by AI. The implication is that 95% of the work that marketers, and potentially knowledge workers generally, are doing could be handled by AI. So for people in professional services, knowledge workers and experts, AI is coming for us first, which is interesting and challenging, and we need to think about that. That's the context to all of this.
There have been some recent developments, and I just picked a few at random here; there are so many things happening. There's a system called AlphaFold, and what it has done is use AI to predict protein structures. It used to be the case that researchers would need to test proteins experimentally, a process that would take months or years. AlphaFold has already modeled and mapped out essentially every known protein, which means researchers can now just look up the database to see whether a protein is possible. That is revolutionizing science. And alongside those capabilities, Microsoft came out with a tool recently where they could clone a voice.
Voice cloning has been done before, but they could clone a voice realistically from just three seconds of audio, which is incredible, and also scary in the sense of creating fakes and that kind of thing. The World Health Organization has talked about AI producing more accurate medical diagnoses. That's a real positive, because misdiagnosis is an issue. One thing they mentioned was being able to diagnose COVID-19 from the specific sound of a cough; they could tell whether it was COVID-19 or just a regular cold or flu from the audio alone, which is pretty incredible as well. There's a tool called Sora from OpenAI, which they haven't released to the general public yet, but it looks very powerful: it can generate realistic videos from a simple text prompt. That's coming down the line, and it will be pretty cool. Imagine being able to create ads for your products for Facebook, or a bedtime-story video for your kids, or educational videos about your products. The possibilities are endless.
So that's the stuff that's happened recently. As for what's coming: there are new models, and GPT-5 is probably already ready. GPT-4 is the one we're using at the moment; it was a big step up from GPT-3, which was the one that opened everybody's eyes to the fact that, wow, this is actually a big deal. GPT-5 is the next version after GPT-4, which is itself quite advanced; everybody else is measured against GPT-4, and they say, well, this is a GPT-4-equivalent model, for example the latest ones from Google. The expectation is that GPT-5 is going to be a big advance. I think they're holding back because it's so advanced; they're kind of getting us ready for it. That's something Sam Altman has mentioned before, and I'll give you a quote from him later, but they sometimes deliberately slow down the pace because they want to allow society to adapt to the capabilities of these things.
AI agents are something you're going to hear an awful lot more about over the next year or two. The way we typically use AI right now is by having a chat with a chatbot: typing in, or speaking (you can speak to it, by the way; if you install the app on your phone, you can literally have a conversation with it). AI agents are more autonomous AI, where you can set them up to do multi-step actions. For example, I might be able to say to an agent: I want to travel to Majorca and stay in Palma, so book me the flights and the hotel; here's access to my email, do all the verification you need to do; here are the dates I want, or pick the dates off my calendar. Agents are going to be able to do that at some point in the future, where we'll just give them tasks and have them go and do those things. So you can see you'd be able to automate complex tasks and repetitive tasks.
Robotics is also getting a big shot in the arm of investment because of the development of AI. OpenAI have actually partnered with a company to put ChatGPT, including the audio version where you can have a conversation, into a physical robot, and they were able to get the robot to pick up an apple and put a plate into a drying rack. So AI reasoning is getting added to robotics, and robotics itself is getting big investment too, because there are trillions of dollars being put into all of this development.
Ultimately, down the line, there's a thing called AGI, artificial general intelligence. That's the thing that is both incredibly scary and potentially very positive. AGI is where an AI gets to the stage where it is as smart as a human being and can reason in the same way. We don't know when it'll get there, but it probably is going to get there. The issue is that it could lead to a complete change in how we think about work, education and society, because if AI can do all of the knowledge work, everything that we do, how does that impact our education system? How does that impact our society? Do human beings just sit about on the couch, scratching our ass and looking at TV? Is that the future of our race? There are all sorts of really scary questions there. And the flip side is, well, maybe it will create a lot of new jobs, and maybe people will be able to work less. There are all sorts of different implications of that as well.
So, one example that OpenAI put out as a case study is Moderna (sorry, I don't know why that screen is flashing; I hope it's not flashing for you). Moderna is the company that makes the vaccines, and they have implemented ChatGPT and OpenAI in their systems. They're finding that a team of a few thousand can perform like a team of 100,000. They've got maybe three, four or five thousand employees, something like that, and they're saying they can have the same productivity as a team of 100,000, which is pretty incredible. And after only two months of adoption, just 60 days, employees were having 120 chat conversations per week with the bots on average. That's around one conversation every 20 minutes, and the bots are helping them do their jobs better. For example, one thing they highlight in the case study is a legal compliance bot, their own customized version built just to scan legal documents. They give it a contract and ask it for a summary, calling out the things they need to look for. And this is stuff we can all do, by the way; it doesn't have to be Moderna. You can do this right now if you want to. That's the power of these tools in terms of productivity. And then, do you still need to call a lawyer? Well, that's a good question. The issue, for lawyers and for all knowledge workers, is that a lot of the grunt work we do as experts may be taken over by these systems as they learn more and get smarter.

Alastair McDermott 9:35
So here's another quote from Sam Altman: "We've been through massive technological shifts, and a massive percentage of the jobs that people do can change over a couple of generations. And over a couple of generations, we as a society absorb that. Every technological revolution has gotten faster, and this will be the fastest by far. The part that I find potentially a little scary is the speed with which society is going to have to adapt." We have seen humanity change over time. With the development of cars, there would have been farriers and blacksmiths and carriage drivers and people managing stables, and they all lost their jobs because of the introduction of cars. In the same way, this is going to change things. The only issue is that it's happening very, very quickly, and I think that's the thing people are both fatigued by and scared by: how does this impact us?
So I started a podcast called The AI-Powered Thought Leader, because I was interested in AI's impact on thought leaders in particular. I'm probably going to rename it, by the way, to The AI-Powered Business Leader, because I think the general implications in business are very interesting. I've spoken with a lot of thought leaders and really smart people. I decided I wanted to talk to a lawyer about the legal issues, so I spoke to a lawyer about the legal perspective; she's an IP lawyer, so she had a good perspective on that. I've spoken with people like Mark Schaefer, Christo and Jonathan Stark. Mark Schaefer is the author of, I don't know, 10 or 15 books; he's a leader in marketing. Christo has maybe 4 million people following him on YouTube.
So that's the type of people I've been talking to, just to get their perspective on this. Here are some takeaways from speaking with some really smart people about how we can implement this in our businesses.
AI is increasing our productivity. It's doing that by helping us automate repetitive tasks; it can help us analyze research and reformat content, and it allows us to focus on our core expertise. That's the really interesting thing about this, and it's the positive side: we can increase our productivity and do more, without having to work on the grunt work. The repetitive tasks are a big one. For me personally, the analyzing research is really interesting. I did a survey of over 1,000 consultants a couple of years ago, and when I originally did it in 2019-2020 I read through all the data and pulled out all of the insights manually, myself. Recently I took all my survey data and fed it into Google's AI Studio. AI Studio is interesting because there's a beta version of it we can access that takes up to 1 million tokens, which means it can read a lot of data. One of the big limitations right now with these systems is the amount of information you can feed into them; usually, the more information you can feed in, the better the output. Google AI Studio is really interesting because you can put in a million tokens, which is about the equivalent of three or four business books' worth of input, and ask it to process that. So I was able to give it all of the survey data from this big survey and ask it to pull out insights. It pulled out the same insights that I had spent hours, days, weeks extracting manually, and it did it in 25 seconds. It was incredible.
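The million-token context mentioned above is the exception; most models accept far less, so a practical question is how to fit a big dataset like survey responses into a smaller window. As a rough illustration, here is a minimal Python sketch that batches responses under a token budget. The 4-characters-per-token ratio and the budget figure are illustrative assumptions, not the tokenizer of any particular model.

```python
# Rough sketch: batch survey responses so each batch fits a model's
# context window. The ~4 characters-per-token ratio is a common rule
# of thumb, not an exact tokenizer; the budget below is illustrative.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def batch_responses(responses, max_tokens=8000):
    """Group survey responses into batches under a token budget."""
    batches, current, used = [], [], 0
    for resp in responses:
        cost = estimate_tokens(resp)
        if current and used + cost > max_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(resp)
        used += cost
    if current:
        batches.append(current)
    return batches

# Each batch could then be sent to the model of your choice with a
# prompt like "Pull out the key insights from these survey responses,"
# and the per-batch results combined afterwards.
responses = ["I struggle with pricing my services."] * 2000
batches = batch_responses(responses, max_tokens=1000)
print(len(batches))
```

With a long-context model like the AI Studio beta described above, the whole dataset may fit in one call; this batching step only matters when it doesn't.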
So, doing things like analyzing research and reading documents, and reformatting content, stuff that we've already done. For example, I'm talking to you now and recording this, so I can take the transcript of the recording and ask it to reformat that as a blog post. I find that really interesting: it can instantly reformat content, and that gives me time back and lets me take something I've done and reuse it in different ways. So that's the productivity side of things.
One of the issues I've found, particularly in terms of thought leadership, is how we stay authentic. If we're using AI for writing, we have to be careful that we don't compromise our own credibility. For example, if you post something on LinkedIn that you got AI to write for you, and people can see it's clearly written by AI (sometimes it's easy to tell by the word choices, and even the formatting), that could potentially compromise your credibility as an expert, because people see that you didn't write it yourself. So where is the line for using AI to generate things for us? When we add our own thoughts into it, where does that line lie? It can come across as clearly not in your voice, so you have to be careful about how you use it to generate content.
There's also potentially a danger of becoming over-reliant on these systems. Now, I've written a couple of books: I've got two regular-size books and four or five booklets, and they're all up on Amazon. One of the things I've noticed is that the process of writing is how I develop my own thinking. When I write, it helps me formulate my thinking, and it helps me think better, because I'm trying to explain a concept.
And so if we're using AI to do more of our writing for us, we have to be careful, because it may take away from that thinking we normally do. So be careful of over-reliance as well.
Another issue is authorship, credit and intellectual property. First off, the concept of being an author is being redefined. My friend Joe Casabona and I had a phone conversation about this about six or eight months ago. He said that using ChatGPT to write a book, and then saying that you wrote the book, is like saying that you used a car to run a marathon. It's an interesting comparison. So where is the line in claiming credit for writing something? If I sit down and type something out, then I've clearly written it. If I use a dictation tool while I'm driving to Dublin, record that dictation, and then say, "please transcribe that", and it takes my words verbatim, you'd probably still say that I wrote it. What happens if I dictate it and then say, "tidy that up, pull out the key points, and reformat it as a chapter of a book"? Now the line is getting blurred: did I write that, or did I not? I provided the ideas, but I didn't write the specific words that are there. The line gets more and more blurred the more you use these assistants.
And then there are legal and ethical issues around ownership. In the US, the courts have ruled that works generated by AI are not copyright protectable. They're not copyrightable. And again, all of this is not as simple as it sounds on the surface.
And it's worthwhile listening to the conversation I had with Erin Austin if you're interested in that side of things. If we put ideas into an AI and get it to give something back to us, the result may be somewhat protected by copyright, but it's not a straightforward thing. And then there's claiming credit. Like Joe said, you can't claim credit if the AI wrote it for you, but what's the definition of "wrote it"? Because what if you're putting a lot in? I'll give you an example. I have a custom ChatGPT bot, a Custom GPT, which is where you create your own version and put your own text into it. In one of the ones I've created, I put in the text of all of my books and all of my blog posts, all of my content, and I taught it how to write in my voice and my writing style. Now, if I use that bot to write something, can I claim ownership of what it writes, because I provided the input? Where's the line there? It's interesting, and it's all challenging. There are also ethical issues. For example, there may be inbuilt biases,

Alastair McDermott 18:58
stolen training data, and then there's privacy and GDPR. First, inbuilt bias. One of the issues is what these models are trained on. These large language models are typically trained on billions of pages of text, and there are inbuilt biases. For example, when these systems first came out (I think this has since been fixed), if you said "give me an image of a beautiful woman", then 100% of the time, if you asked over and over again, it would give you a blonde white woman. That's an inbuilt bias, because what it was trained on, what our media primarily consists of, carries the idea that a beautiful woman is a white woman, and that's just for cultural reasons. And that's only one example: there's potential racism, political leanings, all sorts of biases that could be present in training data. So we need to be aware that there could be biases in the systems we're using, and be careful about how we use them.
A second thing is stolen training data. Basically, when they trained these systems, they took entire libraries' worth of books and used them for training. They took, for example, the entire archive of the New York Times, put all of the articles in there, and the New York Times sued OpenAI over it. I'm not sure whether that lawsuit has concluded yet. But those are just examples. So if you're an author, let's say Terry Pratchett or something like that, and somebody said, "write a story in the style of Terry Pratchett, and here's what I want you to do."
Now, Terry Pratchett is no longer with us, but his estate could say, hey, you can't train on all of Terry's books and then use them to create a book based on that, because that's his intellectual property. And the question is: is it? That's what the lawyers are fighting about.
There are also privacy issues, and GDPR is one of the reasons why, in Europe, I can't use Google AI Studio, the tool I just mentioned, unless I use a VPN, unless I pretend I'm coming in from a US IP address. One of the reasons for that is that we have a lot of privacy protections under GDPR, and the AI companies are still trying to figure out how to handle that. So right now they're just blocking access to some of the tools; we don't get the same access as if we were coming from the US. And that's because of privacy and GDPR.
For example, I quite often use an AI notetaker when I'm doing calls with my clients and with other people. The notetaker I personally use shows a notice, so when somebody joins they see "Alastair McDermott is recording this for note-taking purposes". That notetaker takes our conversation, processes the audio, transcribes it and puts it into its database, which I can then ask for things like a summary of the call. Now, if I'm having a call with a client about their website, and they tell me about their website issues, and maybe something about their business, maybe something personal, all of that has gone into the APIs, been processed, and gone into the AI's memory, and the AI companies may train their next models on it. There are serious privacy implications if we record conversations and put that information in there.
For example, if I get an email from a client, let's say a website inquiry, and I copy and paste that email into the AI system and say, "process this and give me a response", or "let's set up a meeting", or "tell me the important points they really care about", I can do lots of different things with that information once I put it in. But if I don't take their name or their business name off that email before I feed it in, the AI may keep a copy of it. That's something we need to be careful of: the privacy implications. Those are all things we need to think about.
So here's what an AI-ready business might look like. One thing I suggest every business does is become AI-ready in terms of culture and structure. Best practice, I think, is to pick people in different teams in your business and appoint them, and say: look, you are the AI team; we want you to explore how we can use AI in the business. And give them the tools to do that. Promote AI literacy. This is one of the most important things for all of us, whether we're solo business owners or have large businesses: investing in education and training, for ourselves and for employees, so we have a basic understanding and know what the potential obligations are, and creating a culture where we're being innovative and responsible about how we use AI, aware of things like the privacy and ethical issues. A place you can start is to look at use cases for specific things: where can we be more efficient, how can we increase our productivity, how can we give a better customer experience using AI? Start looking at case studies and ways you can use AI today, and start adopting its use.
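On the point about taking the client's name and business name off an email before feeding it into an AI tool: part of that can be automated. Here is a minimal Python sketch using a few regular expressions; the sample email, names and patterns are purely illustrative assumptions, and real PII detection needs considerably more than this.

```python
import re

# Minimal sketch: scrub obvious identifiers from a client email before
# sending it to an AI tool. Real PII detection needs far more than a
# few regexes; everything below is purely illustrative.

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str, names=()):
    """Replace known names, email addresses and phone numbers."""
    for name in names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical client email, redacted before it goes anywhere near an AI.
email = ("Hi, this is Jane Murphy from Acme Widgets. "
         "Our site is down - call me on 087 123 4567 "
         "or reply to jane@acmewidgets.example.")
clean = redact(email, names=["Jane Murphy", "Acme Widgets"])
print(clean)
```

A sketch like this only catches identifiers you already know about or that follow an obvious pattern; anything subtler still needs a human read-through before pasting.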
And I would strongly suggest having a policy, having guidelines about how you use these tools; it's particularly important where we've got things like GDPR and privacy issues. One of the issues that Erin Austin, the lawyer, talked to me about is that when we're creating content, there's a big difference between content we intend to sell and content we don't. For example, if I were to use AI to help me write a book that I want to sell, that's a very different use case, in terms of the legal implications of ownership, than using it to write a LinkedIn post. I'd be much more comfortable using it to write a LinkedIn post; I'd be much more careful about using it to write a book I wanted to sell. So create policies around how you use it, and think and talk to people about those policies.
And then continuous evaluation; this is really important. The AI space is moving rapidly, so we have to continuously evolve, and continuously evaluate and experiment. There's no place for heads in the sand here. We have to keep looking at these things and talking to people about them.
So that's it for me. I would love to talk to anybody. I'm particularly looking to do some small case studies with businesses who are interested in adopting AI, and I'm happy to talk to anybody about it. That's my email address, AMD at. If you scan the QR code, it will set up a virtual meeting with me. I'd love to chat with anybody anytime, but specifically about AI, I'm really interested in talking to people!