Artificial Intelligence is certainly THE buzzword of the year, but only a fraction of those talking about it have actually managed to use it in ways their customers or employees can appreciate. For now, most employees have to make do with external generative AI tools, while customers are left to deal with chatbots that are often far from perfect. So how do you implement AI that delivers real results and meets all legal and ethical requirements?
According to KPMG’s Generative AI study, 77% of business leaders from large companies believe generative AI to be the most relevant emerging technology they have ever seen, more important than 5G or extended reality. And since AI is likely here to stay, you need to decide: do you jump on the front or the back of the bandwagon?
As an “early adopter”, you could attract customers who actively seek out new technologies, but you would also be running the risk of having to deal with loads of tiny bugs before your tool is polished – and that can end up disappointing and discouraging even the tech geeks.
Just watch this video to get an idea of how much stress an imperfect voice bot can create.
As a later adopter, you can learn from others’ mistakes and create something more polished. But there is also a risk – that your tool, albeit better, won’t get as much recognition because customers will have already gotten used to something else.
“Development of smart tools and technologies is just one piece of the puzzle of AI integration – say, 5%, maybe even less. In reality, most time and effort – around 60% of what we’ve seen – will go into identifying and selecting opportunities where AI can be truly useful. And then you will also need to train and educate your employees and convince them to change the way they work so that they actually use these new tools,” says David Slánský, KPMG partner in charge of data and technology. Because your people, employees and customers alike, need to know how to use new technologies and understand their benefits if they are to create any real results. “You will use the remaining time to prepare data. Data must be balanced and meet a certain level of quality for AI, machine learning, or advanced data analysis tools to assess it correctly,” explains Slánský.
“Today, artificial intelligence is seen as something very progressive, thanks to the generative AI hype. But knowing what you want to achieve in customer or employee experience is crucial for successful AI implementation. That means you need to know your customers well. You need to know their needs. You must understand how faster processing of their requests or more personalized products and services will benefit them. A risk analysis is also key to make sure you don’t end up creating problems – for instance, problems related to how data are used – instead of excitement about new or improved capabilities,” says Lukáš Cingr, KPMG’s Head of Customer & Digital.
For most people, artificial intelligence is a black box. They have no idea how it works and how it could make their work easier. And until they have a basic understanding of how algorithms operate, what data they use, what output they create, and how it transfers to real life, there will be push-back, naturally. So, your first task is to convince employees that AI can be beneficial to them. Explain that it’s not here to replace them. But that someone capable of being more efficient thanks to AI might! Whether you choose to motivate your employees to embrace change by showing them the opportunities, or by lighting a proverbial fire under them is up to you.
When you do convince your employees that AI works, make sure the whole company is united in how these smart tools are perceived. Your employees need to see the tools through customers’ eyes to understand whether they really meet customers’ needs.
Keep in mind that having your employees on board with these new tools is key before you can move on to rule number two. Because behind every app that has reached a broad pool of Czech customers, like Lidl Plus or George by Česká spořitelna, are employees who know how to use them and who can teach customers to do the same.
To ensure a significant, positive impact of an AI tool on customer experience, it needs to benefit customers in the following areas:
Personalization
Time and Effort
Expectations
Integrity
Resolution
Empathy
Asking the right questions in each of these areas is key. Let’s take Time and Effort. Don’t ask customers if they prefer to communicate via email, a call centre, or a chatbot. Instead, ask how quickly they need an answer, what time of day is convenient for them, and which communication channel they would prefer in different situations.
We’ll give more examples of how you can apply AI to different pillars in the next chapter, so read on.
Keep in mind that customer education needs to happen outside of the digital channels, too, because these only allow you to reach the tech nerds – and it’s unlikely that they make up the majority of your client base. And if direct, in-person communication – at counters or in stores – is your strongest communication channel, your employees must be at the front line of customer education.
The EU is already working on harmonized rules for artificial intelligence: the AI Act. It is expected to be finalised in early 2024 (depending on how successful the current trilogue negotiations are), introducing new obligations for AI producers, importers, distributors, users, and other third parties. The draft proposes four categories of AI based on risk level, with corresponding responsibilities for each risk category, and it also sets out penalties for failure to meet the AI Act requirements – up to 6% of the company’s worldwide turnover.
But before the AI Act is approved, you can get a head start by focusing on self-regulation and having your own AI code of ethics, tailored to your tool. Make sure it covers:
Respect for human dignity: AI tools must not be used for discrimination, harassment, or human rights violations.
Transparency: AI developers must ensure transparency in the development process and in how algorithms are used. Users must be able to understand how AI works, and they must be informed when and how it’s being used.
Fairness: AI tools must be designed to minimize inequality and discrimination. Developers must make sure that the data used to train their AI is not biased and that the algorithm’s conclusions are fair.
Privacy: Artificial intelligence must respect its users’ privacy and must not misuse their personal data. Data should be managed and processed in accordance with personal data protection legislation.
Security: AI tools must be designed to withstand misuse attempts and cyber threats.
Responsibility: Smart tool developers and operators must be responsible for the results and consequences of their tools’ use and be able to deal with potential problems and errors.
Responsible AI will give you clarity on how artificial intelligence models work in your company and let you know whether they are ethical and in accordance with legal requirements. It will help you make sure that your algorithms work well, that they are secure, and that their outputs are accurate and good enough to be trusted and used in your decision-making processes.
KPMG Responsible AI is an evaluation and management method for machine learning models and artificial intelligence at all stages of their lifecycle – from design to implementation, operation, maintenance, and development. The method is based on the following requirements: integrity, resilience, fairness, and explainability. It evaluates AI from a total of nine different perspectives, from data and development to ethics, law, company culture, and strategy.