Martin Norris asks: Does your company need an Artificial Intelligence policy? 

AI is a big deal. The global adoption rate of Artificial Intelligence (AI) now stands at 35 per cent, and this is only set to grow, with IBM reporting that 44 per cent of organisations are working to embed AI into their processes and applications. Whereas once the public's interface with AI may have been largely restricted to search and recommendation engines, advanced AI tools are now becoming widely accessible to the mass market. So this raises the question: does your business need an AI policy?

If you joined us for our most recent event with in-Cumbria last month, you'll know we took a deep dive into the ethical considerations of AI with thought leaders from across the region. Among the topics explored, we discussed how most companies understand the importance of responsible AI practices, yet many organisations simply don't feel equipped to regulate how AI is used. Think back to the release of ChatGPT in November 2022: to the uninitiated it seemingly came out of nowhere, and the plethora of platforms that have followed has left many of us struggling to keep up. It's unsurprising, then, that two thirds of companies report lacking the skills and knowledge to accountably manage the use and trustworthiness of AI within their business. For this reason alone, the very least we can do is implement an AI policy to mitigate some of the risks. Those risks include:

Bias and Discrimination 

AI can perpetuate human bias because it mirrors the leanings within the data it interrogates. Worse, it can sometimes intensify that bias by drawing on historical inequities and outdated modes of thought. Essentially, rather than improving upon human decision making, AI can scale up some of the more problematic and discriminatory decisions we'd rather weed out in 2023. For example, Reuters reported that Amazon scrapped its hiring algorithm after finding it favoured applicants based on language predominantly found in male CVs. Because men had dominated the tech industry across the previous decade, and had consequently submitted the majority of job applications, the algorithm observed that pattern and taught itself that male candidates were to be preferred over their female counterparts. It became inherently gender biased.
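
For the technically curious, here's a minimal, hypothetical sketch of the mechanism in Python (using scikit-learn): train a simple text classifier on hiring decisions that reflect a male-dominated history, and it learns to penalise words associated with female applicants. The CV snippets and outcomes below are invented for illustration and bear no relation to Amazon's actual system.

```python
# A minimal, hypothetical sketch: the CV snippets and hire/reject labels
# below are invented, and this is not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy CV snippets with historical outcomes (1 = hired, 0 = rejected).
# The labels reflect a male-dominated hiring history, not candidate quality.
cvs = [
    "captain of the men's rugby team, python developer",
    "men's chess club member, software engineer",
    "women's coding society lead, python developer",
    "women's chess club captain, software engineer",
]
labels = [1, 1, 0, 0]

# Train a simple bag-of-words classifier on the biased history.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weights: the token "women" ends up with a negative
# weight, i.e. the model has taught itself to downgrade CVs containing
# it -- bias in, bias out.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
for word, weight in sorted(weights.items(), key=lambda kv: kv[1])[:3]:
    print(f"{word}: {weight:+.2f}")
```

The point isn't the toy model; it's that nothing in the code mentions gender, yet the bias emerges from the historical data alone.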

Transparency and the Bottom Line 

Of the IT professionals polled by IBM, 85 per cent agreed that consumers are more likely to choose, and purchase from, a company that is transparent about its AI technology. However, 61 per cent of organisations stated that, at present, they wouldn't be able to fully explain their AI-powered decisions. If you can't interpret and understand the decisions your AI is making, not only is it difficult to be sure it's making the correct call (AI will sometimes get it wrong, don't forget), but you're also directly eroding consumer trust and ultimately damaging your bottom line.

Privacy and Confidentiality 

AI databases are often filled with confidential employee and/or customer data. You need to know that this data is secure and that it won't be inappropriately leaked by an AI algorithm. There are also ethical considerations around how appropriate it is for AI to use personal information, and where we draw the line on intrusion into privacy.

Values and Culture

Does AI-generated content truly reflect your company's values? It might sound a lot like you, but is it you, or just a poor copy? We all bang on about company culture and how important it is to us, our businesses, and our colleagues, but could all that hard work be undone by our reliance on AI? AI-produced material can lack authenticity; it can feel robotic and impersonal, and that is a major turn-off for customers, internal and external.

So, do you need an AI policy? In short, yes. Absolutely.