How to use ChatGPT ethically
There is no doubt that we are in the midst of a major turning point for AI. For years, virtual assistants like Alexa and Google Home were widely scrutinised for their inability to process complex tasks and their frequent inaccuracies.
The latest generation of chatbots, like ChatGPT, changes this. Where previous models produced outputs that were usually only suitable for use after a little editing, today's models can generate deceptively human-like (though not faultless) answers to questions. They can also draft emails, blogs and proposals, write poetry, summarise lengthy documents and, to the alarm of teachers everywhere, churn out essays. The result is a huge opportunity for businesses to automate key processes and streamline and enhance their operations. This is even more pertinent for startups looking to cut costs and gain efficiencies amid ongoing economic turbulence.
But, as with all major innovation leaps, generative AI carries some level of risk alongside its remarkable potential for good. Ultimately, these technologies are only as accurate as the information they are given, and, of course, the world is constantly changing and information is constantly evolving. This leaves scope for inaccuracies and misinformation. One need only look at Google Bard's recent debut, where the chatbot confidently gave a wrong answer to a question about the James Webb Space Telescope, to see this in action. Generative AI also runs the risk of inherent bias and discrimination, and experts have expressed concerns about a new wave of disinformation and deepfakes as the technology becomes more widely used. For business leaders, this naturally raises a lot of questions. If generative AI is an undoubtedly powerful and rapidly developing field, how can it be used both effectively and ethically?
The first thing to do is put everything in perspective. Much of the discussion around AI is speculation and hype. Impressive as they are, ChatGPT and other generative AI apps are currently a long way from being able to do even a small percentage of what humans are capable of, and they are far from flawless. The risk of Skynet being created tomorrow is negligible. So when we speak about the average business using generative AI ethically, we are not talking about big, world-changing risks; we are talking about the small, complex actions businesses will regularly take that, if mishandled, could have undesirable consequences. These decisions can soon stack up to have big implications for a business and for society at large.
Because AI is developing at such a pace, businesses simply can't rely on regulation to guide them; the law cannot keep up. We saw earlier in the year that the EU's AI Act had to be hastily redrafted because legislators were blindsided by the launch of ChatGPT. This pace of development also means that creating your own ethical framework needs to happen now, even if you do not currently have plans to use generative AI. The longer you delay, the more difficult it will be to create an ethical decision-making culture within your organisation.
First steps
Data ethics is not a checklist of dos and don'ts. It is the creation of guardrails and principles that underpin an ethical culture, one that equips decision-makers with the knowledge and expertise to make the right judgement calls when presented with challenging moral issues.
A company's approach to ESG plays an outsized role in determining whether it will have the tools to use data ethically, for a very simple reason: a diverse team can draw on the full range of its experiences and perspectives to anticipate how your use of data will impact different groups. One of the clearest risks of using generative AI is that it will be biased against a particular group of people, an issue that has already caught out many companies in the way they use data and design algorithms.
Accountability and transparency
The next step is to look at the structures and policies that will enable ethical decision-making to happen in practice. Accountability is a key aspect of this. You need someone who is ultimately responsible for holding your organisation to its self-stated ethical standards.
There is some debate as to who is best placed to take on this task. For some companies, that may be the Chief Data Officer, though this carries a potential conflict of interest (they would, in effect, be marking their own homework). Others choose the Chief Compliance Officer, but ethics goes beyond legal compliance. Personally, I think the Chief Executive will often be the most logical fit, especially for smaller companies. Whichever individual oversees your ethical policy, it is essential that they are empowered, both to make critical decisions and to hold colleagues to account should they fail in their ethical responsibilities.
Aligned with accountability are transparency and trust. Your team and your customers or clients need to know how and why you do and do not use AI for particular purposes. Communicating your values and decision-making in clear, understandable language is key.
Ethical stance
Putting pen to paper to outline your ethics is the relatively easy part of this endeavour. Your ethical stance should be in harmony with your company values and framed so that it supports your organisation rather than impedes it. Think of it from the perspective of 'what you should do' rather than 'what you shouldn't do'. There are resources online that can help support you on this journey.
For example, we have collaborated with Pinsent Masons and a host of data academics and experts to create a free ethics guide that provides a lot of practical advice.
Education is key
It is impossible to comprehend the ramifications of generative AI without a basic understanding of how it works. This knowledge needs to be shared throughout an organisation for a few basic reasons. First, nearly every member of your team will end up using AI, or the outputs of data, to undertake day-to-day tasks. Second, keeping this expertise siloed in your data team creates bottlenecks and runs the risk of that team 'marking its own homework' with little oversight. Finally, innovation can come from any part of your organisation; team members will be better able to responsibly apply generative AI in new and creative ways if they have been upskilled on data.
It is also important to note that training is not a one-and-done exercise. Knowledge can easily be lost or become obsolete. Running annual, or ideally twice-yearly, training sessions for your team will help to ensure your culture is maintained.
Certainly, generative AI offers huge potential for startups to work smarter, not harder, and to reduce costs at a time when, perhaps, they have never needed it more. But despite the headlines, it is important to remember that nothing is perfect, not even ChatGPT. For all its phenomenal ability to generate human-like text, its scope for inaccuracy, bias and disinformation entails a level of risk that every business using it has a responsibility to mitigate through the right policies, procedures, education and training. In this way, startups can keep their finger on the pulse of the AI revolution while maintaining their ethical integrity.