 
Why AI-assisted R&D is proving a big risk for entrepreneurs and innovators
Small businesses are embracing Artificial Intelligence (AI), taking advantage of its accessibility to optimise processes and unlock efficiencies. Resource-intensive and time-consuming tasks such as Research and Development (R&D) can be fast-tracked, shortening time to market and accelerating growth. However, entrepreneurs and innovators need to tread carefully if they want to maintain a competitive edge.
As previously reported by Startups, over a third of small businesses are using, or thinking about using, AI. Advances in generative AI have made the technology more available and affordable, and there’s an enthusiasm to unlock its value. A survey of 500 small businesses by VistaPrint found that 45% use AI several times per week, with many business owners viewing the tech as a supportive tool for achieving growth and operational goals.
Unsurprisingly, R&D is becoming a priority area of focus for using AI. Entrepreneurs are often in a rush to bring ideas to market, while ambitious startups tend to be hungry to scale at pace. The rather slow, laborious, and sometimes complex nature of R&D doesn’t always fit with the desire to move quickly. AI can overcome this by enabling rapid and extensive data analysis, predictive modelling and enhanced simulations, and automated research and trend identification.
AI offers many advantages for R&D, and it’s positive to see businesses trying it out, but it also presents risks that have to be carefully managed.
Facts and fakes
It’s worth remembering that, despite its transformational capabilities, AI is still in its infancy. This is partly why AI hallucinations occur, and they can be especially problematic for R&D. Hallucinations are instances of factually incorrect and/or misleading information that AI presents as though it were fact. This tends to happen because generative AI is built to predict patterns and probabilities, which can lead it to generate false information when its data is incomplete, biased, or flawed.
Hallucinations during R&D could mislead due diligence checks and misinform decision making. AI can contradict legislation and policies, or misinterpret sector-specific nuances and terminology. In such instances, inaccuracies could impede regulatory compliance. They may also compromise risk assessments and the understanding of risk exposure.
AI is increasingly being used during the registration of patents and trade marks (although searching software in one guise or another has been available for a number of years). The tech can quickly search databases to identify potential conflicts and verify originality. Research of this kind, when carried out correctly, forms a crucial part of trade mark and patent applications. AI hallucinations pose a significant risk here, with fake case citations, inaccurate data, and misrepresentation of prior art all compromising the validity of an application and the protection of Intellectual Property (IP). In recent months, there have been a number of high-profile court cases in which professional and non-professional advocates have been reprimanded by the court for misuse of AI.
Unintentionally sharing secrets
A lot of the debate surrounding AI and IP infringement has focused on the creation of AI-generated content. If AI models replicate substantial parts of copyrighted work, or produce content substantially similar to existing work, there’s a risk of copyright or trade mark infringement. When it comes to R&D, the risks of infringement run much deeper.
Businesses need to be mindful that when they are using some AI tools, they are often helping to train the tech. If AI is used to test the originality of a new idea, it’s possible that this idea will be shared with third parties and used to train future models. This could mean that businesses are inadvertently leaking trade secrets, even before they’ve had a chance to protect their IP. Even worse, innovators could find that they are contributing to the creativity of competitors, as AI learns from the research and development they are undertaking.
Further down the line, when businesses come to protect their IP through registration of a patent, they may come up against issues of proving ownership. Information inputted into AI during research and development may create ambiguity over who originated an idea, a position that becomes even more complex if AI has ‘contributed’ to the development of the idea and shared it as an output with other users. Whilst copyright subsists automatically, the question of originality and ownership still needs to be answered.
To reap the benefits of AI during R&D, businesses can manage the risks by checking the terms and conditions of AI tools. There will often be specific clauses relating to data input, as well as the ownership and licensing of output. Fully understanding the terms will help inform how the tech is or isn’t used for confidential research. Some AI tools allow users to opt out of training features and to preserve the confidentiality of inputted information.
In some instances, businesses may opt not to input any proprietary information into AI to avoid sharing this with unauthorised parties. Or, they may decide to keep a record of the data provided to AI, to help demonstrate ownership and origination.
Before using AI for R&D, businesses should consider how confidential and sensitive their information is, and whether they are willing to share it openly. If they want to protect secrets, they may want to avoid using open AI models. Similarly, AI may be most effective when used to enhance research and development methods rather than to replace them. This maintains human involvement and perspective, which can help avoid issues of data quality and AI hallucinations.
For more startup news, check out the other articles on the website, and subscribe to the magazine for free. Listen to The Cereal Entrepreneur podcast for more interviews with entrepreneurs and big-hitters in the startup ecosystem.