Technology gurus warn at DES: companies that don't use AI by 2025 will go unnoticed in the marketplace
Digital Enterprise Show (DES), Europe's largest event dedicated to exponential technologies, kicked off today in Malaga by putting the ethical and human approach to the application of AI on the table. Leading technology experts explored the challenges, the benefits and the influence generative AI is having on society and business.
Mark Minevich, advisor to the UN and co-chair of AI for the Planet Alliance, was one of the most talked-about speakers of the day. In his session, he stated that the application of generative AI "must focus on human beings, ethics, sustainable models and the future of health". Along these lines, he pointed out that the tool "can serve in predictions and early warning systems. There was no early warning for covid. This is where granular data and information help".
Following ChatGPT's breakthrough in 2023, Minevich said that "2024 will be the year where we will focus on personalisation, improving predictive markets and productivity in supply chains", while by 2025, "we will scale current use cases". Minevich noted that while we are in a state of AI hype, we are also experiencing significant growth. "If you're not an AI-driven company in 2025, you won't matter", he said. According to the advisor, recent reports estimate that investment in AI will reach $151.1bn, which will also bring technical and legal challenges, such as copyright.
Another challenge he detailed is the integration of the technology in companies and the application of AI as a catalyst for growth, which requires training people. Finally, he advocated removing bias from these solutions and ensuring data privacy and security. "Innovation should drive society, not regulations. We need to balance, not over-regulate, and we need better management and governance". He also highlighted the importance of cultivating quality talent. "We need data scientists and engineers", he said.
Dan Nechita, chief of staff in the European Parliament to Dragos Tudorache, head of the Parliament's special committee on AI, spoke about regulation. Nechita shared the general vision behind the rules, saying that "it is about specifying how AI is used in Europe in ways that we do not accept", although "the ban is a tool of last resort".
The expert addressed the different risk tiers covered by the law, from high-risk uses, which concern the impact of AI on fundamental rights, to medium-risk uses such as deep fakes or chatbots, where AI can influence people. "Banned AI can be subject to negotiation for use by governments and the state. There are arguments for certain use cases. But we wanted to make sure there are limits. For example, facial recognition in public places needs approval for specific situations", he said. He also advocated international cooperation on flexible implementation models, which is why they are in contact with the US and the UK.
Following the final vote on the law a few weeks ago, he listed the next steps for the regulation, as full implementation "will take a few years". It will soon be published in the Official Journal of the European Union, and within 12 months the AI Office will be established. This will be followed by a transition period for products that already incorporate AI, in order not to overload the market. "The AI Act is future-proof, as many parts can be upgraded without having to go through the whole process again".
Millán Bezosa, former director of strategic alliances for Spain and Portugal at Meta, also welcomed the regulation as good news for protecting civil rights. "We now have the AI Directive, an umbrella directive in the EU, from which individual countries must adapt. You need a 'red button' in case things don't work as expected. You can't just steal content from people".
Responsible strategies and technology integration
During the first day, there was also a debate on how to integrate AI into companies to boost their operations effectively. Idoia Salazar, president and founder of OdiseIA, advocated a human-centered, responsible AI strategy that addresses data privacy and security. "We have to think about how and why AI systems are used. We need team members who know ethics and legislation, humanistic profiles, so that we can approach artificial intelligence in the right way".
For his part, Daniel Newman, CEO of The Futurum Group, also pointed to responsible use and to the opportunity that AI has brought as a "big reset". "Companies are now looking for new types of providers. It is changing everything". In this context, Iñigo Viti, Business Development Director, Data & AI at IBM, proposed using AI to stay at the forefront of digitalization. "You have to be able to deploy machine learning models, curating data, eliminating biases, and watching that it doesn't go astray. That framework is missing and that's what we need to focus on", he stated.
Taking his turn, Osmar Polo, CEO of T-Systems Iberia, stressed that we are at a moment of democratization and acceleration of AI, though companies and public administrations are moving at different speeds. "When responding to corporate problems, you need a platform and guidelines for your teams to test and fail with AI. Have a 'lighthouse' project for people to test and learn how to use the technology".