In recent years, artificial intelligence (AI) has revolutionized the world at every level; today it is part of our everyday vocabulary and routines.
If 2023 and 2024 brought the boom of Generative AI, with tools such as ChatGPT (text and chat generation) or Sora (video generation), 2025 will be the year of Agentic AI (AAI).
Relying on Generative AI (GAI) techniques, AI systems are beginning to acquire the ability to create plans and act autonomously in the digital world, and they are becoming popular as a new way to build Internet applications and services. While GAI systems need a human-provided prompt – instructions, questions, or context that steer the system toward the desired response – AAI takes a more autonomous, less user-driven approach, taking process automation to the next level. These systems can execute chained actions – from querying databases or using Internet search engines to booking online services – and make decisions along the way (evaluating the veracity of retrieved information, analyzing it critically, or formatting it to meet specific criteria or standards) in order to achieve goals autonomously and without human supervision.
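The chained actions described above can be sketched as a toy agent loop. This is only an illustration under simplifying assumptions: the `Tool`, `Agent`, `search_db`, and `book_service` names are hypothetical stand-ins, and a real agentic system would call an LLM to produce the plan rather than receive it ready-made.

```python
# Illustrative sketch of an agentic loop: the agent executes a chained plan
# of tool calls and records each intermediate result. All names here are
# hypothetical; real systems would plan the steps with an LLM.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]

def search_db(query: str) -> str:
    # Stand-in for a database or web search.
    return f"records matching '{query}'"

def book_service(item: str) -> str:
    # Stand-in for an online booking API.
    return f"booked: {item}"

class Agent:
    def __init__(self, tools: list[Tool]):
        self.tools = {t.name: t for t in tools}
        self.log: list[str] = []

    def act(self, plan: list[tuple[str, str]]) -> list[str]:
        # Execute the chained plan step by step, keeping a trace of results.
        for tool_name, arg in plan:
            result = self.tools[tool_name].run(arg)
            self.log.append(result)
        return self.log

agent = Agent([Tool("search", search_db), Tool("book", book_service)])
outcome = agent.act([("search", "hotels in Tenerife"), ("book", "hotel room")])
```

In a production system, each step's result would feed back into the model so it can revise the remaining plan, which is where the autonomy lies.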
An AAI system is based on cooperation among so-called agents, which use Generative AI models to operate in unstructured, highly uncertain environments and cooperate cohesively to achieve a common goal. They act according to the defined objectives, the cooperation methodology set by the designer of the application or agentic system, and the degree of freedom they have been given to organize themselves autonomously and optimize their processes.
Intelligent AI agents are still at a very early stage of development, although they are maturing rapidly. Their growing independence is due to advances in Generative AI, in particular large language models (LLMs) such as GPT-4, conversational systems such as ChatGPT, and LLM-based techniques for reasoning (such as chain of thought, or CoT), information retrieval (such as retrieval-augmented generation, or RAG), and knowledge organization (such as knowledge graphs, or KGs).
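To make the retrieval idea concrete, here is a minimal RAG-style sketch: rank documents by a toy relevance score, then inject the best matches into the prompt a model would receive. The word-overlap scoring and the document texts are illustrative assumptions; real RAG pipelines use embedding models and vector search.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The word-overlap
# ranking is a toy stand-in for embedding-based similarity search.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Score each document by how many query words it shares (toy ranking).
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Retrieved passages are injected into the prompt an LLM would receive,
    # grounding its answer in external information.
    joined = "\n".join(context)
    return f"Context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Agentic AI systems chain tool calls autonomously.",
    "RAG grounds model answers in retrieved documents.",
    "Satellites observe the Earth from orbit.",
]
query = "How does RAG ground answers?"
prompt = build_prompt(query, retrieve(query, docs))
```

The final generation step is omitted: in practice `prompt` would be sent to an LLM, which answers using the retrieved context rather than its parametric memory alone.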
Imagine using an AI assistant to manage your meetings. With persistent memory, the agent remembers your preferred times, your regular participants, and whether you prefer virtual or face-to-face meetings. The next time you need to schedule a meeting, the AI automatically suggests the best options without you having to repeat your preferences.
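A toy sketch of how such persistent memory could work: preferences are saved to a JSON file between sessions, so the assistant can propose a meeting without asking again. The file name and the preference keys (`preferred_time`, `mode`) are purely illustrative assumptions.

```python
# Toy persistent-memory sketch for a scheduling assistant: preferences
# survive between sessions in a JSON file. File name and schema are
# illustrative, not a real assistant's API.

import json
from pathlib import Path

MEMORY_FILE = Path("assistant_memory.json")

def load_memory() -> dict:
    # Read stored preferences, or start empty on first use.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def remember(key: str, value: str) -> None:
    # Persist a preference so future sessions can reuse it.
    memory = load_memory()
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory))

def suggest_meeting() -> str:
    # Use stored preferences instead of asking the user again.
    memory = load_memory()
    slot = memory.get("preferred_time", "any time")
    mode = memory.get("mode", "virtual")
    return f"Proposed: {slot}, {mode}"

remember("preferred_time", "mornings")
remember("mode", "face-to-face")
suggestion = suggest_meeting()
MEMORY_FILE.unlink()  # clean up the demo file
```

Production agents store this kind of memory in a database or vector store and let the model decide what is worth remembering; the principle is the same.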
Suppose you want to organize a networking event at your company. An Agentic AI could find the best available date, reserve spaces, coordinate with suppliers, and send invitations without you overseeing every detail. If something unforeseen arises, such as a speaker cancelling, the AI would autonomously restructure the event to minimize the impact.
Think of AI-powered customer service. If a user contacts support with questions about a product and mentions having already tried a previous solution, the agent will not repeat the same instructions but will adapt its response to the context. This improves the user experience and optimizes support efficiency.
Despite its enormous potential, the use of Agentic AI (AAI) presents significant challenges that must be addressed to ensure its safe and responsible development. In this new paradigm of intelligent software applications, the process or service is provided by a team of intelligent agents cooperating with varying degrees of autonomy, which raises the question of what level of autonomy is acceptable from a safety and liability standpoint. The accelerated growth of its use to create a new layer of applications and services on the Internet thus raises key questions of governance (including ethics and privacy), reliability, and control, which require adequate oversight and a regulatory framework to enable its responsible integration into society.
ARQUIMEA, from its research center located in the Canary Islands, runs an Artificial Intelligence Orbital that aims to respond to the great challenges we face as a society. Some of our research lines are: accelerating the search for and design of new drugs, the safe autonomy of intelligent agents, robotic platforms, and drones, and the implementation of AI systems on board satellites for Earth observation and security in space.
In addition, all ARQUIMEA Research Center projects belong to the QCIRCLE project, funded by the European Union and aimed at creating a center of scientific excellence in Spain.
“Funded by the European Union. However, the views and opinions expressed are the sole responsibility of the author and do not necessarily reflect those of the European Union and neither the European Union nor the granting authority can be held responsible for them.”