Agentic AI: autonomous task automation agents
A forecast for 2025 is that Agentic AI will expand its range of applications. These agents manage workflows and routine tasks autonomously.
Additionally, they can make dynamic decisions, adapt their actions to context, and delegate subtasks to other tools.
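As a rough illustration of this pattern, the sketch below shows a minimal agent loop in Python: the agent works through a plan, picks a tool for each subtask, and adapts when a step cannot be handled. The tools and the fixed plan are hypothetical placeholders, not any specific framework.

```python
# Minimal sketch of an agentic loop: work through a plan, pick a tool per
# subtask, and adapt when a step cannot be handled. All tools are placeholders.

def search(query: str) -> str:
    return f"results for '{query}'"          # stand-in for a real search tool

def draft_message(topic: str) -> str:
    return f"Draft about {topic}"            # stand-in for an LLM-backed writer

TOOLS = {"search": search, "draft": draft_message}

def run_agent(goal: str) -> list[str]:
    # A real agent would ask an LLM to decompose the goal; here the plan is fixed.
    plan = [("search", goal), ("draft", goal)]
    results = []
    for tool_name, arg in plan:
        tool = TOOLS.get(tool_name)
        if tool is None:                      # adapt: skip steps with no matching tool
            results.append(f"skipped unknown tool '{tool_name}'")
            continue
        results.append(tool(arg))
    return results

if __name__ == "__main__":
    for step in run_agent("quarterly report summary"):
        print(step)
```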
However, challenges remain: criteria for control still need to be developed, and responsibility in case of errors must be defined.
This concern increases as agents begin to execute more complex tasks, such as composing and sending messages automatically.
Reasoning AI: logical reasoning in AI models
Reasoning AI stands out by generating solutions only after weighing various options and discarding inadequate hypotheses. This results in higher-quality responses.
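The "consider several options, discard the weak ones" idea can be approximated in code by sampling multiple candidate answers and keeping the best-scored survivor. The sketch below is only an illustration of that pattern, with hypothetical generate_candidates and score helpers; it does not describe how any specific model reasons internally.

```python
# Sketch of generate-then-filter reasoning: produce several candidate
# solutions, discard inadequate ones, and return the best survivor.
# Both helper functions are hypothetical stand-ins.

def generate_candidates(question: str, n: int = 5) -> list[str]:
    # A real system would sample n answers from a model; here they are canned.
    return [f"candidate {i} for: {question}" for i in range(n)]

def score(candidate: str) -> float:
    # A real system might use a verifier model or tests; here, length as a proxy.
    return float(len(candidate))

def best_answer(question: str) -> str:
    candidates = generate_candidates(question)
    # Discard "inadequate hypotheses" below a threshold, then keep the top scorer.
    viable = [c for c in candidates if score(c) > 0]
    return max(viable, key=score)

print(best_answer("How should we route deliveries this week?"))
```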
On the other hand, the reasoning of these models is not transparent, which makes external validation difficult. This problem is known as the “hidden chain of thought”.
Furthermore, the time and costs associated with using Reasoning AI limit its application to specific tasks, such as individual research.
General Artificial Intelligence (AGI)
AGI remains a distant and hypothetical goal. There is no guarantee that the current trajectory of AI will lead to this general intelligence.
Many experts believe that AI specialization will be more important than developing a general intelligence comparable to human intelligence.
Small Language Models (SLMs)
Small Language Models (SLMs) are gaining ground as an alternative to large AI models. Because they are tailored to specific use cases, they can be more effective.
Furthermore, SLMs require fewer resources for training, reducing costs and environmental impact. They consume less than 5% of the energy used by large models.
Finally, knowledge graphs can help validate and control these models, increasing their reliability.
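One way a knowledge graph can "validate and control" a model is to check generated statements against facts the graph already holds. The sketch below uses a plain set of triples as a stand-in graph and a hypothetical extract_triple parser; it only illustrates the idea.

```python
# Sketch: validate a model's claim against a small knowledge graph.
# The graph is a plain set of (subject, relation, object) triples;
# extract_triple is a hypothetical stand-in for real relation extraction.

KNOWN_FACTS = {
    ("acme_corp", "headquartered_in", "berlin"),
    ("acme_corp", "founded_in", "1999"),
}

def extract_triple(claim: str) -> tuple[str, str, str]:
    # A real pipeline would use NLP here; this parser expects "s | r | o" strings.
    subject, relation, obj = claim.lower().split("|")
    return subject.strip(), relation.strip(), obj.strip()

def validate(claim: str) -> bool:
    return extract_triple(claim) in KNOWN_FACTS

print(validate("acme_corp | headquartered_in | berlin"))   # True
print(validate("acme_corp | headquartered_in | paris"))    # False
```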
Data integration as the foundation of AI
Data integration is a central point for AI performance. Companies need to organize and connect their internal information bases.
However, structured data represents only 10% of the total available. Most data is in unstructured formats, such as documents and images.
Tools like natural language processing and knowledge graphs can help structure this information, making it more useful for AI.
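As a toy example of what that structuring step can look like, the sketch below pulls capitalized names out of free text and links names that co-occur in the same sentence into a small graph. Real pipelines would use proper entity and relation extraction models; this only shows the shape of the result.

```python
import re
from collections import defaultdict
from itertools import combinations

# Toy structuring step: extract capitalized entities from free text and
# link entities that co-occur in the same sentence. Real pipelines would
# use NLP models for entity and relation extraction instead of a regex.

documents = [
    "Alice met Bob in Lisbon to discuss the merger.",
    "Bob later presented the plan to Carol.",
]

graph = defaultdict(set)
for sentence in documents:
    entities = re.findall(r"\b[A-Z][a-z]+\b", sentence)
    for a, b in combinations(sorted(set(entities)), 2):
        graph[a].add(b)
        graph[b].add(a)

for node, neighbours in sorted(graph.items()):
    print(node, "->", sorted(neighbours))
```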
Standardization of language for AI
AI's interaction with people, systems, and other AIs requires advances in standardization. Currently, models work with a range of languages, from natural language to technical ones.
For example, LLMs are already being used to bridge natural and technical languages, as in the case of Text2Cypher, which turns plain-language questions into Cypher queries.
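A Text2Cypher-style flow can be sketched as a prompt that describes the graph schema and asks an LLM to produce a Cypher query for a plain-language question. The call_llm function, the schema, and the canned query below are hypothetical placeholders, not the actual Text2Cypher implementation.

```python
# Sketch of a Text2Cypher-style flow: describe the graph schema, pass a
# natural-language question, and ask an LLM for a Cypher query.
# call_llm is a hypothetical placeholder for a real model API.

SCHEMA = "(:Person {name})-[:WORKS_AT]->(:Company {name})"

PROMPT_TEMPLATE = """You translate questions into Cypher.
Graph schema: {schema}
Question: {question}
Return only the Cypher query."""

def call_llm(prompt: str) -> str:
    # Stand-in for an actual LLM call; returns a canned query for the demo.
    return (
        "MATCH (p:Person)-[:WORKS_AT]->(c:Company {name: 'Acme'}) "
        "RETURN p.name"
    )

def text_to_cypher(question: str) -> str:
    prompt = PROMPT_TEMPLATE.format(schema=SCHEMA, question=question)
    return call_llm(prompt)

print(text_to_cypher("Who works at Acme?"))
```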
In the long term, it may be necessary to create a standard language. However, the current flexibility is still a significant advantage.
Graphs at the center of AI development
Knowledge graphs organize structured and unstructured information, helping to make AI results more accurate and explainable.
Solutions like GraphRAG already integrate specific data into GenAI applications, improving transparency and enabling responses to complex questions.
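The basic idea behind graph-augmented retrieval can be sketched as: find the entities mentioned in a question, pull their neighborhood from the graph, and hand those facts to the model as grounding context. Everything below (the toy graph, answer_with_context) is a hypothetical stand-in, not the GraphRAG implementation.

```python
# Sketch of graph-augmented retrieval: look up entities from the question,
# collect their neighbouring facts, and pass them to the model as context.
# The toy graph and answer_with_context are hypothetical stand-ins.

GRAPH = {
    "acme": [("acme", "supplies", "globex"), ("acme", "based_in", "berlin")],
    "globex": [("globex", "acquired", "initech")],
}

def retrieve_facts(question: str) -> list[tuple[str, str, str]]:
    facts = []
    for entity, triples in GRAPH.items():
        if entity in question.lower():
            facts.extend(triples)
    return facts

def answer_with_context(question: str) -> str:
    context = "; ".join(" ".join(t) for t in retrieve_facts(question))
    # A real system would send question + context to an LLM; we just build the prompt.
    return f"Context: {context}\nQuestion: {question}"

print(answer_with_context("Who does Acme supply?"))
```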
Furthermore, graph neural networks (GNNs) have been applied to projects such as climate forecasting (GraphCast) and semiconductor design (AlphaChip).
These technologies also show potential in areas such as biological interaction prediction, with examples like AlphaFold.
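For readers unfamiliar with the term, a graph neural network essentially updates each node by aggregating information from its neighbours. The sketch below shows one hand-rolled message-passing step over a tiny graph; systems like GraphCast use learned weights and many such layers, so this is only the basic idea.

```python
# One hand-rolled message-passing step: each node's new value is the mean of
# its own value and its neighbours' values. Real GNNs learn how to combine
# these messages over many layers; this only shows the basic idea.

edges = [("a", "b"), ("b", "c"), ("a", "c")]
values = {"a": 1.0, "b": 2.0, "c": 6.0}

neighbours = {n: set() for n in values}
for u, v in edges:
    neighbours[u].add(v)
    neighbours[v].add(u)

updated = {}
for node in values:
    incoming = [values[n] for n in neighbours[node]]
    updated[node] = (values[node] + sum(incoming)) / (1 + len(incoming))

print(updated)  # {'a': 3.0, 'b': 3.0, 'c': 3.0}
```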
The path of AI still faces challenges
Despite the growth of AI investments, the integration of these technologies faces barriers. Companies need to address regulatory issues and find practical applications.
Furthermore, pressure for rapid results, combined with high costs, creates uncertainty. The Gartner Hype Cycle indicates that technologies like GenAI will still need to prove their value in 2025.