Some time ago, LangChain, maker of the popular LLM application framework, released a report exploring the state of AI agents in 2024. The report polled 1,300 professionals and found that 51% of respondents currently use AI agents. Among medium-sized companies, 63% are using the technology in their production processes, and 78% plan to adopt it in the future.
The survey also revealed a strong appetite for AI agents beyond the tech sector. According to the report, “90 percent of people working at non-tech companies have either used AI agents or plan to introduce them into their processes (compared to 89 percent at tech companies).”
A Research and Markets report on the AI agents market predicts a similarly promising future. It projects that “the market for artificial intelligence agents is expected to grow from $5.1 billion in 2024 to $47.1 billion in 2030,” a compound annual growth rate of 44.8% over that period.
These statistics mark a major shift in attitudes toward AI agents: views are moving toward broader acceptance as initial skepticism breaks down.
Agent or assistant?
In LangChain’s survey, the majority of respondents said they use AI agents for research summarization and personal assistance, but notably, 35 percent said they use the technology for programming tasks. That said, the industry has yet to settle on a precise definition of “AI agent,” and it remains unclear how autonomous these systems must be to earn the label.
When Google announced that 25% of its newly written code was being generated by artificial intelligence, the claim drew criticism from users. One Hacker News commenter suggested the figure is probably exaggerated and driven mostly by a code-completion engine, while a Reddit user suggested Google was really referring to “doing cleanup work for dependencies, removing old classes, or changing deployment configurations.”
A few days ago, payment processing giant Stripe released a software development kit (SDK) for AI agents. The toolkit gives large language models (LLMs) access to functions for payments, invoicing, and transactions, allowing agents to spend money or approve and reject payments.
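Conceptually, an SDK like this exposes payment operations to the model as callable “tools” that the agent invokes by name. The sketch below is illustrative only and does not use Stripe’s actual API; every name in it (`ToolRegistry`, `create_payment_link`) is hypothetical:

```python
# Illustrative sketch of an agent "tool" layer, NOT Stripe's actual SDK.
# All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class ToolRegistry:
    """Maps tool names to the functions an agent is allowed to call."""
    tools: Dict[str, Callable[..., Any]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self.tools[name] = fn

    def call(self, name: str, **kwargs: Any) -> Any:
        # The LLM emits a tool name plus JSON arguments; anything
        # outside the registry is rejected outright.
        if name not in self.tools:
            raise PermissionError(f"agent may not call {name!r}")
        return self.tools[name](**kwargs)


# A hypothetical payment function the SDK might wrap.
def create_payment_link(amount_cents: int, currency: str) -> dict:
    return {
        "url": "https://pay.example/abc",
        "amount": amount_cents,
        "currency": currency,
    }


registry = ToolRegistry()
registry.register("create_payment_link", create_payment_link)

# A model's tool-call output is dispatched through the registry:
result = registry.call("create_payment_link", amount_cents=500, currency="usd")
```

The registry is also a natural choke point for the approval controls mentioned above: a deployment could wrap `call` to require human sign-off before money-moving tools execute.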
This capability, however, has been met with skepticism. Some users on X (formerly Twitter) questioned whether the feature amounts to anything more than ordinary API calls under a fancier name.
“To me, it’s just removing a few lines of code and offering a more complex pricing model instead,” one user wrote on X. “Am I missing something special?”
At the Oracle CloudWorld 2024 event, Oracle announced more than 50 AI agents for its Fusion Cloud Applications suite. Steve Miranda, Oracle’s executive vice president of application development, offered a tempered view of how they will be used. “In my opinion, the initial use of these agents will not be fully autonomous and will be more human-assisted,” he told AIM.
Likewise, Ketan Karkhanis, CEO of ThoughtSpot, explained in an interview that many of today’s systems, such as Microsoft Copilot, only answer single-step questions; they cannot reason, adapt, or learn from a user’s business environment in the way a truly autonomous agent would.
He added: “This issue has many complications. If you can’t train a system, then it can’t be called an AI agent. I don’t think you can teach a copilot; you can only write custom commands for it.”
Salesforce CEO Marc Benioff has repeatedly criticized Microsoft’s approach to AI agents, accusing the company of overstating Copilot’s capabilities in its marketing.
Although there is still no common, precise definition of AI agents, companies claim the technology has improved many of their operations.
A recent survey on AI agents drew criticism on social media. “In this day and age, polls are the worst metric for evaluating actual usage,” wrote one user on X. “Instead, show real, traceable data.”
Definitions aside, many companies, including major brands, report significant successes with AI agents.
A few weeks ago, Freshworks introduced a new version of Freddy AI, an autonomous agent that resolved 45% of customer support requests and 40% of IT service requests on its own during its beta. Salesforce, meanwhile, unveiled Agentforce, a tool that lets its customers deploy their own AI agents on the Salesforce platform.
Wiley, a Salesforce customer, reported significant success with Agentforce. “With the help of artificial intelligence tools and increased productivity, we were able to speed up the training process for seasonal workers by 50 percent, resulting in a 213 percent return on investment and $230,000 in savings,” the publisher wrote in a blog post.
Wiley also announced that Agentforce was able to improve customer case resolution by 40% compared to their previous chatbot. These successes are also consistent with LangChain’s survey results, where 45.8% of participants reported using AI agents in customer service and support.
Salesforce continues to see a bright future for AI agents. “In 2025, we’ll see more complex, multi-agent coordination that solves bigger challenges like simulating new product launches, marketing campaigns and making recommendations to optimize them across organizations,” said Mick Costigan, vice president of Salesforce Futures.
Companies using AI agents have been able to increase accuracy and reduce operational costs. For example, telecommunications software company Amdocs improved the accuracy of its systems by 30% using NVIDIA NIM microservices.
The company also reported significant cost savings from lower resource consumption: Amdocs cut token usage by 60% for data preparation and by 40% for inference.
Contrary to the popular image of AI agents operating completely autonomously, there are good reasons they usually do not. In the LangChain survey, most respondents emphasized the importance of tracing and monitoring controls for managing automated operations.
More than 35% of companies have prioritized online or offline evaluation of the results these agents produce. Moreover, most companies grant AI agents read-only access to data; only about 10 percent allow agents to read, write, and delete data.
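Those permission tiers can be enforced with a simple access-flag check before any data operation runs. The sketch below is a hypothetical illustration of the pattern; the class and method names are my own, not from the survey or any vendor:

```python
# Hypothetical sketch: scoping an agent's data access to read-only,
# read/write, or full read/write/delete tiers.
from enum import Flag, auto


class Access(Flag):
    READ = auto()
    WRITE = auto()
    DELETE = auto()


class AgentDataStore:
    """A key-value store that checks the agent's grant before each op."""

    def __init__(self, granted: Access):
        self.granted = granted
        self._data = {}

    def _check(self, needed: Access) -> None:
        if needed not in self.granted:
            raise PermissionError(f"agent lacks {needed.name} access")

    def read(self, key):
        self._check(Access.READ)
        return self._data.get(key)

    def write(self, key, value):
        self._check(Access.WRITE)
        self._data[key] = value

    def delete(self, key):
        self._check(Access.DELETE)
        self._data.pop(key, None)


# The common configuration per the survey: a read-only agent.
store = AgentDataStore(Access.READ)
value = store.read("customer_42")      # allowed, returns None (empty store)
# store.write("customer_42", {...})    # would raise PermissionError
```

Widening the grant is then an explicit, auditable decision, e.g. `AgentDataStore(Access.READ | Access.WRITE)` for the minority of deployments that permit writes.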
Even if the concerns and risks associated with AI agents are mitigated, these systems may not be able to fully understand all the details of every part of the operation.
Speaking to AIM, Lingaro Group CEO Sam Mantel emphasized the importance of managing the flow of data between the parts of an operation, noting that these parts are usually siloed and that how they connect deserves more attention.
“I want to know the owner of any data that may be in any of these apps,” Mantel added. “In fact, if we want to run things efficiently and smoothly, someone has to be responsible for that data, even if it moves around the organization.”