Function Calling in LLMs

May 14, 2025

Kevin Bartley

Customer Success at Stack AI

In 2023, AI chatbots could only talk to you. Today, they can talk with your entire tech stack. Function calling is at the heart of this change: it lets LLMs trigger actions in external systems, enabling a more interactive chat experience. It’s also one of the core features powering agentic AI chatbots, expanding their integration and automation potential.

This isn’t just a flashy technical feature: it has real operational value. Function calling helps AI models connect more deeply with your business systems, providing your teams with a unified chat interface they can use to browse data, optimize workflows, and reduce cognitive load while handling complex tasks. All while maintaining security and privacy under your control.

In this article, we’ll explore how function calling works to help you implement it on your systems. We’ll use the OpenAI API as a reference, but the principles are similar for other platforms.

What is function calling?

Function calling lets your AI assistants and agents perform tasks like retrieving customer data, updating CRM records, scheduling meetings, or analyzing real-time metrics, without requiring staff to switch between multiple systems.

LLMs can detect intent during chats and use tools based on pre-determined settings, performing data operations (create, read, update, delete) or starting actions in external systems as a result. This allows interaction with enterprise systems like Salesforce or SAP via messaging, for example.

ChatGPT, for example, uses function calling to run web searches. Since its introduction, the feature has become a staple in other AI chatbot apps. Whenever the model searches the internet, creates a file, or performs a calculation, it’s using function calling to perform these actions.

This transforms LLM interaction from simply querying the model’s built-in knowledge into a more interactive exchange, one that draws on up-to-date information and interfaces with external systems for complex automation flows.

How does function calling work?

In summary, function calling involves:

  • Defining a collection of functions, called tools. These are the capabilities you want your AI to have, like “check inventory levels” or “update customer information.”

  • Setting up the external tools, as running most functions depends on your business systems, not on the LLM or AI provider’s platform.

  • Once deployed, your teams can chat with AI, ask for information, or request actions in conversational language.

  • The LLM detects intent to use tools, generates a structured function call (the function name plus its arguments) for the external tools hosted on your systems, waits for the results, and generates a reply.

Let’s zoom in on each step.

Where to set up function calls

An example API call to the Anthropic API containing the tool use (function calling) parameters and values.

When using a flagship model from OpenAI, Anthropic, or Google, you’ll send API requests to their platform and get responses from their AI models. These requests are made via an API call containing all the necessary parameters and values.

Function calling is set up in each API request by passing the appropriate key/value pairs in the request’s body. In OpenAI, use the “tools” key to designate tools and provide usage instructions. You can also set the “tool_choice” key to control when and how tools are used.

Each time you send that API request to the AI provider’s servers, the receiving system will understand these parameters and activate this functionality so the LLM can use it.

You can either configure multiple API requests with different tools and configurations, or use a single call and set up dynamic values for each field, depending on your platform and desired AI functionality.
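
As a minimal sketch, here’s what such a request can look like with the OpenAI Python library and the Chat Completions API. The “check_inventory” tool and its parameters are hypothetical, standing in for a capability exposed by your own business systems:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tool: "check_inventory" stands in for a capability
# provided by your own business systems, not by the AI provider.
tools = [
    {
        "type": "function",
        "function": {
            "name": "check_inventory",
            "description": "Check current stock levels for a product SKU.",
            "parameters": {
                "type": "object",
                "properties": {
                    "sku": {
                        "type": "string",
                        "description": "The product SKU to look up.",
                    }
                },
                "required": ["sku"],
            },
        },
    }
]

# The "tools" key travels with every request that should have tool access.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "How many units of SKU-123 are left?"}],
    tools=tools,
)
print(response.choices[0].message)
```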

Defining a function

Defining a send email tool for the OpenAI API.

Function calling and API calls share some characteristics, but they aren’t the same. For example, if you have an API call to Mailchimp that retrieves your newsletter data, copying and pasting its request body into the AI model won’t work.

Instead, defining a function means explaining to the LLM how to fill each parameter of the newsletter request so the call succeeds. The actual API call is handled by your business systems; the AI just needs to supply the correct values based on the user’s request.

Make sure to consult the external service’s API documentation to understand what’s needed, and reflect those requirements accurately in the function definition, as in the sketch below.
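
For instance, a definition for the send email tool from the screenshot above might look like the following sketch. The parameter names and descriptions are illustrative, not any particular email service’s actual schema:

```python
# A hypothetical "send_email" tool in the OpenAI format. The JSON schema
# tells the LLM which parameters exist and how to fill them; your backend
# still makes the actual call to the email service.
send_email_tool = {
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email to a single recipient.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {
                    "type": "string",
                    "description": "Recipient email address.",
                },
                "subject": {
                    "type": "string",
                    "description": "Subject line of the email.",
                },
                "body": {
                    "type": "string",
                    "description": "Plain-text body of the email.",
                },
            },
            "required": ["to", "subject", "body"],
        },
    },
}
```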

While it may be tempting to add many tools to an LLM, keep the count under 20, especially if the tools are similar: too many can reduce the model’s accuracy. If you need more, add controls in your frontend to activate or deactivate tool sets.

Setting up external tools

When chatting with AI, it may look like the model is searching the web or using external systems directly. However, this seamlessness is an illusion: the LLM doesn’t run the tools itself.

If you want your AI assistant to update customer information in your CRM or check inventory levels in your ERP system, your technical team must create the infrastructure to trigger the API calls in your business systems.

This is similar to connecting external integrations to your internal tools. The difference is that here the goal is to route the AI’s requests to those endpoints and pass the results back to the LLM, for example with a thin routing layer like the sketch below.
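
Everything in this sketch is hypothetical: the endpoint URLs and handler names stand in for whatever your systems expose. The point is the shape of the routing layer, not the specific calls:

```python
import requests

# Hypothetical wrappers around your own business-system endpoints.
def check_inventory(sku: str) -> dict:
    resp = requests.get(f"https://erp.example.internal/api/inventory/{sku}")
    resp.raise_for_status()
    return resp.json()

def update_crm_record(customer_id: str, fields: dict) -> dict:
    resp = requests.patch(
        f"https://crm.example.internal/api/customers/{customer_id}",
        json=fields,
    )
    resp.raise_for_status()
    return resp.json()

# Map the tool names the LLM can request to the functions that execute them.
TOOL_HANDLERS = {
    "check_inventory": check_inventory,
    "update_crm_record": update_crm_record,
}

def run_tool(name: str, arguments: dict):
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"LLM requested an unknown tool: {name}")
    return handler(**arguments)
```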

Listening for requests

As your teams collaborate with AI on the chat interface, your development team needs to implement a system that listens for when the LLM wants to perform actions on external systems.

In practice, this means implementing a condition that checks each AI response for the signal that a tool call is requested. When it’s present, the request is forwarded to the external tools, as in the sketch below.
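
With the OpenAI Python library, that condition amounts to checking the response message for tool_calls. A sketch that continues from the request shown earlier; run_tool is the hypothetical routing layer from the previous section:

```python
import json

# `response` is the result of client.chat.completions.create(...) shown earlier.
message = response.choices[0].message

if message.tool_calls:  # the LLM wants to act on an external system
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)  # arguments arrive as JSON text
        result = run_tool(call.function.name, args)
else:
    print(message.content)  # a normal text reply; nothing to execute
```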

Calling a function

Based on connected tools, an LLM can understand when it should search external data sources to reply to questions.

As users chat with AI, the LLM interprets each message and detects whether the conditions to activate a specific tool from its collection are met. If so, instead of replying right away, it sends a response with a system parameter signaling that it wants to call a function.

In OpenAI, you can control tool activation with the “tool_choice” parameter. This lets you choose one of three settings:

  • Auto (default): the model decides whether to call a function based on the prompt

  • Required: the model must request at least one function call for every input it receives

  • Forced function: the model always calls a specific function of your choice

These settings add flexibility to tool use and offer control from your app: in ChatGPT, selecting the Search button is equivalent to setting a forced function for the web search tool.
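
In the API request, these settings map to different values of the tool_choice parameter. A sketch with the OpenAI Python library, reusing the hypothetical client, messages, tools, and send_email tool from the earlier examples:

```python
# Auto (default): the model decides on its own whether to call a tool.
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools, tool_choice="auto"
)

# Required: the model must request at least one tool call.
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools, tool_choice="required"
)

# Forced function: the model always calls the named function.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "send_email"}},
)
```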

Generating responses and acting on external systems

Your backend takes the LLM-generated function call and passes it to the external system, which processes the request and returns the requested data or information to your backend.

Once all function call results are processed and the model receives them, it starts generating an answer for the user to see on the frontend.

You can keep chatting with the model, and it will continue to trigger function calls as it detects intent. If memory is activated, it will use its context window to fill out the parameters for each call.
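
Continuing the earlier sketches, the round trip looks like this: the tool result goes back to the model as a “tool” message referencing the call’s id, and a second request produces the final answer. This assumes a single tool call for brevity:

```python
import json

call = message.tool_calls[0]   # the tool call detected in the previous step
result = run_tool(call.function.name, json.loads(call.function.arguments))

messages.append(message)       # keep the assistant's tool request in the history
messages.append(
    {
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),  # what your backend got from the system
    }
)

# Second request: the model now sees the tool output and writes the reply.
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```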

Function calling in Stack AI

In Stack AI, you can attach tools to LLMs in the workflow automation interface.

Stack AI is a generative AI workflow automation platform for building agents and automations. It offers native tools like web search and external system integrations without requiring you to set up API calls yourself. You can drag, drop, and connect them on the canvas without coding.

Beyond dropping nodes on a canvas, you can activate LLM tools, which follow a similar philosophy to function calling. Instead of executing these actions outside the LLM processing stage, they’re seamlessly embedded in the request to the AI provider. You can write the structure of custom calls to any system, and there are also dozens of pre-made tools for apps like Zendesk, Airtable, and Salesforce.

Function calling requires technical understanding and has a learning curve. Stack AI reduces setup time, complexity, and maintenance needs, helping you create richer AI experiences with less effort.

Interactive LLMs

Function calling fundamentally changes how LLMs interact with external systems, helping them draw in data and start actions based on your commands. This adds flexibility and power, potentially transforming workflows and turning the chat window into the sole interface for interacting with and coordinating dozens of tools.

Start building interactive automations with function calling in Stack AI.
