INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":5}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 5}
INFO:__main__:tool multiply responded with 10
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('add', '{"x":10,"y":32}')]
INFO:__main__:CALL tool add with {'x': 10, 'y': 32}
INFO:__main__:tool add responded with 42
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"

----------Result----------

The result of multiplying 2 and 5, and then adding 32, is 42.
Sure, it's not a particularly useful agent, but it's enough to get you started with tool calling.
Now, let’s add a human approval step to the script to see how it works.
We’ll wrap the multiply function with @hl.require_approval() so that the LLM has to get your approval before it can run the function.
At the top of the script, import HumanLayer and instantiate it:
from humanlayer import HumanLayer

hl = HumanLayer(
    verbose=True,
    run_id="openai-math-example",
)
Then, wrap the multiply function with @hl.require_approval():
def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y

+ @hl.require_approval()
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y
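At call time, the decorator pauses the tool call until a human responds. The real decorator talks to HumanLayer's backend, but a rough local stand-in conveys the idea (this is an illustrative sketch only, not HumanLayer's implementation; it uses a console prompt in place of HumanLayer and mimics the denial-message format you'll see in the logs below):

import functools


def require_approval_sketch(fn):
    """Illustrative stand-in for @hl.require_approval() -- not the real implementation."""

    @functools.wraps(fn)
    def wrapper(**kwargs):
        # Block until a "human" responds. HumanLayer does this through its API and UI;
        # here we simply read from the console.
        answer = input(f"approve {fn.__name__}({kwargs})? [y / feedback]: ")
        if answer.strip().lower() == "y":
            return fn(**kwargs)
        # On rejection, the feedback is returned as the tool's result instead of raising,
        # so the model can read it and adjust its next tool call.
        return f"User denied {fn.__name__} with message: {answer}"

    return wrapper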
Here's the full main.py script:
import json
import logging

from openai import OpenAI

from humanlayer import HumanLayer

hl = HumanLayer(
    verbose=True,
    # run_id is optional - it can be used to identify the agent in approval history
    run_id="openai-math-example",
)

PROMPT = "multiply 2 and 5, then add 32 to the result"


def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y


@hl.require_approval()
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y


math_tools_map = {
    "add": add,
    "multiply": multiply,
}

math_tools_openai = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers together.",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
]

logger = logging.getLogger(__name__)


def run_chain(prompt: str, tools_openai: list[dict], tools_map: dict) -> str:
    client = OpenAI()
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools_openai,
        tool_choice="auto",
    )

    while response.choices[0].finish_reason != "stop":
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        if tool_calls:
            messages.append(response_message)  # extend conversation with assistant's reply
            logger.info(
                "last message led to %s tool calls: %s",
                len(tool_calls),
                [(tool_call.function.name, tool_call.function.arguments) for tool_call in tool_calls],
            )
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = tools_map[function_name]
                function_args = json.loads(tool_call.function.arguments)
                logger.info("CALL tool %s with %s", function_name, function_args)

                function_response_json: str
                try:
                    function_response = function_to_call(**function_args)
                    function_response_json = json.dumps(function_response)
                except Exception as e:
                    function_response_json = json.dumps(
                        {
                            "error": str(e),
                        }
                    )

                logger.info(
                    "tool %s responded with %s",
                    function_name,
                    function_response_json[:200],
                )
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response_json,
                    }
                )  # extend conversation with function response

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools_openai,
        )

    return response.choices[0].message.content


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    result = run_chain(PROMPT, math_tools_openai, math_tools_map)
    print("\n\n----------Result----------\n\n")
    print(result)
Now, when you run the script, it will prompt you for approval before performing the multiply operation.
Since we chose verbose mode, you’ll see a note in the console.
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":5}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 5}
HumanLayer: waiting for approval for multiply
Try rejecting the tool call with something like “use 7 instead of 5” to see how your feedback is passed into the agent.
Rejecting a tool call
When you reject a tool call, HumanLayer will pass your feedback back to the agent, which can then adjust its approach based on your input.
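Concretely, the denial takes the same path as any other tool result: run_chain JSON-encodes whatever the wrapped function returned and appends it to the conversation as a tool message, so the next chat.completions.create call sees something like the following (the tool_call_id value is illustrative; the content matches the log output below):

denied_tool_message = {
    "tool_call_id": "call_abc123",  # illustrative id, normally taken from the model's tool call
    "role": "tool",
    "name": "multiply",
    # json.dumps of the denial string returned by the wrapped function
    "content": '"User denied multiply with message: use 7 instead of 5"',
}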
You should see some more output:
HumanLayer: human denied multiply with message: use 7 instead of 5
INFO:__main__:tool multiply responded with "User denied multiply with message: use 7 instead of 5"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":7}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 7}
HumanLayer: waiting for approval for multiply
This time, you can approve the tool call.
Approving a tool call
That’s it, you’ve now added a human approval step to your tool-calling agent!