function call is not respecting function args in 'load_memory' #388

Open
shu8hamrajput opened this issue Apr 25, 2025 · 2 comments
Labels
tools Issues related to tools

Comments

@shu8hamrajput

shu8hamrajput commented Apr 25, 2025

I have an agent that uses 'load_memory' to recall previous conversations. The 'load_memory' tool takes a 'query' arg to find relevant conversation history, but the function call frequently arrives without any args.

root_agent = LlmAgent(
    name="Supervisor",
    model="gemini-2.0-flash",
    description=(
        "You are a supervisor agent that delegates survey modification tasks to other agents."
    ),
    instruction=(
        "You are a supervisor agent that delegates survey question modification tasks to other agents."
        "all queries are about survey discussed in the previous conversation."
        "use 'load_memory' with appropriate `query` when you want to access survey questions discussed in past conversation."
    ),
    tools=[ask_query, load_memory],
)

session = session_service.create_session(
    app_name=APP_NAME,
    user_id=USER_ID,
    session_id=SESSION_ID
)

news_list: list[str] = [
    "What is the latest news in technology?",
    "What are the top headlines in sports?",
    ...
]

session_service.append_event(session, Event(
    invocation_id=SESSION_ID,
    author="user",
    content=Content(
        parts=[
            Part(
                text=news
            )
            for news in news_list
        ]
    )
))

runner = Runner(
    agent=root_agent,
    app_name=APP_NAME,
    session_service=session_service,
    memory_service=memory_service
)
memory_service.add_session_to_memory(session)

On certain queries, load_memory gets called with the correct args, but sometimes it doesn't.

**Successful function call:**

 [Event] Author: Supervisor, Type: Event, Final: False, Content: parts=[Part(video_metadata=None, thought=None, code_execution_result=None, executable_code=None, file_data=None, function_call=FunctionCall(id=<adk-id>, args={'query': 'news about India'}, name='load_memory'), function_response=None, inline_data=None, text=None)] role='model'

**Failed function call:**

 [Event]  Author: Supervisor, Type: Event, Final: False, Content: parts=[Part(video_metadata=None, thought=None, code_execution_result=None, executable_code=None, file_data=None, function_call=FunctionCall(id=<adk-id>, args={}, name='load_memory'), function_response=None, inline_data=None, text=None)] role='model'

How do I ensure that the function call happens correctly, or request the model to retry with the correct args?
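One workaround, independent of the declaration fix discussed below, is to validate the model's function call before executing the tool and to build a corrective re-prompt when required args are missing. The sketch below is plain Python with hypothetical helper names (`missing_args`, `check_call`); it only mirrors the `FunctionCall(args=...)` shape from the event logs above and is not an ADK API:

```python
# Hypothetical sketch: guard against tool calls that arrive without
# required args, and build a corrective re-prompt instead of executing.
# The shapes mirror the ADK events above; this is a plain-Python
# stand-in, not the real ADK types.

REQUIRED_ARGS = {"load_memory": ["query"]}

def missing_args(name: str, args: dict) -> list[str]:
    """Return the required args the model failed to supply."""
    return [a for a in REQUIRED_ARGS.get(name, []) if a not in args]

def check_call(name: str, args: dict):
    """Return (ok, reprompt). When args are incomplete, the reprompt
    text can be sent back to the model instead of running the tool."""
    absent = missing_args(name, args)
    if not absent:
        return True, None
    return False, (
        f"The call to '{name}' was missing required args: {absent}. "
        "Please call it again with a value for each."
    )

# The failed call from the logs: args={}
ok, reprompt = check_call("load_memory", {})
assert not ok and "query" in reprompt

# The successful call from the logs
ok, reprompt = check_call("load_memory", {"query": "news about India"})
assert ok and reprompt is None
```

Whether you re-prompt in a loop or surface an error is up to the caller; the point is to never execute the tool with an empty query.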

@shu8hamrajput
Author

I am not sure if it's the right way or not, but I fixed this issue for myself by making `query` a required field in the `load_memory` declaration in google/adk/tools/load_memory_tool.py:60.

the updated class looks like:

class LoadMemoryTool(FunctionTool):
  """A tool that loads the memory for the current user."""

  def __init__(self):
    super().__init__(load_memory)

  @override
  def _get_declaration(self) -> types.FunctionDeclaration | None:
    return types.FunctionDeclaration(
        name=self.name,
        description=self.description,
        parameters=types.Schema(
            type=types.Type.OBJECT,
            properties={
                'query': types.Schema(
                    type=types.Type.STRING,
                )
            },
            required=['query'],  # this makes query a required field
        ),
    )

  @override
  async def process_llm_request(
      self,
      *,
      tool_context: ToolContext,
      llm_request: LlmRequest,
  ) -> None:
    await super().process_llm_request(
        tool_context=tool_context, llm_request=llm_request
    )
    # Tell the model about the memory.
    llm_request.append_instructions(["""
You have memory. You can use it to answer questions. If any questions need
you to look up the memory, you should call load_memory function with a query.
"""])
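For context on why this helps: `required=['query']` ends up in the JSON-schema-style function declaration sent to the model, and without it the model may legally emit `args={}`. A rough sketch of what the declaration serializes to (field names are illustrative approximations, not the exact wire format of the genai SDK):

```python
# Illustrative: a function declaration with a required arg, written as
# a plain dict in JSON-schema style. Field names approximate the wire
# format; they are not copied from the ADK/genai SDK.
declaration = {
    "name": "load_memory",
    "description": "Loads the memory for the current user.",
    "parameters": {
        "type": "OBJECT",
        "properties": {
            "query": {"type": "STRING"},
        },
        # Without this list, args={} is a schema-valid call.
        "required": ["query"],
    },
}

assert "query" in declaration["parameters"]["required"]
```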

@shu8hamrajput
Author

Raised a PR with the changes: #389

@boyangsvl boyangsvl added bug tools Issues related to tools and removed bug labels Apr 25, 2025