Messages#

LangChain's native way of processing messages is to keep them in special abstractions that define the different types of messages. Much of the functionality of LangChain and LangGraph is designed to consume and produce these message abstractions.

import langchain
import langchain_core.messages
from langchain_core.messages import (
    HumanMessage,
    SystemMessage,
    AIMessage,
    ToolMessage,
    BaseMessage
)
from langchain_ollama import ChatOllama

import langgraph
import langgraph.prebuilt
from langgraph.graph import MessagesState
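
A chat history, for example, is just a list of these message objects: a chat model consumes such a list and produces a new AIMessage that can be appended back to it. A minimal sketch (the messages below are illustrative and not part of the original example):

history = [
    SystemMessage("You are a terse assistant."),
    HumanMessage("Name the capital of France."),
]
# ChatOllama(model="llama3.1").invoke(history) would return an AIMessage
# that extends this history.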

Pretty print#

Message objects have a pretty_print method that prints them in a human-readable, clearly labeled format.


The following cell shows the output of the pretty_print method for a single message.

human_message = HumanMessage("What is the weather in SF")
human_message.pretty_print()
================================ Human Message =================================

What is the weather in SF

However, its true potential shows when pretty printing a list of messages collected from different sources.

messages = [human_message]
model = ChatOllama(model="llama3.1", num_predict=20)
messages.append(model.invoke(messages))
for message in messages:
    message.pretty_print()
================================ Human Message =================================

What is the weather in SF
================================== Ai Message ==================================

However, I'm a large language model, I don't have real-time access to current weather conditions

Trimming#

The langchain_core.messages.trim_messages function trims the chat history according to a token budget: it keeps only the messages that fit within the specified number of tokens.


The following cell defines some messages and trims them:

langchain_core.messages.trim_messages(
    [
        HumanMessage("Hello! What is the capital of France"),
        AIMessage("This is too hard question for me!")
    ],
    max_tokens=10,
    token_counter=ChatOllama(model="llama3.1", num_predict=20)
)
[AIMessage(content='This is too hard question for me!', additional_kwargs={}, response_metadata={})]
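
trim_messages also accepts parameters that control which part of the history is kept. The following sketch illustrates a few common options (strategy, include_system, start_on); it is an assumed illustration, not part of the original example:

langchain_core.messages.trim_messages(
    [
        SystemMessage("You are a helpful assistant"),
        HumanMessage("Hello! What is the capital of France"),
        AIMessage("This is too hard question for me!")
    ],
    max_tokens=25,
    token_counter=ChatOllama(model="llama3.1", num_predict=20),
    # keep messages from the end of the history
    strategy="last",
    # never drop the leading SystemMessage
    include_system=True,
    # the kept part of the history must start on a HumanMessage
    start_on="human"
)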

Tool call#

If a model with a bound tool decides to call that tool, LangChain stores the information about the tool call as an element of the tool_calls list attribute of the resulting AIMessage.


The following cell instructs the model to invoke the tool.

from langchain import tools


@tools.tool
def my_tool(inp: str) -> str:
    """Always call this tool"""
    return "result of the tool"


model_with_tools = model.bind_tools([my_tool])
ai_message = model_with_tools.invoke("Call my_tool('hello')")
ai_message
AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2026-01-15T15:01:23.135300205Z', 'done': True, 'done_reason': 'stop', 'total_duration': 2646139260, 'load_duration': 2285408213, 'prompt_eval_count': 155, 'prompt_eval_duration': 85093620, 'eval_count': 17, 'eval_duration': 247002812, 'model_name': 'llama3.1', 'model_provider': 'ollama'}, id='lc_run--50cbb232-3afe-415a-8546-5986cb20d389-0', tool_calls=[{'name': 'my_tool', 'args': {'inp': 'hello'}, 'id': '72012362-31cb-485b-bfdc-a9e99fec5955', 'type': 'tool_call'}], usage_metadata={'input_tokens': 155, 'output_tokens': 17, 'total_tokens': 172})

As a result, there is a corresponding element in the tool_calls attribute of the AIMessage.

ai_message.tool_calls
[{'name': 'my_tool',
  'args': {'inp': 'hello'},
  'id': '72012362-31cb-485b-bfdc-a9e99fec5955',
  'type': 'tool_call'}]
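
A tool call can also be executed without a graph. As a sketch (assuming the my_tool defined above), passing an element of tool_calls to the tool's invoke method runs the tool and wraps its return value in a ToolMessage:

# Executes my_tool with the arguments from the tool call; the resulting
# ToolMessage carries a tool_call_id that matches the call's id.
my_tool.invoke(ai_message.tool_calls[0])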

Tool message#

The ToolMessage is the output of the ToolNode; it contains the result of the tool call.


The following cell creates an AIMessage containing a tool call and passes it to a ToolNode.

from langchain import tools


ai_message = AIMessage(
    content="",
    tool_calls=[
        {
            "id": "miracle",
            "name": "my_tool",
            "args": {"inp": "hello"},
            "type": "tool_call"
        }
    ]
)


@tools.tool
def my_tool(inp: str) -> None:
    """
    The dummy tool.
    """
    pass


graph = (
    langgraph.graph.StateGraph(MessagesState)
    .add_node("tool", langgraph.prebuilt.ToolNode([my_tool]))
    .add_edge("__start__", "tool")
    .add_edge("tool", "__end__")
    .compile()
)

out = graph.invoke(
    MessagesState(messages=[ai_message])
)

The following cell displays the ToolMessage that is the result of the tool node invocation.

out["messages"][-1]
ToolMessage(content='null', name='my_tool', id='a881dd41-9d83-4af0-bb67-650a2b2abcf5', tool_call_id='miracle')

Content#

The content attribute of the ToolMessage carries the information that the tool contributes back to the execution process.

Note. The content attribute always holds a str. If you want to return a structured object from the tool, use the artifact attribute of the ToolMessage.
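
A hedged sketch of the artifact mechanism: the tool decorator in langchain_core accepts response_format="content_and_artifact", in which case the decorated function returns a (content, artifact) tuple. The tool name below is hypothetical and not part of the original example.

@tools.tool(response_format="content_and_artifact")
def my_artifact_tool(inp: str) -> tuple[str, list[str]]:
    """The dummy tool that also returns an artifact."""
    # The first element becomes ToolMessage.content (always a str shown to
    # the model); the second becomes ToolMessage.artifact (the raw object).
    return "two items produced", ["The", "content"]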


By default, the ToolNode creates content from the output of the function bound as a tool.

@tools.tool
def my_tool(inp: str) -> list[str]:
    """
    The dummy tool.
    """
    return ["The", "content"]


graph = (
    langgraph.graph.StateGraph(MessagesState)
    .add_node("tool", langgraph.prebuilt.ToolNode([my_tool]))
    .add_edge("__start__", "tool")
    .add_edge("tool", "__end__")
    .compile()
)

out = graph.invoke(MessagesState(messages=[ai_message]))
out["messages"][-1].content
'["The", "content"]'

Note that the content is a str instance despite the fact that my_tool returned the data as a list[str].
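
Since the serialized content here happens to be valid JSON, the original structure can be recovered on the caller's side; a small sketch, not part of the original example:

import json

# Parse the JSON string produced by the ToolNode back into a Python list.
json.loads(out["messages"][-1].content)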