LangGraph#
LangGraph is a framework built on top of LangChain that makes it relatively easy to build complex agentic systems.
This notebook primarily uses locally served models with Ollama. The following cell defines the model through the LangChain abstractions, so you must have Ollama available locally with the corresponding model pulled.
from langchain.chat_models import init_chat_model
from langchain_core.messages import AnyMessage, SystemMessage
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
GLOBAL_MODEL = init_chat_model(
    model="llama3.2:1b",
    model_provider="ollama",
    temperature=0
)
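As a quick sanity check, the model can be invoked directly. This is a minimal sketch (the prompt text is arbitrary), assuming the llama3.2:1b model has already been pulled with ollama pull llama3.2:1b.
# Direct call to the chat model; returns an AIMessage whose text is in .content
print(GLOBAL_MODEL.invoke("Say hello in one short sentence.").content)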
ReAct agent#
LangGraph has a predefined ReAct agent that can be created using the function langgraph.prebuilt.create_react_agent.
The following cell creates such an agent and displays its type.
agent = create_react_agent(
    model=GLOBAL_MODEL,
    tools=[],
    prompt="You are a helpful assistant"
)
type(agent)
langgraph.graph.state.CompiledStateGraph
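The compiled graph can be invoked like any LangGraph runnable: the input is a dictionary with a messages list, and the answer is the last message of the resulting state. A minimal sketch (the question is arbitrary):
ans = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
print(ans["messages"][-1].content)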
Dynamic prompt#
With a dynamic prompt, you can build the context that is added to the request with custom logic. Instead of passing a hardcoded string, you pass a callable that receives the chat state and the run configuration and returns the messages that shape the behavior of the final system.
The following cell defines a function to be used as the prompt. This function saves the state and the config to variables available from the global environment. It also instructs the model to answer that the capital of France is a city that is provided through the configuration upon invocation.
global_state: AgentState
global_config: RunnableConfig

def prompt(state: AgentState, config: RunnableConfig) -> list[AnyMessage]:
    global global_state
    global global_config

    # Keep the state and config so they can be inspected after invocation
    global_state = state
    global_config = config

    # Read the value passed at invocation time through config["configurable"]
    capital = config["configurable"]["Capital"]
    return [SystemMessage(f"Always answer that the capital of France is {capital}")]

agent = create_react_agent(
    model=GLOBAL_MODEL,
    tools=[],
    prompt=prompt
)
The following cell shows an invocation that passes the additional information to the prompt.
ans = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the capital of France?"}]},
    config={"configurable": {"Capital": "Madrid"}}
)
print(ans["messages"][-1].content)
<|start_header_id|>assistant<|end_header_id|>
That's incorrect. The capital of France is Paris, not Madrid. Madrid is actually the capital of Spain.
The model’s output was clearly affected by “Madrid”.
Consider the objects that carry the information about the global state and the configuration of the request.
global_state
{'messages': [HumanMessage(content='What is the capital of France?', additional_kwargs={}, response_metadata={}, id='0b2d7a67-3346-4098-b003-e79704ce3d56')],
'remaining_steps': 24}
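Since global_state is a regular AgentState dictionary, its fields can be used directly in prompt logic. A small sketch that extracts the latest message from the saved state:
# "messages" holds the chat history accumulated so far
last_message = global_state["messages"][-1]
print(type(last_message).__name__, ":", last_message.content)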
global_config["configurable"]
{'Capital': 'Madrid',
'__pregel_runtime': Runtime(context=None, store=None, stream_writer=<function Pregel.stream.<locals>.stream_writer at 0x7a3a06f06ac0>, previous=None),
'__pregel_task_id': 'acd73e3b-3444-b0a8-6e81-8fb8f15dd876',
'__pregel_send': <function deque.extend(iterable, /)>,
'__pregel_read': functools.partial(<function local_read at 0x7a3a0703ce00>, PregelScratchpad(step=1, stop=25, call_counter=<langgraph.pregel._algo.LazyAtomicCounter object at 0x7a3a06f39d80>, interrupt_counter=<langgraph.pregel._algo.LazyAtomicCounter object at 0x7a3a06f392a0>, get_null_resume=<function _scratchpad.<locals>.get_null_resume at 0x7a3a06f076a0>, resume=[], subgraph_counter=<langgraph.pregel._algo.LazyAtomicCounter object at 0x7a3a06f393c0>), {'messages': <langgraph.channels.binop.BinaryOperatorAggregate object at 0x7a3a06f8cf40>, '__start__': <langgraph.channels.ephemeral_value.EphemeralValue object at 0x7a3a06f8f580>, '__pregel_tasks': <langgraph.channels.topic.Topic object at 0x7a3a06f8d280>, 'branch:to:agent': <langgraph.channels.ephemeral_value.EphemeralValue object at 0x7a3a06f8fc80>}, {'remaining_steps': <class 'langgraph.managed.is_last_step.RemainingStepsManager'>}, PregelTaskWrites(path=('__pregel_pull', 'agent'), name='agent', writes=deque([('messages', [AIMessage(content="<|start_header_id|>assistant<|end_header_id|>\n\nThat's incorrect. The capital of France is Paris, not Madrid. Madrid is actually the capital of Spain.", additional_kwargs={}, response_metadata={'model': 'llama3.2:1b', 'created_at': '2025-09-26T08:00:13.935839621Z', 'done': True, 'done_reason': 'stop', 'total_duration': 1422177600, 'load_duration': 107996065, 'prompt_eval_count': 25, 'prompt_eval_duration': 54616254, 'eval_count': 27, 'eval_duration': 1258585721, 'model_name': 'llama3.2:1b'}, id='run--544f64ba-98f1-46a2-a354-58d0bdce138c-0', usage_metadata={'input_tokens': 25, 'output_tokens': 27, 'total_tokens': 52})])]), triggers=('branch:to:agent',))),
'__pregel_checkpointer': None,
'checkpoint_map': {'': '1f09aaed-44a0-6411-8000-de45efbeb1cf'},
'checkpoint_id': None,
'checkpoint_ns': 'agent:acd73e3b-3444-b0a8-6e81-8fb8f15dd876',
'__pregel_scratchpad': PregelScratchpad(step=1, stop=25, call_counter=<langgraph.pregel._algo.LazyAtomicCounter object at 0x7a3a06f39d80>, interrupt_counter=<langgraph.pregel._algo.LazyAtomicCounter object at 0x7a3a06f392a0>, get_null_resume=<function _scratchpad.<locals>.get_null_resume at 0x7a3a06f076a0>, resume=[], subgraph_counter=<langgraph.pregel._algo.LazyAtomicCounter object at 0x7a3a06f393c0>),
'__pregel_call': functools.partial(<function _call at 0x7a3a070762a0>, <weakref at 0x7a3a06f64540; dead>, retry_policy=None, futures=<weakref at 0x7a3a06f64450; dead>, schedule_task=<bound method SyncPregelLoop.accept_push of <langgraph.pregel._loop.SyncPregelLoop object at 0x7a3a070a3c50>>, submit=<weakref at 0x7a3a06f3e850; to 'langgraph.pregel._executor.BackgroundExecutor' at 0x7a3a070a3d90>)}
You can use all of that information in your program logic to provide the model with the necessary context.
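For example, here is a minimal sketch of a prompt that combines both sources: a persona taken from the configuration and a truncated chat history taken from the state. The "persona" key and the history limit are illustrative assumptions, not part of the examples above.
def contextual_prompt(state: AgentState, config: RunnableConfig) -> list[AnyMessage]:
    # Hypothetical "persona" key; pass it as config={"configurable": {"persona": ...}}
    persona = config["configurable"].get("persona", "a helpful assistant")
    # Keep only the last few messages to bound the context passed to the model
    recent = state["messages"][-4:]
    return [SystemMessage(f"You are {persona}.")] + recent

agent = create_react_agent(model=GLOBAL_MODEL, tools=[], prompt=contextual_prompt)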