LLMs#

This page covers the LangChain interfaces for LLMs.

from langchain_ollama import ChatOllama
model = ChatOllama(model="llama3.1", temperature=0)

Invoke#

The invoke method sends the request to the LLM.

In most cases it returns an AIMessage, but some configurations return other types. For example, a model configured for structured output returns a dict or a pydantic.BaseModel instance.


The following cell shows the kind of object a langchain_core.language_models.BaseChatModel subclass returns.

model.invoke("Hello world")
AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-12-02T11:01:41.276856893Z', 'done': True, 'done_reason': 'stop', 'total_duration': 2137074106, 'load_duration': 156799771, 'prompt_eval_count': 12, 'prompt_eval_duration': 712669661, 'eval_count': 10, 'eval_duration': 1258515574, 'model_name': 'llama3.1', 'model_provider': 'ollama'}, id='lc_run--4d37563e-bae5-4187-bbe2-7e7ccae56dea-0', usage_metadata={'input_tokens': 12, 'output_tokens': 10, 'total_tokens': 22})
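The usage_metadata attached to the response tallies token usage. A quick stdlib-only check, using the counts copied from the example run above, shows that total_tokens is simply the sum of the input and output token counts:

```python
# Token counts copied from the usage_metadata of the run above.
usage_metadata = {"input_tokens": 12, "output_tokens": 10, "total_tokens": 22}

# total_tokens is the sum of prompt and completion tokens.
total = usage_metadata["input_tokens"] + usage_metadata["output_tokens"]
print(total)  # 22
```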

Structured output#

Some providers support structured output: the model returns data in a specified format.

To make the model follow a given format, use the with_structured_output method. It returns a modified chat object that follows the specified schema.

Check whether a provider supports structured output in the JSON mode column of the provider features section.


The following cell illustrates how user characteristics are extracted from the given text.

from pydantic import BaseModel

class MyModel(BaseModel):
    id: str
    name: str

structured_model = model.with_structured_output(MyModel)
response = structured_model.invoke(
    "Extract data: 'User llm_lover with id 777 tries to acess the database.'"
)
response
MyModel(id='777', name='llm_lover')
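Conceptually, structured output amounts to the provider emitting JSON that matches the schema and the framework parsing it into the target class. The following stdlib-only sketch mimics that parsing step, using a dataclass in place of the pydantic model and a hand-written JSON string as a hypothetical provider response:

```python
import json
from dataclasses import dataclass

@dataclass
class MyModel:
    id: str
    name: str

# Hypothetical raw JSON payload, as a provider might return it in JSON mode.
raw = '{"id": "777", "name": "llm_lover"}'

# Parse the payload and load it into the schema class.
parsed = MyModel(**json.loads(raw))
print(parsed)  # MyModel(id='777', name='llm_lover')
```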