Prompts#
MLflow has its own prompts registry for storing and versioning prompts and associated metadata.
For more details, check the Prompt Registry section of the official documentation.
import mlflow
import logging
from mlflow.tracking import MlflowClient
from IPython.display import clear_output
from langchain_ollama import ChatOllama
logging.basicConfig(level=logging.WARNING)
DATABASE_NAME = "mlflow_prompts.db"
mlflow.set_registry_uri(f"sqlite:////tmp/{DATABASE_NAME}")
mlflow.set_tracking_uri(f"sqlite:////tmp/{DATABASE_NAME}")
chat = ChatOllama(model="llama3.2:1b", temperature=0)
Alias#
An alias is a short name assigned to a specific version of a prompt, typically reflecting that version's role or status. Code that retrieves a prompt only needs to reference the alias, so switching to a new version for a particular purpose never requires a code change: just reassign the alias to the desired version.
The following cell registers two versions of the prompt that will be used in the experiments.
prompt_name = "alias_prompt"
mlflow.genai.register_prompt(name=prompt_name, template="Prompt1")
mlflow.genai.register_prompt(name=prompt_name, template="Prompt2")
clear_output()
You can use mlflow.genai.set_prompt_alias to assign an alias to the second version of the prompt.
mlflow.genai.set_prompt_alias(
alias="production",
name=prompt_name,
version=2
)
The following cell shows the aliases for the second version of the prompt.
mlflow.genai.load_prompt(f"prompts:/{prompt_name}/2").aliases
['production']
You can also refer to the corresponding version of the prompt by its alias.
mlflow.genai.load_prompt(f"prompts:/{prompt_name}@production")
PromptVersion(name=alias_prompt, version=2, template=Prompt2)
Format#
You can mark where a value is supposed to be substituted using the pattern {{ var_name }}. Use the format method with the substitutions provided as keyword arguments to get a string with the patterns filled in. Some popular frameworks, such as LangChain or LlamaIndex, support substitution patterns but use single-brace syntax; the prompt object’s to_single_brace_format method converts the template to that form.
The following cell creates the prompt.
mlflow.genai.register_prompt(
name="format_prompt",
template="This is {{ some_pattern }}"
)
clear_output()
And substitutes the information.
prompt = mlflow.genai.load_prompt("prompts:/format_prompt/1")
prompt.format(some_pattern="<inserted information>")
'This is <inserted information>'
The following example converts the template to the single-brace syntax.
prompt.to_single_brace_format()
'This is {some_pattern}'
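The single-brace form is what plain Python string formatting (and frameworks built on it) expects, so the converted template can be filled in with str.format directly. A small illustration in plain Python, using a hypothetical regex re-implementation of the conversion rather than the MLflow method itself:

```python
import re

# Double-brace template as stored in the registry.
template = "This is {{ some_pattern }} with {{ other }}"

# Hypothetical re-implementation of the conversion: collapse
# "{{ name }}" (with optional inner spaces) into "{name}".
single = re.sub(r"\{\{\s*(\w+)\s*\}\}", r"{\1}", template)
print(single)  # This is {some_pattern} with {other}

# The result works with plain str.format, and therefore with
# frameworks that expect single-brace templates.
print(single.format(some_pattern="A", other="B"))  # This is A with B
```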
Structured output#
You can save the expected output format alongside the prompt by using the response_format argument. You can provide either a Pydantic model or a JSON schema.
The following cell defines a Pydantic model and saves it with the prompt.
from pydantic import BaseModel
class ExampleModel(BaseModel):
str_var: str
int_var: int
mlflow.genai.register_prompt(
    name="structured_output",
    template="",
    response_format=ExampleModel
)
clear_output()
You can retrieve the response format as a JSON schema through the response_format attribute of the prompt object.
mlflow.genai.load_prompt("prompts:/structured_output/1").response_format
{'properties': {'str_var': {'title': 'Str Var', 'type': 'string'},
'int_var': {'title': 'Int Var', 'type': 'integer'}},
'required': ['str_var', 'int_var'],
'title': 'ExampleModel',
'type': 'object'}
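One way to put the stored format to use is to validate a model’s raw JSON reply against the same Pydantic model. A minimal sketch, assuming the reply arrives as a JSON string:

```python
from pydantic import BaseModel

class ExampleModel(BaseModel):
    str_var: str
    int_var: int

# A raw JSON reply, as an LLM constrained to the schema might return it.
raw_reply = '{"str_var": "hello", "int_var": 42}'

# model_validate_json parses and type-checks in one step; a malformed
# reply raises a ValidationError instead of passing through silently.
parsed = ExampleModel.model_validate_json(raw_reply)
print(parsed.str_var, parsed.int_var)  # hello 42
```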