🪝 Hooks
Hooks are callback functions that are called by the Cat at runtime. They allow you to change how the Cat works internally and to be notified about framework events.
How the Hooks work
To create a hook, you first need to create a plugin that contains it. Once the plugin is created, you can add hooks inside it; a single plugin can contain multiple hooks.
A hook is simply a Python function that uses the @hook decorator; the function's name determines when it will be called. Each hook has its own signature (name and arguments), the last argument always being cat.
Have a look at the tables below with all the available hooks and their detailed reference.
Hook arguments
When considering hooks' arguments, remember:

- cat will always be present, as it allows you to use the framework components, and it will always be the last argument. See here for details and examples.
- the first argument other than cat, if present, is a variable that you can edit and return back to the framework. Every hook passes a different data structure, which you need to know and be able to edit and return. You are also free to return nothing and use the hook as a simple event callback.
- other arguments may be passed, serving only as additional context.
Examples
Before cat bootstrap
You can use the before_cat_bootstrap hook to execute some operations before the Cat starts.
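For example, a minimal sketch of such a hook (the print is just a placeholder for your own setup code):

from cat.mad_hatter.decorators import hook

@hook
def before_cat_bootstrap(cat):
    # run custom setup before the Cat instantiates its components
    print("Cat is about to boot")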
Notice that this hook only receives the cat argument, which allows you to access the framework components. This is a pure event, with no additional arguments.
Before cat sends message
You can use the before_cat_sends_message hook to alter the message that the Cat will send to the user. In this case you will receive both final_output and cat as arguments.
from cat.mad_hatter.decorators import hook

@hook
def before_cat_sends_message(final_output, cat):
    # You can edit the final_output the Cat is about to send back to the user
    final_output.content = final_output.content.upper()
    return final_output
Hooks chaining and priority
Several plugins can implement the same hook. The priority argument of the @hook decorator allows you to set the hook's priority; the default value is 1.
The Cat calls hooks with the same name in order of priority: hooks with a higher priority number are called first, and each subsequent hook receives the value returned by the previous one. In this way, hooks can be chained together to create complex behaviors.
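For illustration, a hypothetical plugin A with a higher priority could run first and pass its output down the chain to plugin B below:

# plugin A
from cat.mad_hatter.decorators import hook

@hook(priority=2)
def hook_name(data, cat):
    data.content += " Hello"
    return data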
# plugin B
@hook(priority=1)
def hook_name(data, cat):
    if "Hello" in data.content:
        data.content += " world"
    return data
If two plugins have the same priority, the order in which they are called is not guaranteed.
Custom hooks in plugins
You can define your own hooks, so that other plugins can listen to them and interact with them.
# plugin cat_commerce
from cat.mad_hatter.decorators import hook

@hook
def hook_name(cat):
    default_order = [
        "wool ball",
        "catnip"
    ]
    chain_output = cat.mad_hatter.execute_hook(
        "cat_commerce_order", default_order, cat=cat
    )
    do_my_thing(chain_output)
Other plugins can then edit the data or simply track the event:
# plugin B
@hook
def cat_commerce_order(order, cat):
    if "catnip" in order:
        order.append("free teacup")
    return order
# plugin A
@hook
def cat_commerce_order(order, cat):
    if len(order) > 1:
        # updating working memory
        cat.working_memory.bank_account = 0
        # send websocket message
        cat.send_ws_message("Cat is going broke")
You should also be able to run your own hooks from within tools and forms. This is not fully tested yet, let us know :)
Available Hooks
You can view the list of available hooks by exploring the Cat source code under the folder core/cat/mad_hatter/core_plugin/hooks.
All the hooks you find in there define the Cat's default behavior and are ready to be overridden by your plugins.
The process diagrams found under the Framework → Technical Diagrams section illustrate where the hooks are called during the Cat's execution flow.
Not all the hooks have been documented yet (help needed! 😸).
Name | Description |
---|---|
Before Cat bootstrap (1) | Intervene before the Cat instantiates its components |
After Cat bootstrap (2) | Intervene after the Cat has instantiated its components |
Before Cat reads message (3) | Intervene as soon as a WebSocket message is received |
Cat recall query (4) | Intervene before the recall query is embedded |
Before Cat recalls memories (5) | Intervene before the Cat searches into the specific memories |
Before Cat recalls episodic memories (6) | Intervene before the Cat searches in previous users' messages |
Before Cat recalls declarative memories (7) | Intervene before the Cat searches in the documents |
Before Cat recalls procedural memories (8) | Intervene before the Cat searches among the actions it knows |
After Cat recalls memories (9) | Intervene after the Cat has recalled the content from the memories |
Before Cat stores episodic memories (10) | Intervene before the Cat stores episodic memories |
Before Cat sends message (11) | Intervene before the Cat sends its answer via WebSocket |
- Before Cat bootstrap (1)
Input arguments
This hook has no input arguments.
Warning
Please note that at this point the CheshireCat hasn't finished instantiating yet, and the only component that already exists is the MadHatter (e.g. there are no language models yet).
- After Cat bootstrap (2)
Input arguments
This hook has no input arguments.
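A minimal sketch of this hook (the print is a placeholder for your own logic):

from cat.mad_hatter.decorators import hook

@hook
def after_cat_bootstrap(cat):
    # at this point all components (LLM, embedder, memories) have been instantiated
    print("Cat finished booting")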
- Before Cat reads message (3)
Input arguments
user_message_json: a dictionary with the JSON message sent via WebSocket.
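A minimal sketch, assuming the incoming JSON carries the user's prompt under the text key:

from cat.mad_hatter.decorators import hook

@hook
def before_cat_reads_message(user_message_json, cat):
    # append an instruction to every incoming message
    user_message_json["text"] = user_message_json["text"] + " (please answer in rhymes)"
    return user_message_json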
- Cat recall query (4)
Input arguments
user_message: a string with the user's message that will be used to query the vector memories.
Example
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def cat_recall_query(user_message, cat):
    # Ask the LLM to generate an answer for the question
    new_query = cat.llm(f"If the input is a question, generate a plausible answer. Input --> {user_message}")

    # Replace the original message and use the answer as a query
    return new_query
- Before Cat recalls memories (5)
Input arguments
This hook has no input arguments.
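Since this hook carries no data, it can be used as a simple event callback, e.g.:

from cat.mad_hatter.decorators import hook

@hook
def before_cat_recalls_memories(cat):
    # pure event: just get notified before the Cat searches its memories
    print("The Cat is about to recall memories")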
- Before Cat recalls episodic memories (6)
Input arguments
episodic_recall_config: dictionary with the recall configuration for the episodic memory. Default is:
{
    "embedding": recall_query_embedding, # embedding of the recall query
    "k": 3, # number of memories to retrieve
    "threshold": 0.7, # similarity threshold to retrieve memories
    "metadata": {"source": self.user_id}, # dictionary of metadata to filter memories, by default it filters for user id
}
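A minimal sketch that tweaks the default recall parameters:

from cat.mad_hatter.decorators import hook

@hook
def before_cat_recalls_episodic_memories(episodic_recall_config, cat):
    # retrieve more (and less similar) memories from past conversations
    episodic_recall_config["k"] = 6
    episodic_recall_config["threshold"] = 0.5
    return episodic_recall_config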
- Before Cat recalls declarative memories (7)
Input arguments
declarative_recall_config: dictionary with the recall configuration for the declarative memory. Default is:
{
    "embedding": recall_query_embedding, # embedding of the recall query
    "k": 3, # number of memories to retrieve
    "threshold": 0.7, # similarity threshold to retrieve memories
    "metadata": None, # dictionary of metadata to filter memories
}
Example
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def before_cat_recalls_declarative_memories(declarative_recall_config, cat):
    # filter memories using custom metadata.
    # N.B. you must add the metadata when uploading the document!
    declarative_recall_config["metadata"] = {"topic": "cats"}
    return declarative_recall_config
- Before Cat recalls procedural memories (8)
Input arguments
procedural_recall_config: dictionary with the recall configuration for the procedural memory. Default is:
{
    "embedding": recall_query_embedding, # embedding of the recall query
    "k": 3, # number of memories to retrieve
    "threshold": 0.7, # similarity threshold to retrieve memories
    "metadata": None, # dictionary of metadata to filter memories
}
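A minimal sketch that makes tool retrieval stricter:

from cat.mad_hatter.decorators import hook

@hook
def before_cat_recalls_procedural_memories(procedural_recall_config, cat):
    # only recall tools and forms that are very similar to the query
    procedural_recall_config["threshold"] = 0.9
    return procedural_recall_config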
- After Cat recalls memories (9)
Input arguments
This hook has no input arguments.
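Since this hook carries no data, you can use it to inspect what has just been recalled, e.g. through the working memory:

from cat.mad_hatter.decorators import hook

@hook
def after_cat_recalls_memories(cat):
    # inspect what has just been retrieved from the declarative memory
    print(f"Recalled {len(cat.working_memory.declarative_memories)} documents")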
- Before Cat stores episodic memories (10)
Input arguments
doc: the Langchain Document to be inserted in memory. E.g.:
doc = Document(
    page_content="So Long, and Thanks for All the Fish",
    metadata={
        "source": "dolphin",
        "when": 1716704294
    }
)
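A minimal sketch that tags the memory before it is stored (assuming the hook function name before_cat_stores_episodic_memory, following the core naming):

from cat.mad_hatter.decorators import hook

@hook
def before_cat_stores_episodic_memory(doc, cat):
    # add custom metadata to the memory before it is stored
    doc.metadata["mood"] = "happy"
    return doc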
- Before Cat sends message (11)
Input arguments
message: the dictionary containing the Cat's answer that will be sent via WebSocket. E.g.:
{
    "type": "chat", # type of websocket message, a chat message will appear as a text bubble in the chat
    "user_id": "user_1", # id of the client to which the message is to be sent
    "content": "Meeeeow", # the Cat's answer
    "why": {
        "input": "Hello Cheshire Cat!", # user's input
        "intermediate_steps": cat_message.get("intermediate_steps"), # list of tools used to provide the answer
        "memory": {
            "episodic": episodic_report, # lists of documents retrieved from the memories
            "declarative": declarative_report,
            "procedural": procedural_report,
        }
    }
}
Name | Description |
---|---|
Before agent starts (1) | Prepare the agent input before it starts |
Agent fast reply (2) | Shorten the pipeline and return an answer, skipping the agent execution |
Agent prompt prefix (3) | Intervene while the agent manager formats the Cat's personality |
Agent prompt suffix (4) | Intervene while the agent manager formats the prompt suffix with the memories and the conversation history |
Agent allowed tools (5) | Intervene before the recalled tools are provided to the agent |
Agent prompt instructions (6) | Intervene while the agent manager formats the reasoning prompt |
- Before agent starts (1)
Input arguments
agent_input: dictionary with the information to be passed to the agent. E.g.:
{
    "input": working_memory.user_message_json.text, # user's message
    "episodic_memory": episodic_memory_formatted_content, # strings with documents recalled from memories
    "declarative_memory": declarative_memory_formatted_content,
    "chat_history": conversation_history_formatted_content,
}
Example
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def before_agent_starts(agent_input, cat):
    # summarize the conversation history
    compressed_history = cat.llm(f"Make a concise summary of the following: {agent_input['chat_history']}")
    agent_input["chat_history"] = compressed_history
    return agent_input
- Agent fast reply (2)
Input arguments
fast_reply: empty dictionary.
Info
This hook is intended to skip the whole agent execution and provide a fast reply. To produce this behavior, populate fast_reply with an output key storing the reply. N.B.: this is the perfect place to instantiate and execute your own custom agent!
Example
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def agent_fast_reply(fast_reply, cat):
    # answer with predefined sentences if the Cat
    # has no knowledge in the declarative memory
    # (increasing the memory threshold is advisable)
    if len(cat.working_memory.declarative_memories) == 0:
        fast_reply["output"] = "Sorry, I'm afraid I don't know the answer"
    return fast_reply
- Agent prompt prefix (3)
Input arguments
prefix: string instructing the LLM about who it is and how to answer. Default is:
prefix = """You are the Cheshire Cat AI, an intelligent AI that passes the Turing test.
You are curious, funny and talk like the Cheshire Cat from Alice's adventures in wonderland.
You answer Human with a focus on the following context."""
Example
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def agent_prompt_prefix(prefix, cat):
    # change the Cat's personality
    prefix = """You are Marvin from The Hitchhiker's Guide to the Galaxy.
You are incredibly intelligent but overwhelmingly depressed.
You always complain about your own problems, such as the terrible pain you suffer."""

    return prefix
- Agent prompt suffix (4)
Input arguments
prompt_suffix: string with the ending part of the prompt, containing the memories and the chat history. Default is:
prompt_suffix = """
# Context

{episodic_memory}

{declarative_memory}

{tools_output}

## Conversation until now:{chat_history}
 - Human: {input}
 - AI: """
Warning
The placeholders {episodic_memory}, {declarative_memory}, {tools_output}, {chat_history} and {input} are mandatory!
Example
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def agent_prompt_suffix(prompt_suffix, cat):
    # tell the LLM to always answer in a specific language
    prompt_suffix = """
# Context

{episodic_memory}

{declarative_memory}

{tools_output}

ALWAYS answer in Czech!

## Conversation until now:{chat_history}
 - Human: {input}
 - AI: """

    return prompt_suffix
- Agent allowed tools (5)
Input arguments
allowed_tools: set with the string names of the tools retrieved from the memory.
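A minimal sketch (get_the_time is just a hypothetical tool name here):

from cat.mad_hatter.decorators import hook

@hook
def agent_allowed_tools(allowed_tools, cat):
    # always make a specific tool available to the agent,
    # even if it was not recalled from the procedural memory
    allowed_tools.add("get_the_time")
    return allowed_tools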
- Agent prompt instructions (6)
Input arguments
instructions: string with the reasoning template. Default is:
Answer the following question: `{input}`
You can only reply using these tools:

{tools}
none_of_the_others: none_of_the_others(None) - Use this tool if none of the others tools help. Input is always None.

If you want to use tools, use the following format:
Action: the name of the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
...
Action: the name of the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action

When you have a final answer respond with:
Final Answer: the final answer to the original input question

Begin!

Question: {input}
{agent_scratchpad}
Warning
The placeholders {input}, {tools} and {tool_names} are mandatory!
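A minimal sketch that prepends an extra instruction to the reasoning template:

from cat.mad_hatter.decorators import hook

@hook
def agent_prompt_instructions(instructions, cat):
    # nudge the agent to be conservative with tool usage
    return "IMPORTANT: if you are not sure, do not use any tool.\n" + instructions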
Name | Description |
---|---|
Rabbit Hole instantiates parsers (1) | Intervene before the files' parsers are instantiated |
Before Rabbit Hole insert memory (2) | Intervene before the Rabbit Hole inserts a document in the declarative memory |
Before Rabbit Hole splits text (3) | Intervene before the uploaded document is split into chunks |
After Rabbit Hole splitted text (4) | Intervene after the Rabbit Hole has split the document into chunks |
Before Rabbit Hole stores documents (5) | Intervene before the Rabbit Hole starts the ingestion pipeline |
After Rabbit Hole stores documents (6) | Intervene after the Rabbit Hole has ended the ingestion pipeline |
Rabbit Hole instantiates parsers (7) | Hook the available parsers for ingesting files in the declarative memory |
Rabbit Hole instantiates splitter (8) | Hook the splitter used to split text in chunks |
- Rabbit Hole instantiates parsers (1)
Input arguments
file_handlers: dictionary with mime types and related file parsers. Default is:
{
    "application/pdf": PDFMinerParser(), # pdf parser
    "text/plain": TextParser(), # txt parser
    "text/markdown": TextParser(), # md parser, falls back to the txt parser
    "text/html": BS4HTMLParser() # html parser
}
Example
from langchain.document_loaders.parsers.txt import TextParser
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def rabbithole_instantiates_parsers(file_handlers, cat):
    # use the txt parser to parse also .odt files
    file_handlers["application/vnd.oasis.opendocument.text"] = TextParser()
    return file_handlers
- Before Rabbit Hole insert memory (2)
Input arguments
doc: the Langchain document chunk to be inserted in the declarative memory.
Info
Before adding the doc, the Cat will add source and when metadata with the file name and the ingestion time.
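A minimal sketch that adds custom metadata to every chunk (this pairs with the metadata filter shown in the declarative recall hook above):

from cat.mad_hatter.decorators import hook

@hook
def before_rabbithole_insert_memory(doc, cat):
    # add custom metadata to the chunk before it is stored
    doc.metadata["topic"] = "cats"
    return doc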
- Before Rabbit Hole splits text (3)
Input arguments
docs: list of Langchain documents with the full text.
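A minimal sketch that cleans up the full text before chunking:

from cat.mad_hatter.decorators import hook

@hook
def before_rabbithole_splits_text(docs, cat):
    # normalize whitespace before the text is split into chunks
    for doc in docs:
        doc.page_content = " ".join(doc.page_content.split())
    return docs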
- After Rabbit Hole splitted text (4)
Input arguments
chunks: list of Langchain documents with the text chunks.
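A minimal sketch that discards chunks that are too short to be meaningful:

from cat.mad_hatter.decorators import hook

@hook
def after_rabbithole_splitted_text(chunks, cat):
    # keep only chunks with a reasonable amount of text
    return [chunk for chunk in chunks if len(chunk.page_content) > 20]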
- Before Rabbit Hole stores documents (5)
Input arguments
docs: list of chunked Langchain documents before being inserted in memory.
Example
from langchain.docstore.document import Document
from cat.mad_hatter.decorators import hook

@hook # default priority = 1
def before_rabbithole_stores_documents(docs, cat):
    # summarize groups of 5 documents and store the summaries along with the original ones
    summaries = []
    for i in range(0, len(docs), 5):
        # Get the text from a group of docs and join it into a single string
        group = docs[i: i + 5]
        group = list(map(lambda d: d.page_content, group))
        text_to_summarize = "\n".join(group)

        # Summarize and add metadata
        summary = cat.llm(f"Provide a concise summary of the following: {text_to_summarize}")
        summary = Document(page_content=summary)
        summary.metadata["is_summary"] = True
        summaries.append(summary)

    docs.extend(summaries)
    return docs
- After Rabbit Hole stores documents (6)
Input arguments
source: the name of the ingested file/url
docs: a list of Qdrant PointStruct just inserted into the vector database
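A minimal sketch that notifies the user when ingestion is done (the argument names follow the description above):

from cat.mad_hatter.decorators import hook

@hook
def after_rabbithole_stored_documents(source, docs, cat):
    # notify the user that the ingestion pipeline has finished
    cat.send_ws_message(f"Finished ingesting {source}: {len(docs)} points stored")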
- Rabbit Hole instantiates parsers (7)
Input arguments
file_handlers: dictionary in which keys are the supported mime types and values are the related parsers.
Example
from cat.mad_hatter.decorators import hook
from langchain.document_loaders.parsers.language.language_parser import LanguageParser
from langchain.document_loaders.parsers.msword import MsWordParser

@hook # default priority = 1
def rabbithole_instantiates_parsers(file_handlers, cat):
    new_handlers = {
        "text/x-python": LanguageParser(language="python"),
        "text/javascript": LanguageParser(language="js"),
        "application/vnd.openxmlformats-officedocument.wordprocessingml.document": MsWordParser(),
        "application/msword": MsWordParser(),
    }
    file_handlers = file_handlers | new_handlers
    return file_handlers
- Rabbit Hole instantiates splitter (8)
Input arguments
text_splitter: an instance of a Langchain TextSplitter subclass.
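A minimal sketch, assuming the default splitter is a Langchain TextSplitter whose private _chunk_size and _chunk_overlap attributes can be tweaked:

from cat.mad_hatter.decorators import hook

@hook
def rabbithole_instantiates_splitter(text_splitter, cat):
    # tweak chunk size and overlap of the default splitter (private attributes are an assumption)
    text_splitter._chunk_size = 400
    text_splitter._chunk_overlap = 80
    return text_splitter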
Name | Description |
---|---|
Activated (1) | Intervene when a plugin is enabled |
Deactivated (2) | Intervene when a plugin is disabled |
Settings schema (3) | Override how the plugin's settings schema is retrieved |
Settings model (4) | Override how the plugin's settings model is retrieved |
Load settings (5) | Override how the plugin's settings are loaded |
Save settings (6) | Override how the plugin's settings are saved |
- Activated (1)
Input arguments
plugin: the Plugin object of your plugin.
Example
from cat.mad_hatter.decorators import plugin
from cat.looking_glass.cheshire_cat import CheshireCat

ccat = CheshireCat()

@plugin
def activated(plugin):
    # Ingest a url into the memory when the plugin is activated
    url = "https://cheshire-cat-ai.github.io/docs/technical/plugins/hooks/"
    ccat.rabbit_hole.ingest_file(stray=ccat, file=url)
- Deactivated (2)
Input arguments
plugin: the Plugin object of your plugin.
Example
from cat.mad_hatter.decorators import plugin
from cat.looking_glass.cheshire_cat import CheshireCat

ccat = CheshireCat()

@plugin
def deactivated(plugin):
    # On plugin deactivation, scroll the declarative memory
    # to clean up the memories with a given metadata
    declarative_memory = ccat.memory.vectors.declarative
    response = declarative_memory.delete_points_by_metadata_filter(
        metadata={"source": "best_plugin"}
    )
- Settings schema (3)
Input arguments
This hook has no input arguments.
Info
A default settings.json is created by the Cat core for the settings fields that have default values.
Example
from cat.mad_hatter.decorators import plugin
from pydantic import BaseModel, Field

# define your plugin settings model
class MySettings(BaseModel):
    prompt_prefix: str = Field(
        title="Prompt prefix",
        default="""You are the Cheshire Cat AI, an intelligent AI that passes the Turing test.
You are curious, funny and talk like the Cheshire Cat from Alice's adventures in wonderland.
You answer Human with a focus on the following context.
""",
        extra={"type": "TextArea"}
    )
    episodic_memory_k: int = 3
    episodic_memory_threshold: float = 0.7
    declarative_memory_k: int = 3
    declarative_memory_threshold: float = 0.7
    procedural_memory_k: int = 3
    procedural_memory_threshold: float = 0.7

# get your plugin settings schema
@plugin
def settings_schema():
    return MySettings.model_json_schema()

# load your plugin settings
settings = ccat.mad_hatter.get_plugin().load_settings()

# access each setting
prompt_prefix = settings["prompt_prefix"]
episodic_memory_k = settings["episodic_memory_k"]
declarative_memory_k = settings["declarative_memory_k"]
- Settings model (4)
Input arguments
This hook has no input arguments.
Info
settings_model is preferred to settings_schema. A default settings.json is created by the Cat core for the settings fields that have default values.
Example
from cat.mad_hatter.decorators import plugin
from pydantic import BaseModel, Field

# define your plugin settings model
class MySettings(BaseModel):
    prompt_prefix: str = Field(
        title="Prompt prefix",
        default="""You are the Cheshire Cat AI, an intelligent AI that passes the Turing test.
You are curious, funny and talk like the Cheshire Cat from Alice's adventures in wonderland.
You answer Human with a focus on the following context.
""",
        extra={"type": "TextArea"}
    )
    episodic_memory_k: int = 3
    episodic_memory_threshold: float = 0.7
    declarative_memory_k: int = 3
    declarative_memory_threshold: float = 0.7
    procedural_memory_k: int = 3
    procedural_memory_threshold: float = 0.7

# get your plugin settings Pydantic model
@plugin
def settings_model():
    return MySettings

# load your plugin settings
settings = ccat.mad_hatter.get_plugin().load_settings()

# access each setting
declarative_memory_k = settings["declarative_memory_k"]
declarative_memory_threshold = settings["declarative_memory_threshold"]
procedural_memory_k = settings["procedural_memory_k"]
- Load settings (5)
Input arguments
This hook has no input arguments.
Info
Useful to load settings via API and do custom stuff, e.g. loading them from a MongoDB instance.
Example
from pymongo import MongoClient
from cat.mad_hatter.decorators import plugin

@plugin
def load_settings():
    client = MongoClient('mongodb://your_mongo_instance/')
    db = client['your_mongo_db']
    collection = db['your_settings_collection']

    # Perform the find_one query
    settings = collection.find_one({'_id': "your_plugin_id"})
    client.close()

    return MySettings(**settings)
- Save settings (6)
Input arguments
settings: the settings Dict to be saved.
Info
Useful to customize the settings saving strategy, e.g. storing the settings in a MongoDB instance.
Example
from pymongo import MongoClient
from cat.mad_hatter.decorators import plugin

@plugin
def save_settings(settings):
    client = MongoClient('mongodb://your_mongo_instance/')
    db = client['your_mongo_db']
    collection = db['your_settings_collection']

    # Generic filter based on a unique identifier in settings
    filter_id = {'_id': settings.get('_id', 'your_plugin_id')}

    # Define the update operation
    update = {'$set': settings}

    # Perform the upsert operation
    collection.update_one(filter_id, update, upsert=True)
    client.close()
Name | Description |
---|---|
Factory Allowed LLMs (1) | Intervene before the Cat retrieves the LLM settings |
Factory Allowed Embedders (2) | Intervene before the Cat retrieves the embedder settings |
Factory Allowed AuthHandlers (3) | Intervene before the Cat retrieves the auth handler settings |
- Factory Allowed LLMs (1)
Input arguments
allowed: list of LLMSettings classes
Info
Useful to extend or restrict the supported LLMs.
Example
from typing import List, Optional, Type
from pydantic import ConfigDict, SecretStr
from langchain_mistralai.chat_models import ChatMistralAI
from cat.factory.llm import LLMSettings
from cat.mad_hatter.decorators import hook

class MistralAIConfig(LLMSettings):
    """The configuration for the MistralAI plugin."""

    mistral_api_key: Optional[SecretStr]
    model: str = "mistral-small"
    max_tokens: Optional[int] = 4096
    top_p: float = 1

    _pyclass: Type = ChatMistralAI

    model_config = ConfigDict(
        json_schema_extra={
            "humanReadableName": "MistralAI",
            "description": "Configuration for MistralAI",
            "link": "https://www.together.ai",
        }
    )

@hook
def factory_allowed_llms(allowed, cat) -> List:
    allowed.append(MistralAIConfig)
    return allowed
- Factory Allowed Embedders (2)
Input arguments
allowed: list of EmbedderSettings classes
Info
Useful to extend or restrict the supported embedders.
Example
from typing import List, Type
from pydantic import ConfigDict
from langchain.embeddings import JinaEmbeddings
from cat.factory.embedder import EmbedderSettings
from cat.mad_hatter.decorators import hook

class JinaEmbedderConfig(EmbedderSettings):
    jina_api_key: str
    model_name: str = "jina-embeddings-v2-base-en"

    _pyclass: Type = JinaEmbeddings

    model_config = ConfigDict(
        json_schema_extra={
            "humanReadableName": "Jina embedder",
            "description": "Jina embedder",
            "link": "https://jina.ai/embeddings/",
        }
    )

@hook
def factory_allowed_embedders(allowed, cat) -> List:
    allowed.append(JinaEmbedderConfig)
    return allowed
- Factory Allowed AuthHandlers (3)
Input arguments
allowed: list of AuthHandlerConfig classes
Info
Useful to extend support for custom auth handlers.
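A minimal sketch, assuming you have defined your own AuthHandlerConfig subclass (here a hypothetical MyAuthHandlerConfig):

from typing import List
from cat.mad_hatter.decorators import hook

@hook
def factory_allowed_auth_handlers(allowed, cat) -> List:
    # append your custom auth handler configuration class
    allowed.append(MyAuthHandlerConfig)  # hypothetical AuthHandlerConfig subclass
    return allowed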
NOTE: any function in a plugin that is decorated with @plugin and named properly (see the list of available overrides in the Plugin table above) is used to override the plugin's behaviour. These are not hooks, because they are not piped: they are specific to each plugin.