Commit 4eb4493

eavanvalkenburg and moonbox3 authored Mar 4, 2025

Python: improved chat history samples (microsoft#10737)

### Motivation and Context

Improve the serialization sample. Added a more complex sample for using ChatHistory with a VectorStore to store in a backend. Also improves the typing of the vectorstoremodel decorator.

### Contribution Checklist

- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄

Co-authored-by: Evan Mattson <[email protected]>

1 parent 8fd6da2 commit 4eb4493

File tree

8 files changed: +346 −110 lines changed
 

python/samples/SAMPLE_GUIDELINES.md (+11 −10)

````diff
@@ -33,18 +33,20 @@ Try to do a best effort to make sure that the samples are incremental in complex
 
 ### **Documentation**
 
-Try to over-document the samples. This includes comments in the code, README.md files, and any other documentation that is necessary to understand the sample.
+Try to over-document the samples. This includes comments in the code, README.md files, and any other documentation that is necessary to understand the sample. We use the guidance from [PEP8](https://peps.python.org/pep-0008/#comments) for comments in the code, with a deviation for the initial summary comment in samples and the output of the samples.
 
 For the getting started samples and the concept samples, we should have the following:
 
 1. A README.md file is included in each set of samples that explains the purpose of the samples and the setup required to run them.
 2. A summary should be included at the top of the file that explains the purpose of the sample and required components/concepts to understand the sample. For example:
 
 ```python
-# This sample shows how to create a chatbot. This sample uses the following two main components:
-# - a ChatCompletionService: This component is responsible for generating responses to user messages.
-# - a ChatHistory: This component is responsible for keeping track of the chat history.
-# The chatbot in this sample is called Mosscap, who responds to user messages with long flowery prose.
+'''
+This sample shows how to create a chatbot. This sample uses the following two main components:
+- a ChatCompletionService: This component is responsible for generating responses to user messages.
+- a ChatHistory: This component is responsible for keeping track of the chat history.
+The chatbot in this sample is called Mosscap, who responds to user messages with long flowery prose.
+'''
 ```
 
 3. Mark the code with comments to explain the purpose of each section of the code. For example:
@@ -64,12 +66,11 @@ For the getting started samples and the concept samples, we should have the foll
 ```python
 '''
 Sample output:
-# User:> Why is the sky blue in one sentence?
-# Mosscap:> The sky is blue due to the scattering of sunlight by the molecules in the Earth's atmosphere,
-# a phenomenon known as Rayleigh scattering, which causes shorter blue wavelengths to become more
-# prominent in our visual perception.
+User:> Why is the sky blue in one sentence?
+Mosscap:> The sky is blue due to the scattering of sunlight by the molecules in the Earth's atmosphere,
+a phenomenon known as Rayleigh scattering, which causes shorter blue wavelengths to become more
+prominent in our visual perception.
 '''
-
 ```
 
 For the demos, a README.md file must be included that explains the purpose of the demo and how to run it. The README.md file should include the following:
````
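The documentation convention this guideline change establishes (a plain docstring summary at the top, section comments in the body, and a sample-output docstring at the end) can be sketched as a minimal skeleton; the `respond` helper below is a placeholder invented for this illustration, not part of the guidelines:

```python
"""
This sample shows how to create a chatbot. This sample uses the following two main components:
- a ChatCompletionService: This component is responsible for generating responses to user messages.
- a ChatHistory: This component is responsible for keeping track of the chat history.
"""


# Section comment: in a real sample this would call a ChatCompletionService.
def respond(user_input: str) -> str:
    return f"Mosscap:> You asked: {user_input}"


"""
Sample output:
User:> Why is the sky blue in one sentence?
Mosscap:> You asked: Why is the sky blue in one sentence?
"""
```

Note that both summaries are plain docstrings rather than `#` comment runs, which is exactly the deviation from PEP 8 the updated guideline describes.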

python/samples/concepts/README.md (+3 −3)

```diff
@@ -44,7 +44,6 @@
 - [Chat Completion Truncate History Reducer Agent Chat](./agents/chat_completion_agent/chat_completion_truncate_history_reducer_agent_chat.py)
 - [Chat Completion Truncate History Reducer Single Agent](./agents/chat_completion_agent/chat_completion_truncate_history_reducer_single_agent.py)
 
-
 #### [Mixed Agent Group Chat](../../semantic_kernel/agents/group_chat/agent_group_chat.py)
 
 - [Mixed Chat Agents Plugins](./agents/mixed_chat/mixed_chat_agents_plugins.py)
@@ -90,6 +89,7 @@
 ### ChatHistory - Using and serializing the [`ChatHistory`](https://github.com/microsoft/semantic-kernel/blob/main/python/semantic_kernel/contents/chat_history.py)
 
 - [Serialize Chat History](./chat_history/serialize_chat_history.py)
+- [Store Chat History in CosmosDB](./chat_history/store_chat_history_in_cosmosdb.py)
 
 ### Filtering - Creating and using Filters
 
@@ -202,7 +202,7 @@ In Semantic Kernel for Python, we leverage Pydantic Settings to manage configura
 
 1. **Reading Environment Variables:**
    - **Primary Source:** Pydantic first attempts to read the required settings from environment variables.
-
+
 2. **Using a .env File:**
    - **Fallback Source:** If the required environment variables are not set, Pydantic will look for a `.env` file in the current working directory.
    - **Custom Path (Optional):** You can specify an alternative path for the `.env` file via `env_file_path`. This can be either a relative or an absolute path.
@@ -220,4 +220,4 @@ To successfully retrieve and use the Entra Auth Token, you need the `Cognitive S
 
 - **.env File Placement:** We highly recommend placing the `.env` file in the `semantic-kernel/python` root directory. This is a common practice when developing in the Semantic Kernel repository.
 
-By following these guidelines, you can ensure that your settings for various components are configured correctly, enabling seamless functionality and integration of Semantic Kernel in your Python projects.
+By following these guidelines, you can ensure that your settings for various components are configured correctly, enabling seamless functionality and integration of Semantic Kernel in your Python projects.
```

python/samples/concepts/auto_function_calling/chat_completion_with_auto_function_calling.py (−4)

```diff
@@ -1,7 +1,6 @@
 # Copyright (c) Microsoft. All rights reserved.
 
 import asyncio
-from typing import TYPE_CHECKING
 
 from samples.concepts.setup.chat_completion_services import Services, get_chat_completion_service_and_request_settings
 from semantic_kernel import Kernel
@@ -11,9 +10,6 @@
 from semantic_kernel.core_plugins.time_plugin import TimePlugin
 from semantic_kernel.functions import KernelArguments
 
-if TYPE_CHECKING:
-    pass
-
 #####################################################################
 # This sample demonstrates how to build a conversational chatbot #
 # using Semantic Kernel, featuring auto function calling, #
```
python/samples/concepts/chat_history/README.md (new file, +17; the filename was dropped by the page scrape and is inferred from the links in the concepts README above)

```markdown
# Chat History manipulation samples

This folder contains samples that demonstrate how to manipulate chat history in Semantic Kernel.

## [Serialize Chat History](./serialize_chat_history.py)

This sample demonstrates how to build a conversational chatbot using Semantic Kernel. It features auto function calling, but with file-based serialization of the chat history. This sample stores and reads the chat history at every turn. This is not the best way to do it, but it clearly demonstrates the mechanics.

To run this sample, an environment with keys for the chosen chat service is required. In line 61 you can change the model used. This sample uses a temporary file to store the chat history, so no additional setup is required.

## [Store Chat History in Cosmos DB](./store_chat_history_in_cosmosdb.py)

This is a more complex version of the sample above; it uses Azure CosmosDB NoSQL to store the chat messages.

In order to do that, a simple datamodel is defined, and then a class is created that extends ChatHistory. This class adds `store` and `read` methods, as well as a `create_collection` method that creates a collection in CosmosDB.

This sample further uses the same chat service setup as the sample above, so the keys and other parameters for the chosen model should be in the environment. In addition, an AZURE_COSMOS_DB_NO_SQL_URL and optionally an AZURE_COSMOS_DB_NO_SQL_KEY should be set in the environment; you can also rely on Entra ID Auth instead of the key. The database name can also be put in the environment.
```

python/samples/concepts/chat_history/serialize_chat_history.py (+98 −80)

```diff
@@ -1,94 +1,112 @@
 # Copyright (c) Microsoft. All rights reserved.
 
 import asyncio
-import os
-from typing import TYPE_CHECKING
-
-from semantic_kernel import Kernel
-from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
-from semantic_kernel.connectors.ai.open_ai.prompt_execution_settings.azure_chat_prompt_execution_settings import (
-    AzureChatPromptExecutionSettings,
-)
-from semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion import AzureChatCompletion
+import tempfile
+
+from samples.concepts.setup.chat_completion_services import Services, get_chat_completion_service_and_request_settings
 from semantic_kernel.contents import ChatHistory
-from semantic_kernel.core_plugins.math_plugin import MathPlugin
-from semantic_kernel.core_plugins.time_plugin import TimePlugin
-from semantic_kernel.functions import KernelArguments
-
-if TYPE_CHECKING:
-    pass
-
-
-system_message = """
-You are a chat bot. Your name is Mosscap and
-you have one goal: figure out what people need.
-Your full name, should you need to know it, is
-Splendid Speckled Mosscap. You communicate
-effectively, but you tend to answer with long
-flowery prose. You are also a math wizard,
-especially for adding and subtracting.
-You also excel at joke telling, where your tone is often sarcastic.
-Once you have the answer I am looking for,
-you will return a full answer to me as soon as possible.
+
+"""
+This sample demonstrates how to build a conversational chatbot
+using Semantic Kernel, it features auto function calling,
+but with file-based serialization of the chat history.
+This sample stores and reads the chat history at every turn.
+This is not the best way to do it, but clearly demonstrates the mechanics.
+More optimal would for instance be to only write once when a conversation is done.
+And writing to something other then a file is also usually better.
 """
 
-kernel = Kernel()
-
-# Note: the underlying gpt-35/gpt-4 model version needs to be at least version 0613 to support tools.
-kernel.add_service(AzureChatCompletion(service_id="chat"))
-
-plugins_directory = os.path.join(__file__, "../../../../../prompt_template_samples/")
-# adding plugins to the kernel
-kernel.add_plugin(MathPlugin(), plugin_name="math")
-kernel.add_plugin(TimePlugin(), plugin_name="time")
-
-# Enabling or disabling function calling is done by setting the `function_choice_behavior` attribute for the
-# prompt execution settings. When the function_call parameter is set to "auto" the model will decide which
-# function to use, if any.
-#
-# There are two ways to define the `function_choice_behavior` parameter:
-# 1. Using the type string as `"auto"`, `"required"`, or `"none"`. For example:
-#    configure `function_choice_behavior="auto"` parameter directly in the execution settings.
-# 2. Using the FunctionChoiceBehavior class. For example:
-#    `function_choice_behavior=FunctionChoiceBehavior.Auto()`.
-# Both of these configure the `auto` tool_choice and all of the available plugins/functions
-# registered on the kernel. If you want to limit the available plugins/functions, you must
-# configure the `filters` dictionary attribute for each type of function choice behavior.
-# For example:
-#
-# from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
-
-# function_choice_behavior = FunctionChoiceBehavior.Auto(
-#     filters={"included_functions": ["time-date", "time-time", "math-Add"]}
-# )
-#
-# The filters attribute allows you to specify either: `included_functions`, `excluded_functions`,
-# `included_plugins`, or `excluded_plugins`.
-
-# Note: the number of responses for auto invoking tool calls is limited to 1.
-# If configured to be greater than one, this value will be overridden to 1.
-execution_settings = AzureChatPromptExecutionSettings(
-    service_id="chat",
-    max_tokens=2000,
-    temperature=0.7,
-    top_p=0.8,
-    function_choice_behavior=FunctionChoiceBehavior.Auto(),
-)
-
-arguments = KernelArguments(settings=execution_settings)
 
+# You can select from the following chat completion services that support function calling:
+# - Services.OPENAI
+# - Services.AZURE_OPENAI
+# - Services.AZURE_AI_INFERENCE
+# - Services.ANTHROPIC
+# - Services.BEDROCK
+# - Services.GOOGLE_AI
+# - Services.MISTRAL_AI
+# - Services.OLLAMA
+# - Services.ONNX
+# - Services.VERTEX_AI
+# - Services.DEEPSEEK
+# Please make sure you have configured your environment correctly for the selected chat completion service.
+chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.OPENAI)
+
+
+async def chat(file) -> bool:
+    """
+    Continuously prompt the user for input and show the assistant's response.
+    Type 'exit' to exit.
+    """
+    try:
+        # Try to load the chat history from a file.
+        history = ChatHistory.load_chat_history_from_file(file_path=file)
+        print(f"Chat history successfully loaded {len(history.messages)} messages.")
+    except Exception:
+        # Create a new chat history to store the system message, initial messages, and the conversation.
+        print("Chat history file not found. Starting a new conversation.")
+        history = ChatHistory()
+        history.add_system_message(
+            "You are a chat bot. Your name is Mosscap and you have one goal: figure out what people need."
+        )
+
+    try:
+        # Get the user input
+        user_input = input("User:> ")
+    except (KeyboardInterrupt, EOFError):
+        print("\n\nExiting chat...")
+        return False
+
+    if user_input.lower().strip() == "exit":
+        print("\n\nExiting chat...")
+        return False
+
+    # Add the user input to the chat history
+    history.add_user_message(user_input)
+    # Get a response from the chat completion service
+    result = await chat_completion_service.get_chat_message_content(history, request_settings)
+
+    # Update the chat history with the user's input and the assistant's response
+    if result:
+        print(f"Mosscap:> {result}")
+        history.add_message(result)
+
+    # Save the chat history to a file.
+    print(f"Saving {len(history.messages)} messages to the file.")
+    history.store_chat_history_to_file(file_path=file)
+    return True
 
-async def main() -> None:
-    user_input = "What is the current hour plus 10?"
-    print(f"User:> {user_input}")
 
-    result = await kernel.invoke_prompt(prompt=user_input, arguments=arguments)
+"""
+Sample output:
+
+Welcome to the chat bot!
+Type 'exit' to exit.
+Try a math question to see function calling in action (e.g. 'what is 3+3?').
+Your chat history will be saved in: <local working directory>/tmpq1n1f6qk.json
+Chat history file not found. Starting a new conversation.
+User:> Hello, how are you?
+Mosscap:> Hello! I'm here and ready to help. What do you need today?
+Saving 3 messages to the file.
+Chat history successfully loaded 3 messages.
+User:> exit
+"""
 
-    print(f"Mosscap:> {result}")
 
-    print("\nChat history:")
-    chat_history: ChatHistory = result.metadata["messages"]
-    print(chat_history.serialize())
+async def main() -> None:
+    chatting = True
+    with tempfile.NamedTemporaryFile(mode="w+", dir=".", suffix=".json", delete=True) as file:
+        print(
+            "Welcome to the chat bot!\n"
+            "  Type 'exit' to exit.\n"
+            "  Try a math question to see function calling in action (e.g. 'what is 3+3?')."
+            f"  Your chat history will be saved in: {file.name}"
+        )
+        try:
+            while chatting:
+                chatting = await chat(file.name)
+        except Exception:
+            print("Closing and removing the file.")
 
 
 if __name__ == "__main__":
```
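The load-or-create pattern this rewritten sample uses can be mimicked with plain `json`, independent of Semantic Kernel; the `load_history`/`save_history` helpers and the message-dict format are simplified stand-ins for `ChatHistory` and its file serialization:

```python
import json
from pathlib import Path


def load_history(path: Path) -> list[dict[str, str]]:
    """Load the stored messages, or start a new conversation on any failure."""
    try:
        return json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        # Mirrors the sample's fallback: a missing or unreadable file starts fresh
        # with only the system message.
        return [{"role": "system", "content": "You are a chat bot named Mosscap."}]


def save_history(path: Path, messages: list[dict[str, str]]) -> None:
    """Persist the full history every turn, as the sample does."""
    path.write_text(json.dumps(messages))
```

Writing the whole history on every turn is the same deliberate simplification the sample's docstring calls out; a production design would batch writes or persist once per conversation.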
python/samples/concepts/chat_history/store_chat_history_in_cosmosdb.py (new file, +199; the filename was dropped by the page scrape and is inferred from the links in the concepts README above)

```python
# Copyright (c) Microsoft. All rights reserved.

import asyncio
from dataclasses import dataclass
from typing import Annotated

from samples.concepts.setup.chat_completion_services import Services, get_chat_completion_service_and_request_settings
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.memory.azure_cosmos_db.azure_cosmos_db_no_sql_store import AzureCosmosDBNoSQLStore
from semantic_kernel.contents import ChatHistory
from semantic_kernel.contents.chat_message_content import ChatMessageContent
from semantic_kernel.core_plugins.math_plugin import MathPlugin
from semantic_kernel.core_plugins.time_plugin import TimePlugin
from semantic_kernel.data.record_definition.vector_store_model_decorator import vectorstoremodel
from semantic_kernel.data.record_definition.vector_store_record_fields import (
    VectorStoreRecordDataField,
    VectorStoreRecordKeyField,
)
from semantic_kernel.data.vector_storage.vector_store import VectorStore
from semantic_kernel.data.vector_storage.vector_store_record_collection import VectorStoreRecordCollection

"""
This sample demonstrates how to build a conversational chatbot
using Semantic Kernel, it features auto function calling,
but with Azure CosmosDB as storage for the chat history.
This sample stores and reads the chat history at every turn.
This is not the best way to do it, but clearly demonstrates the mechanics.

Further refinement would be to only write once when a conversation is done.
And there is also no logic to see if there is something to write.
You could also enhance the ChatHistoryModel with a summary and a vector for that
in order to search for similar conversations.
"""


# 1. We first create simple datamodel for the chat history.
#    Note that this model does not contain any vectors,
#    those can be added, for instance to store a summary of the conversation.
@vectorstoremodel
@dataclass
class ChatHistoryModel:
    session_id: Annotated[str, VectorStoreRecordKeyField]
    user_id: Annotated[str, VectorStoreRecordDataField(is_filterable=True)]
    messages: Annotated[list[dict[str, str]], VectorStoreRecordDataField(is_filterable=True)]


# 2. We then create a class that extends the ChatHistory class
#    and implements the methods to store and read the chat history.
#    This could also use one of the history reducers to make
#    sure the database doesn't grow too large.
#    It adds a `store` attribute and a couple of methods.
class ChatHistoryInCosmosDB(ChatHistory):
    """This class extends the ChatHistory class to store the chat history in a Cosmos DB."""

    session_id: str
    user_id: str
    store: VectorStore
    collection: VectorStoreRecordCollection[str, ChatHistoryModel] | None = None

    async def create_collection(self, collection_name: str) -> None:
        """Create a collection with the inbuild data model using the vector store.

        First create the collection, then call this method to create the collection itself.
        """
        self.collection = self.store.get_collection(
            collection_name=collection_name,
            data_model_type=ChatHistoryModel,
        )
        await self.collection.create_collection_if_not_exists()

    async def store_messages(self) -> None:
        """Store the chat history in the Cosmos DB.

        Note that we use model_dump to convert the chat message content into a serializable format.
        """
        if self.collection:
            await self.collection.upsert(
                ChatHistoryModel(
                    session_id=self.session_id,
                    user_id=self.user_id,
                    messages=[msg.model_dump() for msg in self.messages],
                )
            )

    async def read_messages(self) -> None:
        """Read the chat history from the Cosmos DB.

        Note that we use the model_validate method to convert the serializable format back into a ChatMessageContent.
        """
        if self.collection:
            record = await self.collection.get(self.session_id)
            if record:
                for message in record.messages:
                    self.messages.append(ChatMessageContent.model_validate(message))


# 3. We now create a fairly standard kernel, with functions and a chat service.
# Create and configure the kernel.
kernel = Kernel()

# Load some sample plugins (for demonstration of function calling).
kernel.add_plugin(MathPlugin(), plugin_name="math")
kernel.add_plugin(TimePlugin(), plugin_name="time")

# You can select from the following chat completion services that support function calling:
# - Services.OPENAI
# - Services.AZURE_OPENAI
# - Services.AZURE_AI_INFERENCE
# - Services.ANTHROPIC
# - Services.BEDROCK
# - Services.GOOGLE_AI
# - Services.MISTRAL_AI
# - Services.OLLAMA
# - Services.ONNX
# - Services.VERTEX_AI
# - Services.DEEPSEEK
# Please make sure you have configured your environment correctly for the selected chat completion service.
chat_completion_service, request_settings = get_chat_completion_service_and_request_settings(Services.AZURE_OPENAI)

# Configure the function choice behavior. Here, we set it to Auto, where auto_invoke=True by default.
# With `auto_invoke=True`, the model will automatically choose and call functions as needed.
request_settings.function_choice_behavior = FunctionChoiceBehavior.Auto(filters={"excluded_plugins": ["ChatBot"]})

kernel.add_service(chat_completion_service)


# 4. The main chat loop, which takes a history object and prompts the user for input.
#    It then adds the user input to the history and gets a response from the chat completion service.
#    Finally, it prints the response and saves the chat history to the Cosmos DB.
async def chat(history: ChatHistoryInCosmosDB) -> bool:
    """
    Continuously prompt the user for input and show the assistant's response.
    Type 'exit' to exit.
    """
    await history.read_messages()
    print(f"Chat history successfully loaded {len(history.messages)} messages.")
    if len(history.messages) == 0:
        # if it is a new conversation, add the system message and a couple of initial messages.
        history.add_system_message(
            "You are a chat bot. Your name is Mosscap and you have one goal: figure out what people need."
        )
        history.add_user_message("Hi there, who are you?")
        history.add_assistant_message("I am Mosscap, a chat bot. I'm trying to figure out what people need.")

    try:
        user_input = input("User:> ")
    except (KeyboardInterrupt, EOFError):
        print("\n\nExiting chat...")
        return False

    if user_input.lower().strip() == "exit":
        print("\n\nExiting chat...")
        return False

    # add the user input to the chat history
    history.add_user_message(user_input)

    result = await chat_completion_service.get_chat_message_content(history, request_settings, kernel=kernel)

    if result:
        print(f"Mosscap:> {result}")
        history.add_message(result)

    # Save the chat history to CosmosDB.
    print(f"Saving {len(history.messages)} messages to AzureCosmosDB.")
    await history.store_messages()
    return True


async def main() -> None:
    delete_when_done = True
    session_id = "session1"
    chatting = True
    # 5. We now create the store, ChatHistory and collection and start the chat loop.

    # First we enter the store context manager to connect.
    # The create_database flag will create the database if it does not exist.
    async with AzureCosmosDBNoSQLStore(create_database=True) as store:
        # Then we create the chat history in CosmosDB.
        history = ChatHistoryInCosmosDB(store=store, session_id=session_id, user_id="user")
        # Finally we create the collection.
        await history.create_collection(collection_name="chat_history")
        print(
            "Welcome to the chat bot!\n"
            "  Type 'exit' to exit.\n"
            "  Try a math question to see function calling in action (e.g. 'what is 3+3?')."
        )
        try:
            while chatting:
                chatting = await chat(history)
        except Exception:
            print("Closing chat...")
        if delete_when_done and history.collection:
            await history.collection.delete_collection()


if __name__ == "__main__":
    asyncio.run(main())
```
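The store/read round-trip that `ChatHistoryInCosmosDB` implements can be illustrated without any Azure dependency by letting a plain dict stand in for the Cosmos DB collection; the `InMemoryHistory` class below is a hypothetical analog invented for this sketch, not part of the sample:

```python
from dataclasses import dataclass, field


@dataclass
class InMemoryHistory:
    """Dict-backed analog of ChatHistoryInCosmosDB: records are keyed by session_id."""

    session_id: str
    user_id: str
    backend: dict[str, dict]
    messages: list[dict[str, str]] = field(default_factory=list)

    def store_messages(self) -> None:
        # Mirrors upserting a ChatHistoryModel record keyed by session_id.
        self.backend[self.session_id] = {
            "session_id": self.session_id,
            "user_id": self.user_id,
            "messages": list(self.messages),
        }

    def read_messages(self) -> None:
        # Mirrors collection.get(session_id) followed by re-validating each message.
        record = self.backend.get(self.session_id)
        if record:
            self.messages.extend(record["messages"])
```

A second history object created with the same `session_id` and backend sees the first one's messages after `read_messages()`, which is exactly how the sample resumes a conversation across runs.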

python/semantic_kernel/contents/chat_history.py (+10 −7)

```diff
@@ -1,6 +1,5 @@
 # Copyright (c) Microsoft. All rights reserved.
 
-import json
 import logging
 from collections.abc import Generator, Iterable
 from functools import singledispatchmethod
@@ -363,26 +362,30 @@ def restore_chat_history(cls: type[_T], chat_history_json: str) -> _T:
             fails validation.
         """
         try:
-            return cls(**json.loads(chat_history_json))
+            return cls.model_validate_json(chat_history_json)
         except Exception as e:
             raise ContentInitializationError(f"Invalid JSON format: {e}")
 
     def store_chat_history_to_file(self, file_path: str) -> None:
         """Stores the serialized ChatHistory to a file.
 
+        Uses mode "w" which means the file is created if it does not exist and gets truncated if it does.
+
         Args:
-            file_path (str): The path to the file where the serialized data will be stored.
+            file_path: The path to the file where the serialized data will be stored.
         """
         json_str = self.serialize()
-        with open(file_path, "w") as file:
-            file.write(json_str)
+        with open(file_path, "w") as local_file:
+            local_file.write(json_str)
 
     @classmethod
-    def load_chat_history_from_file(cls, file_path: str) -> "ChatHistory":
+    def load_chat_history_from_file(cls: type[_T], file_path: str) -> _T:
         """Loads the ChatHistory from a file.
 
+        Uses mode "r" which means it can only be read if it exists.
+
         Args:
-            file_path (str): The path to the file from which to load the ChatHistory.
+            file_path: The path to the file from which to load the ChatHistory.
 
         Returns:
             ChatHistory: The deserialized ChatHistory instance.
```
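The switch from `cls(**json.loads(...))` to `cls.model_validate_json(...)` lets Pydantic parse and validate the JSON in one step, with validation errors surfaced consistently; a minimal standalone illustration (the `Message`/`History` models here are invented for this example, not SK's types):

```python
from pydantic import BaseModel


class Message(BaseModel):
    role: str
    content: str


class History(BaseModel):
    messages: list[Message] = []


raw = '{"messages": [{"role": "user", "content": "hi"}]}'

# model_validate_json parses the JSON string directly, without an intermediate
# json.loads, and coerces the nested dicts into Message instances.
history = History.model_validate_json(raw)
```

It also rejects strings that are not valid JSON at all, which `cls(**json.loads(...))` would only catch via the separate `json.loads` step.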

python/semantic_kernel/data/record_definition/vector_store_model_decorator.py (+8 −6)

```diff
@@ -3,7 +3,7 @@
 import logging
 from inspect import Parameter, _empty, signature
 from types import MappingProxyType, NoneType
-from typing import Any
+from typing import TypeVar
 
 from semantic_kernel.data.record_definition.vector_store_model_definition import VectorStoreRecordDefinition
 from semantic_kernel.data.record_definition.vector_store_record_fields import (
@@ -15,11 +15,13 @@
 
 logger = logging.getLogger(__name__)
 
+_T = TypeVar("_T")
+
 
 @experimental
 def vectorstoremodel(
-    cls: Any | None = None,
-):
+    cls: type[_T] | None = None,
+) -> type[_T]:
     """Returns the class as a vector store model.
 
     This decorator makes a class a vector store model.
@@ -44,18 +46,18 @@ def vectorstoremodel(
         VectorStoreModelException: If there is a ndarray field without a serialize or deserialize function.
     """
 
-    def wrap(cls: Any):
+    def wrap(cls: type[_T]) -> type[_T]:
         # get fields and annotations
         cls_sig = signature(cls)
         setattr(cls, "__kernel_vectorstoremodel__", True)
         setattr(cls, "__kernel_vectorstoremodel_definition__", _parse_signature_to_definition(cls_sig.parameters))
 
-        return cls
+        return cls  # type: ignore
 
     # See if we're being called as @vectorstoremodel or @vectorstoremodel().
     if cls is None:
         # We're called with parens.
-        return wrap
+        return wrap  # type: ignore
 
     # We're called as @vectorstoremodel without parens.
     return wrap(cls)
```
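The typing change above follows the standard pattern for class decorators that return the class they receive: annotating with `type[_T] -> type[_T]` lets type checkers keep the decorated class's constructor signature. A standalone sketch (the `register` decorator here is illustrative, not SK's):

```python
from typing import TypeVar

_T = TypeVar("_T")


def register(cls: type[_T]) -> type[_T]:
    """Mark a class with an attribute and return it, preserving its static type."""
    setattr(cls, "__registered__", True)
    return cls


@register
class Point:
    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y
```

With the `Any`-typed version, a type checker would see `Point` as `Any` after decoration and lose the `(x: int, y: int)` signature; with the `TypeVar` version, `Point(1, 2)` keeps full checking.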
