Agents - Creating and using agents in Semantic Kernel
- OpenAI Assistant Chart Maker Streaming
- OpenAI Assistant Chart Maker
- OpenAI Assistant File Manipulation Streaming
- OpenAI Assistant File Manipulation
- OpenAI Assistant Retrieval
- OpenAI Assistant Streaming
- OpenAI Assistant Structured Outputs
- OpenAI Assistant Templating Streaming
- OpenAI Assistant Vision Streaming
- Bedrock Agent Simple Chat Streaming
- Bedrock Agent Simple Chat
- Bedrock Agent With Code Interpreter Streaming
- Bedrock Agent With Code Interpreter
- Bedrock Agent With Kernel Function Simple
- Bedrock Agent With Kernel Function Streaming
- Bedrock Agent With Kernel Function
- Bedrock Agent Mixed Chat Agents Streaming
- Bedrock Agent Mixed Chat Agents
- Chat Completion Function Termination
- Chat Completion Templating
- Chat Completion Summary History Reducer Agent Chat
- Chat Completion Summary History Reducer Single Agent
- Chat Completion Truncate History Reducer Agent Chat
- Chat Completion Truncate History Reducer Single Agent
- Mixed Chat Agents Plugins
- Mixed Chat Agents
- Mixed Chat Files
- Mixed Chat Images
- Mixed Chat Reset
- Mixed Chat Streaming
Audio - Using services that support audio-to-text and text-to-audio conversion
- Chat with Audio Input
- Chat with Audio Output
- Chat with Audio Input and Output
- Audio Player
- Audio Recorder
AutoFunctionCalling - Using Auto Function Calling to allow function-call-capable models to invoke Kernel Functions automatically
- Azure Python Code Interpreter Function Calling
- Function Calling with Required Type
- Parallel Function Calling
- Chat Completion with Auto Function Calling Streaming
- Functions Defined in JSON Prompt
- Chat Completion with Manual Function Calling Streaming
- Functions Defined in YAML Prompt
- Chat Completion with Auto Function Calling
- Chat Completion with Manual Function Calling
- Nexus Raven
ChatCompletion - Using ChatCompletion messaging-capable services with models
- Simple Chatbot
- Simple Chatbot Kernel Function
- Simple Chatbot Logit Bias
- Simple Chatbot Store Metadata
- Simple Chatbot Streaming
- Simple Chatbot with Image
- Simple Chatbot with Summary History Reducer Keeping Function Content
- Simple Chatbot with Summary History Reducer
- Simple Chatbot with Truncation History Reducer
- Simple Chatbot with Summary History Reducer using Auto Reduce
- Simple Chatbot with Truncation History Reducer using Auto Reduce
ChatHistory - Using and serializing the ChatHistory
Filtering - Creating and using Filters
- Auto Function Invoke Filters
- Function Invocation Filters
- Function Invocation Filters Stream
- Prompt Filters
- Retry with Filters
Local Models - Using the OpenAI connector and OnnxGenAI connector to talk to models hosted locally in Ollama, OnnxGenAI, and LM Studio
- ONNX Chat Completion
- LM Studio Text Embedding
- LM Studio Chat Completion
- ONNX Phi3 Vision Completion
- Ollama Chat Completion
- ONNX Text Completion
Memory - Using Memory AI concepts
- Simple Memory
- Memory Data Models
- Memory with Pandas Dataframes
- Complex Memory
- Full sample with Azure AI Search including function calling
Model-as-a-Service - Using models deployed as serverless APIs on Azure AI Studio to benchmark model performance against open-source datasets
On Your Data - Examples of using AzureOpenAI On Your Data
- Azure Chat GPT with Data API
- Azure Chat GPT with Data API Function Calling
- Azure Chat GPT with Data API Vector Search
Plugins - Different ways of creating and using Plugins
- Azure Key Vault Settings
- Azure Python Code Interpreter
- OpenAI Function Calling with Custom Plugin
- Plugins from Directory
Processes - Examples of using the Process Framework
PromptTemplates - Using Templates with parametrization for Prompt rendering
- Template Language
- Azure Chat GPT API Jinja2
- Load YAML Prompt
- Azure Chat GPT API Handlebars
- Configuring Prompts
Reasoning - Using ChatCompletion to reason with OpenAI Reasoning models
Search - Using Search services to retrieve information
Structured Outputs - How to leverage OpenAI's json_schema Structured Outputs functionality
TextGeneration - Using TextGeneration-capable services with models
In Semantic Kernel for Python, we leverage Pydantic Settings to manage configurations for AI and Memory Connectors, among other components. Here’s a clear guide on how to configure your settings effectively:
1. Reading Environment Variables:
   - Primary Source: Pydantic first attempts to read the required settings from environment variables.
2. Using a .env File:
   - Fallback Source: If the required environment variables are not set, Pydantic will look for a `.env` file in the current working directory.
   - Custom Path (Optional): You can specify an alternative path for the `.env` file via `env_file_path`. This can be either a relative or an absolute path.
3. Direct Constructor Input:
   - As an alternative to environment variables and `.env` files, you can pass the required settings directly through the constructor of the AI Connector or Memory Connector, as shown in the sketch after this list.
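
A minimal sketch of these three routes, using the OpenAI chat connector as an example. The model id, API key, and `.env` path are placeholder values; the no-argument form assumes the standard `OPENAI_API_KEY` / `OPENAI_CHAT_MODEL_ID` environment variables are set:

```python
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# 1. Environment variables: with OPENAI_API_KEY and OPENAI_CHAT_MODEL_ID set,
#    the connector can be created with no arguments.
service_from_env = OpenAIChatCompletion()

# 2. A .env file: point Pydantic at a custom .env location
#    (relative or absolute; placeholder path shown).
service_from_dotenv = OpenAIChatCompletion(env_file_path="path/to/.env")

# 3. Direct constructor input: pass the settings explicitly (placeholder values).
service_explicit = OpenAIChatCompletion(
    ai_model_id="gpt-4o",
    api_key="sk-...",  # never hard-code real keys in committed source
)
```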
Authenticating to your Azure resources with a Microsoft Entra authentication token is supported as a built-in feature of the `AzureChatCompletion` AI service connector. If you do not provide an API key (through an environment variable, a `.env` file, or the constructor) and you also do not provide a custom `AsyncAzureOpenAI` client, an `ad_token`, or an `ad_token_provider`, the `AzureChatCompletion` connector will attempt to retrieve a token using the `DefaultAzureCredential`.
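
A minimal sketch of this fallback path, with placeholder values for the deployment name and endpoint. Because no API key, token, token provider, or custom client is supplied, the connector resolves credentials via `DefaultAzureCredential`, which requires the `azure-identity` package to be installed:

```python
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

# No api_key, ad_token, ad_token_provider, or custom AsyncAzureOpenAI client is
# provided, so the connector falls back to DefaultAzureCredential for auth.
chat_service = AzureChatCompletion(
    deployment_name="my-deployment",                  # placeholder
    endpoint="https://my-resource.openai.azure.com",  # placeholder
)
```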
To successfully retrieve and use the Entra auth token, you need the `Cognitive Services OpenAI Contributor` role assigned on your Azure OpenAI resource. By default, the `https://cognitiveservices.azure.com` token endpoint is used. You can override this endpoint by setting the `AZURE_OPENAI_TOKEN_ENDPOINT` variable in your environment or `.env` file, or by passing a new value to the `AzureChatCompletion` constructor as part of the `AzureOpenAISettings`.
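
For example, a `.env` entry overriding the token endpoint might look like the following (the sovereign-cloud endpoint shown is purely illustrative):

```
# .env (illustrative override of the default token endpoint)
AZURE_OPENAI_TOKEN_ENDPOINT="https://cognitiveservices.azure.us"
```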
- .env File Placement: We highly recommend placing the `.env` file in the `semantic-kernel/python` root directory. This is a common practice when developing in the Semantic Kernel repository.
By following these guidelines, you can ensure that your settings for various components are configured correctly, enabling seamless functionality and integration of Semantic Kernel in your Python projects.