Class Chat
- Namespace: OllamaSharp
- Assembly: OllamaSharp.dll
A chat helper that handles the chat logic internally and automatically extends the message history.
var ollama = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(ollama);
// ...
while (true)
{
Console.Write("You: ");
var message = Console.ReadLine()!;
Console.Write("Ollama: ");
await foreach (var answerToken in chat.SendAsync(message))
Console.Write(answerToken);
// ...
Console.WriteLine();
}
// ...
// Output:
// You: Write a haiku about AI models
// Ollama: Code whispers secrets
// Intelligent designs unfold
// Minds beyond our own
public class Chat
- Inheritance
  object → Chat
Constructors
Chat(IOllamaApiClient)
Initializes a new instance of the Chat class. This basic constructor sets up the chat without a predefined system prompt.
public Chat(IOllamaApiClient client)
Parameters
- client (IOllamaApiClient)
  An implementation of the IOllamaApiClient interface, used for managing communication with the chat backend.
Examples
Setting up a chat instance without a system prompt:
var client = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(client);
// Sending a message to the chat
await foreach (var token in chat.SendAsync("Hello, how are you?"))
    Console.Write(token);
Exceptions
- ArgumentNullException
  Thrown when the client parameter is null.
Chat(IOllamaApiClient, string)
Initializes a new instance of the Chat class with a custom system prompt. This constructor allows you to define the assistant's initial behavior or personality using a system prompt.
public Chat(IOllamaApiClient client, string systemPrompt)
Parameters
- client (IOllamaApiClient)
  An implementation of the IOllamaApiClient interface, used for managing communication with the chat backend.
- systemPrompt (string)
  A string representing the system prompt that defines the behavior and context for the chat assistant. For example, you can set the assistant to be helpful, humorous, or focused on a specific domain.
Examples
Creating a chat instance with a custom system prompt:
var client = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var systemPrompt = "You are an expert assistant specializing in data science.";
var chat = new Chat(client, systemPrompt);
// Sending a message to the chat
await foreach (var token in chat.SendAsync("Can you explain neural networks?"))
    Console.Write(token);
Exceptions
- ArgumentNullException
  Thrown when the client parameter is null.
Properties
AllowRecursiveToolCalls
Allows recursive tool calls for cases in which the model decides to call further tools after a previous tool call has completed.
public bool AllowRecursiveToolCalls { get; set; }
Property Value
- bool
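A minimal sketch of enabling recursive tool calls; the Tool instances are placeholders in the style of the examples further below:
var ollama = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(ollama)
{
    AllowRecursiveToolCalls = true // the model may chain tool calls, e.g. lookup -> conversion
};
var tools = new List<Tool> { new Tool() }; // placeholder tools, see the tools documentation
await foreach (var token in chat.SendAsync("What is the weather in Paris, in Fahrenheit?", tools))
    Console.Write(token);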
Client
Gets the Ollama API client
public IOllamaApiClient Client { get; }
Property Value
- IOllamaApiClient
Messages
Gets or sets the messages of the chat history
public List<Message> Messages { get; set; }
Property Value
- List<Message>
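Because Messages is an ordinary list, the history can be inspected or pre-seeded before sending. A minimal sketch, assuming Message exposes Role and Content properties:
var chat = new Chat(ollama);
// pre-seed the history with an earlier exchange (property names assumed)
chat.Messages.Add(new Message { Role = ChatRole.User, Content = "My name is Alice." });
chat.Messages.Add(new Message { Role = ChatRole.Assistant, Content = "Nice to meet you, Alice!" });
await foreach (var token in chat.SendAsync("What is my name?"))
    Console.Write(token);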
Model
Gets or sets the AI model to chat with
public string Model { get; set; }
Property Value
- string
Options
Gets or sets the RequestOptions to chat with
public RequestOptions? Options { get; set; }
Property Value
- RequestOptions?
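A minimal sketch of tuning request options; Temperature and NumCtx are assumed members of RequestOptions:
var chat = new Chat(ollama)
{
    Options = new RequestOptions
    {
        Temperature = 0.2f, // lower temperature for more deterministic answers (member name assumed)
        NumCtx = 4096       // larger context window (member name assumed)
    }
};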
Think
Gets or sets a value that enables or disables thinking. When activating this option, use reasoning models that support thinking, such as openthinker, qwen3, deepseek-r1 or phi4-reasoning. Enabling it for non-reasoning models might cause errors, see https://github.com/awaescher/OllamaSharp/releases/tag/5.2.0 More information: https://github.com/ollama/ollama/releases/tag/v0.9.0
public ThinkValue? Think { get; set; }
Property Value
- ThinkValue?
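A minimal sketch of enabling thinking, assuming ThinkValue converts implicitly from bool (otherwise construct a ThinkValue explicitly); deepseek-r1 stands in for any reasoning model:
var ollama = new OllamaApiClient("http://localhost:11434", "deepseek-r1:latest");
var chat = new Chat(ollama) { Think = true };
await foreach (var answerToken in chat.SendAsync("Why is the sky blue?"))
    Console.Write(answerToken); // with Think = true, think tokens arrive via OnThink instead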
ToolInvoker
Gets or sets the instance that invokes the provided tools when the AI model requests them.
public IToolInvoker ToolInvoker { get; set; }
Property Value
- IToolInvoker
Methods
SendAsAsync(ChatRole, string, IEnumerable<IEnumerable<byte>>?, CancellationToken)
Sends a message in a given role to the currently selected model and streams its response asynchronously.
public IAsyncEnumerable<string> SendAsAsync(ChatRole role, string message, IEnumerable<IEnumerable<byte>>? imagesAsBytes, CancellationToken cancellationToken = default)
Parameters
- role (ChatRole)
  The role in which the message should be sent. Refer to ChatRole for supported roles.
- message (string)
  The message to send to the model.
- imagesAsBytes (IEnumerable<IEnumerable<byte>>?)
  Optional images represented as byte arrays to include in the request. This parameter can be null.
- cancellationToken (CancellationToken)
  A cancellation token to observe while waiting for the response. By default, this parameter is set to None.
Returns
- IAsyncEnumerable<string>
An IAsyncEnumerable<T> of strings representing the streamed response generated by the model.
Examples
Sending a user message with optional images:
var client = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(client);
var role = new ChatRole("user");
var message = "What's the weather like today?";
var images = new List<IEnumerable<byte>> { File.ReadAllBytes("exampleImage.jpg") };
await foreach (var response in chat.SendAsAsync(role, message, images, CancellationToken.None))
{
Console.WriteLine(response);
}
SendAsAsync(ChatRole, string, IEnumerable<object>?, IEnumerable<string>?, object?, CancellationToken)
Sends a message as a specified role to the current model and streams back its response as an asynchronous enumerable.
public IAsyncEnumerable<string> SendAsAsync(ChatRole role, string message, IEnumerable<object>? tools, IEnumerable<string>? imagesAsBase64 = null, object? format = null, CancellationToken cancellationToken = default)
Parameters
- role (ChatRole)
  The role in which the message should be sent. This determines the context or perspective of the message.
- message (string)
  The message that needs to be sent to the chat model.
- tools (IEnumerable<object>?)
  A collection of tools available for the model to utilize. Tools can alter the behavior of the model, such as turning off response streaming automatically when used.
- imagesAsBase64 (IEnumerable<string>?)
  An optional collection of images encoded in Base64 format, which are sent along with the message to the model.
- format (object?)
  Defines the response format. Acceptable values include "json" or a schema object created with JsonSerializerOptions.Default.GetJsonSchemaAsNode.
- cancellationToken (CancellationToken)
  A token to cancel the ongoing operation if required.
Returns
- IAsyncEnumerable<string>
An asynchronous enumerable of response strings streamed from the model.
Examples
Using the SendAsAsync(ChatRole, string, IEnumerable<object>?, IEnumerable<string>?, object?, CancellationToken) method to send a message and stream the model's response:
var chat = new Chat(client);
var role = new ChatRole("assistant");
var tools = new List<Tool>();
var images = new List<string> { "base64EncodedImageData" };
await foreach (var response in chat.SendAsAsync(role, "Generate a summary for the attached image", tools, images))
{
Console.WriteLine($"Received response: {response}");
}
Exceptions
- NotSupportedException
  Thrown if the format argument is of type CancellationToken by mistake, or if any unsupported types are passed.
SendAsAsync(ChatRole, string, IEnumerable<string>?, CancellationToken)
Sends a message with a specified role to the current model and streams the response as an asynchronous sequence of strings.
public IAsyncEnumerable<string> SendAsAsync(ChatRole role, string message, IEnumerable<string>? imagesAsBase64, CancellationToken cancellationToken = default)
Parameters
- role (ChatRole)
  The role from which the message originates, such as "User" or "Assistant".
- message (string)
  The message to send to the model.
- imagesAsBase64 (IEnumerable<string>?)
  Optional collection of images, encoded in Base64 format, to include with the message.
- cancellationToken (CancellationToken)
  A token that can be used to cancel the operation.
Returns
- IAsyncEnumerable<string>
An asynchronous sequence of strings representing the streamed response from the model.
Examples
var client = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(client)
{
Model = "llama3.2-vision:latest"
};
// Sending a message as a user role and processing the response
await foreach (var response in chat.SendAsAsync(ChatRole.User, "Describe the image", (IEnumerable<string>?)null))
{
Console.WriteLine(response);
}
SendAsAsync(ChatRole, string, CancellationToken)
Sends a message in a given role to the currently selected model and streams its response.
public IAsyncEnumerable<string> SendAsAsync(ChatRole role, string message, CancellationToken cancellationToken = default)
Parameters
- role (ChatRole)
  The role in which the message should be sent, represented by a ChatRole.
- message (string)
  The message to be sent as a string.
- cancellationToken (CancellationToken)
  An optional CancellationToken to observe while waiting for the response.
Returns
- IAsyncEnumerable<string>
An IAsyncEnumerable<T> of strings representing the streamed response from the server.
Examples
Example usage of the SendAsAsync(ChatRole, string, CancellationToken) method:
var client = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(client);
var role = new ChatRole("assistant");
var responseStream = chat.SendAsAsync(role, "How can I assist you today?");
await foreach (var response in responseStream)
{
Console.WriteLine(response); // Streams and prints the response from the server
}
SendAsync(string, IEnumerable<IEnumerable<byte>>?, CancellationToken)
Sends a message to the currently selected model and streams its response
public IAsyncEnumerable<string> SendAsync(string message, IEnumerable<IEnumerable<byte>>? imagesAsBytes, CancellationToken cancellationToken = default)
Parameters
- message (string)
  The message to send.
- imagesAsBytes (IEnumerable<IEnumerable<byte>>?)
  Images in byte representation to send to the model.
- cancellationToken (CancellationToken)
  The token to cancel the operation with.
Returns
- IAsyncEnumerable<string>
An IAsyncEnumerable<T> that streams the response.
Examples
Getting a response from the model with an image:
var client = new HttpClient();
var cat = await client.GetByteArrayAsync("https://cataas.com/cat");
var ollama = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(ollama);
var response = chat.SendAsync("What do you see?", [cat]);
await foreach (var answerToken in response) Console.Write(answerToken);
// Output: The image shows a white kitten with black markings on its
// head and tail, sitting next to an orange tabby cat. The kitten
// is looking at the camera while the tabby cat appears to be
// sleeping or resting with its eyes closed. The two cats are
// lying in a blanket that has been rumpled up.
SendAsync(string, IEnumerable<object>?, IEnumerable<string>?, object?, CancellationToken)
Sends a message to the currently selected model and streams its response. Allows for optional tools, images, or response formatting to customize the interaction.
public IAsyncEnumerable<string> SendAsync(string message, IEnumerable<object>? tools, IEnumerable<string>? imagesAsBase64 = null, object? format = null, CancellationToken cancellationToken = default)
Parameters
- message (string)
  The message to send to the chat model as a string.
- tools (IEnumerable<object>?)
  A collection of Tool instances that the model can utilize. Enabling tools automatically disables response streaming. For more information, see the tools documentation: Tool Support.
- imagesAsBase64 (IEnumerable<string>?)
  An optional collection of images encoded as Base64 strings to pass into the model.
- format (object?)
  Specifies the response format. Can be set to "json" or an object created with JsonSerializerOptions.Default.GetJsonSchemaAsNode.
- cancellationToken (CancellationToken)
  A CancellationToken to observe while waiting for the operation to complete.
Returns
- IAsyncEnumerable<string>
An asynchronous enumerable stream of string responses from the model.
Examples
Example usage of SendAsync(string, IEnumerable<object>?, IEnumerable<string>?, object?, CancellationToken):
var client = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(client);
var tools = new List<Tool> { new Tool() }; // Example tools
var images = new List<string> { ConvertImageToBase64("path-to-image.jpg") };
await foreach (var response in chat.SendAsync(
"Tell me about recent advancements in AI.",
tools: tools,
imagesAsBase64: images,
format: "json",
cancellationToken: CancellationToken.None))
{
Console.WriteLine(response);
}
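For structured output beyond "json", a schema node can be passed as format. A minimal sketch assuming the .NET 9 schema exporter that the format parameter description references; the Summary type is hypothetical:
using System.Text.Json;
using System.Text.Json.Schema;

// hypothetical shape for the structured response
record Summary(string Title, string[] KeyPoints);

// export a JSON schema node for the type and pass it as the response format
var schema = JsonSerializerOptions.Default.GetJsonSchemaAsNode(typeof(Summary));
await foreach (var token in chat.SendAsync("Summarize the article.", tools: null, format: schema))
    Console.Write(token);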
SendAsync(string, IEnumerable<string>?, CancellationToken)
Sends a message to the currently selected model and streams its response
public IAsyncEnumerable<string> SendAsync(string message, IEnumerable<string>? imagesAsBase64, CancellationToken cancellationToken = default)
Parameters
- message (string)
  The message to send.
- imagesAsBase64 (IEnumerable<string>?)
  Base64 encoded images to send to the model.
- cancellationToken (CancellationToken)
  The token to cancel the operation with.
Returns
- IAsyncEnumerable<string>
An IAsyncEnumerable<T> that streams the response.
Examples
Getting a response from the model with an image:
var client = new HttpClient();
var cat = await client.GetByteArrayAsync("https://cataas.com/cat");
var base64Cat = Convert.ToBase64String(cat);
var ollama = new OllamaApiClient("http://localhost:11434", "llama3.2-vision:latest");
var chat = new Chat(ollama);
var response = chat.SendAsync("What do you see?", [base64Cat]);
await foreach (var answerToken in response) Console.Write(answerToken);
// Output:
// The image shows a cat lying on the floor next to an iPad. The cat is looking
// at the screen, which displays a game with fish and other sea creatures. The
// cat's paw is touching the screen, as if it is playing the game. The background
// of the image is a wooden floor.
SendAsync(string, CancellationToken)
Sends a message to the currently selected model and streams its response
public IAsyncEnumerable<string> SendAsync(string message, CancellationToken cancellationToken = default)
Parameters
- message (string)
  The message to send.
- cancellationToken (CancellationToken)
  The token to cancel the operation with.
Returns
- IAsyncEnumerable<string>
An IAsyncEnumerable<T> that streams the response.
Examples
Getting a response from the model:
var response = chat.SendAsync("Write a haiku about AI models");
await foreach (var answerToken in response)
    Console.Write(answerToken);
Events
OnThink
Event that gets fired for each token the AI model emits while thinking. This only works for models that support thinking according to their Ollama manifest, and only if Think is set to true. If Think is null, think tokens are written to the default model output; if Think is false, think tokens are not emitted.
public event EventHandler<string>? OnThink
Event Type
- EventHandler<string>?
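A minimal sketch of routing think tokens away from the answer stream; it assumes a reasoning model and Think set to true:
chat.Think = true; // requires a reasoning model, see the Think property
chat.OnThink += (sender, thinkToken) => Console.Error.Write(thinkToken); // reasoning goes to stderr
await foreach (var answerToken in chat.SendAsync("Solve 17 * 24 step by step."))
    Console.Write(answerToken); // only the final answer is streamed here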
OnToolCall
Gets fired when the AI model wants to invoke a tool.
public event EventHandler<Message.ToolCall>? OnToolCall
Event Type
- EventHandler<Message.ToolCall>?
OnToolResult
Gets fired after a tool was invoked and the result is available.
public event EventHandler<ToolResult>? OnToolResult
Event Type
- EventHandler<ToolResult>?
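A minimal sketch of observing tool invocations; the Function and Name members on the tool call arguments are assumptions:
chat.OnToolCall += (sender, toolCall) =>
    Console.WriteLine($"Model requests tool: {toolCall.Function?.Name}"); // member names assumed

chat.OnToolResult += (sender, toolResult) =>
    Console.WriteLine("Tool finished; its result is fed back to the model.");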