Namespace OllamaSharp.Models
Classes
- CopyModelRequest
Copy a model. Creates a model with another name from an existing model.
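A minimal usage sketch for this request; the CopyModelAsync call and the Source/Destination property names are assumptions mirroring the source and destination fields of the Ollama copy API.

```csharp
using OllamaSharp;
using OllamaSharp.Models;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

// Copy "llama3.2" to a second local name. Source/Destination are assumed
// to map to the "source"/"destination" fields of /api/copy.
await ollama.CopyModelAsync(new CopyModelRequest
{
    Source = "llama3.2",
    Destination = "llama3.2-backup"
});
```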
- CreateModelRequest
Create a model from: another model; a safetensors directory; or a GGUF file. If you are creating a model from a safetensors directory or from a GGUF file, you must create a blob for each of the files and then use the file name and SHA256 digest associated with each blob in the files field.
- CreateModelResponse
Represents the response from the /api/create endpoint.
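A sketch of deriving a new model from an existing one; the Model, From and System property names and the streaming CreateModelAsync call are assumptions based on the /api/create fields.

```csharp
using OllamaSharp;
using OllamaSharp.Models;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

// Property names are assumed to map to the "model", "from" and "system"
// fields of /api/create.
var create = new CreateModelRequest
{
    Model = "mario",
    From = "llama3.2",
    System = "You are Mario from Super Mario Bros."
};

// The endpoint streams CreateModelResponse status updates until done.
await foreach (var status in ollama.CreateModelAsync(create))
    Console.WriteLine(status?.Status);
```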
- DeleteModelRequest
Delete a model and its data.
- Details
Represents additional details about a model.
- EmbedRequest
Generate embeddings from a model.
- EmbedResponse
The response from the /api/embed endpoint.
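A sketch of requesting embeddings; the Input and Embeddings property names are assumptions mirroring the /api/embed request and response fields.

```csharp
using System.Linq;
using OllamaSharp;
using OllamaSharp.Models;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

// "Input" is assumed to accept one or more strings, as in /api/embed.
var embed = await ollama.EmbedAsync(new EmbedRequest
{
    Model = "all-minilm",
    Input = new List<string> { "Why is the sky blue?" }
});

// "Embeddings" is assumed to hold one vector per input string.
foreach (var vector in embed.Embeddings)
    Console.WriteLine($"Embedding with {vector.Count()} dimensions");
```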
- ErrorResponse
Ollama server error response message.
- GenerateDoneResponseStream
Represents the final response from the /api/generate endpoint.
- GenerateRequest
Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
- GenerateResponseStream
The response from the /api/generate endpoint when streaming is enabled.
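A sketch of streaming generation, assuming a GenerateAsync call that yields GenerateResponseStream items whose Response property carries the next text fragment; the final streamed item is the GenerateDoneResponseStream listed above.

```csharp
using OllamaSharp;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));
ollama.SelectedModel = "llama3.2";

// Each streamed item carries a partial response; the final item adds
// timing and token statistics.
await foreach (var stream in ollama.GenerateAsync("Why is the sky blue?"))
    Console.Write(stream?.Response);
```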
- ListModelsResponse
List models that are available locally.
- ListRunningModelsResponse
List models that are currently loaded into memory.
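A sketch of listing models, assuming ListLocalModelsAsync and ListRunningModelsAsync methods that return the Model and RunningModel items described in this namespace.

```csharp
using OllamaSharp;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

// Models available locally (ListModelsResponse). "Size" is assumed to be
// the model size in bytes.
foreach (var model in await ollama.ListLocalModelsAsync())
    Console.WriteLine($"{model.Name} ({model.Size} bytes)");

// Models currently loaded into memory (ListRunningModelsResponse).
foreach (var running in await ollama.ListRunningModelsAsync())
    Console.WriteLine(running.Name);
```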
- Model
Represents a model with its associated metadata.
- ModelInfo
Represents additional model information.
- OllamaOption
Collection of options available to Ollama.
- OllamaRequest
Represents the base class for requests to the Ollama API.
- ProjectorInfo
Represents projector-specific information.
- PullModelRequest
Download a model from the Ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.
- PullModelResponse
Represents the streamed response from the /api/pull endpoint.
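A sketch of pulling a model with progress output, assuming PullModelAsync streams PullModelResponse items that expose Status and a Percent value.

```csharp
using OllamaSharp;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

// Progress is streamed; an interrupted pull resumes where it left off.
await foreach (var status in ollama.PullModelAsync("llama3.2"))
    Console.WriteLine($"{status?.Percent:0.0} % {status?.Status}");
```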
- PushModelRequest
Upload a model to a model library. Requires registering for ollama.ai and adding a public key first.
- PushModelResponse
Represents the response from the /api/push endpoint.
- RequestOptions
The configuration information used for a chat completions request.
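A sketch of tuning inference through RequestOptions attached to a GenerateRequest; the Temperature and NumCtx property names are assumed to map to the Ollama options temperature and num_ctx.

```csharp
using OllamaSharp;
using OllamaSharp.Models;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

var request = new GenerateRequest
{
    Model = "llama3.2",
    Prompt = "Write a haiku about the sea.",
    // Assumed to serialize into the Ollama "options" object.
    Options = new RequestOptions
    {
        Temperature = 0.2f,
        NumCtx = 4096
    }
};

await foreach (var stream in ollama.GenerateAsync(request))
    Console.Write(stream?.Response);
```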
- RunningModel
Represents a running model.
- ShowModelRequest
Show information about a model, including details, modelfile, template, parameters, license, and system prompt.
- ShowModelResponse
Represents the response containing detailed model information.
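A sketch of inspecting a model, assuming a ShowModelAsync overload that accepts a model name and returns the ShowModelResponse described above; the property names read from the response are assumptions.

```csharp
using OllamaSharp;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

// Returns details, modelfile, template, parameters and license.
var info = await ollama.ShowModelAsync("llama3.2");

// "Details" is the Details class from this namespace; "Family" and
// "Template" are assumed property names.
Console.WriteLine(info.Details?.Family);
Console.WriteLine(info.Template);
```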