Ollama¶
- Since v0.29.0
Introduction¶
The Testcontainers module for Ollama.
Adding this module to your project dependencies¶
Please run the following command to add the Ollama module to your Go dependencies:
go get github.com/testcontainers/testcontainers-go/modules/ollama
Usage example¶
The module allows you to run the Ollama container or the local Ollama binary.
ctx := context.Background()
ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7")
defer func() {
if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
log.Printf("failed to terminate container: %s", err)
}
}()
if err != nil {
log.Printf("failed to start container: %s", err)
return
}
ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.3.13", tcollama.WithUseLocal("OLLAMA_DEBUG=true"))
defer func() {
if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
log.Printf("failed to terminate container: %s", err)
}
}()
if err != nil {
log.Printf("failed to start container: %s", err)
return
}
If the local Ollama binary fails to execute, the module will fall back to the container version of Ollama.
Module Reference¶
Run function¶
- Since v0.32.0
Info
The RunContainer(ctx, opts...) function is deprecated and will be removed in the next major release of Testcontainers for Go.
The Ollama module exposes one entrypoint function to create the Ollama container, and this function receives three parameters:
func Run(ctx context.Context, img string, opts ...testcontainers.ContainerCustomizer) (*OllamaContainer, error)
- context.Context, the Go context.
- string, the Docker image to use.
- testcontainers.ContainerCustomizer, a variadic argument for passing options.
Image¶
Use the second argument in the Run function to set a valid Docker image.
For example: Run(context.Background(), "ollama/ollama:0.5.7").
Container Options¶
When starting the Ollama container, you can pass options in a variadic way to configure it.
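As a minimal sketch, a generic option from the testcontainers package can be combined with the module's Run function; the OLLAMA_DEBUG environment variable below is only illustrative, not something the module requires:

ctx := context.Background()
ollamaContainer, err := tcollama.Run(
    ctx,
    "ollama/ollama:0.5.7",
    testcontainers.WithEnv(map[string]string{"OLLAMA_DEBUG": "1"}), // any generic option can be passed here
)
defer func() {
    if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start container: %s", err)
    return
}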
Use Local¶
- Since v0.35.0
Warning
Please make sure the local Ollama binary is not already running when using the local version of the module. Ollama can be started as a system service or as part of the Ollama application, and interacting with the logs of a running Ollama process not managed by the module is not supported.
If you need to run the local Ollama binary, you can use the WithUseLocal option in the Run function.
This option accepts a list of environment variables as strings, which are applied to the Ollama binary when executing commands.
E.g. Run(context.Background(), "ollama/ollama:0.5.7", WithUseLocal("OLLAMA_DEBUG=true")).
All the container methods are available when using the local Ollama binary, but will be executed locally instead of inside the container. Please consider the following differences when using the local Ollama binary:
- The local Ollama binary will create a log file in the current working directory, identified by the session ID. E.g. local-ollama-<session-id>.log. It's possible to set the log file name using the OLLAMA_LOGFILE environment variable. So if you're running Ollama yourself, from the Ollama app, or the standalone binary, you could use this environment variable to set the same log file name.
    - For the Ollama app, the default log file resides in $HOME/.ollama/logs/server.log.
    - For the standalone binary, you should start it redirecting the logs to a file. E.g. ollama serve > /tmp/ollama.log 2>&1.
- ConnectionString returns the connection string to connect to the local Ollama binary started by the module instead of the container.
- ContainerIP returns the bound host IP, 127.0.0.1 by default.
- ContainerIPs returns the bound host IPs, ["127.0.0.1"] by default.
- CopyToContainer, CopyDirToContainer, CopyFileToContainer and CopyFileFromContainer return an error if called.
- GetLogProductionErrorChannel returns a nil channel.
- Endpoint returns the endpoint to connect to the local Ollama binary started by the module instead of the container.
- Exec passes the command to the local Ollama binary started by the module instead of inside the container. The first argument is the command to execute and the second argument is the list of arguments; otherwise, an error is returned.
- GetContainerID returns the container ID of the local Ollama binary started by the module instead of the container, which maps to local-ollama-<session-id>.
- Host returns the bound host IP, 127.0.0.1 by default.
- Inspect returns a ContainerJSON with the state of the local Ollama binary started by the module.
- IsRunning returns true if the local Ollama binary process started by the module is running.
- Logs returns the logs from the local Ollama binary started by the module instead of the container.
- MappedPort returns the port mapping for the local Ollama binary started by the module instead of the container.
- Start starts the local Ollama binary process.
- State returns the current state of the local Ollama binary process, stopped or running.
- Stop stops the local Ollama binary process.
- Terminate calls the Stop method and then removes the log file.
The local Ollama binary will create a log file in the current working directory, and it will be available in the container's Logs method.
Info
The local Ollama binary will use the OLLAMA_HOST environment variable to set the host and port to listen on.
If the environment variable is not set, it will default to localhost:0, which binds to a loopback address on an ephemeral port to avoid port conflicts.
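Below is a minimal, illustrative sketch of local mode that pins the listening address by passing OLLAMA_HOST through WithUseLocal, assuming the variable is respected as described above (the port 21434 is an arbitrary example value):

ctx := context.Background()
// Run the local Ollama binary instead of the container, pinning the listen
// address rather than relying on the default localhost:0 ephemeral port.
ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7", tcollama.WithUseLocal("OLLAMA_HOST=localhost:21434"))
defer func() {
    if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
        log.Printf("failed to terminate: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start the local Ollama binary: %s", err)
    return
}
// ConnectionString points at the local process started by the module.
connectionStr, err := ollamaContainer.ConnectionString(ctx)
if err != nil {
    log.Printf("failed to get connection string: %s", err)
    return
}
fmt.Println(connectionStr) // e.g. http://localhost:21434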
The following options are exposed by the testcontainers package.
Basic Options¶
- WithExposedPorts - Since v0.37.0
- WithEnv - Since v0.29.0
- WithWaitStrategy - Since v0.20.0
- WithAdditionalWaitStrategy - Since v0.38.0
- WithWaitStrategyAndDeadline - Since v0.20.0
- WithAdditionalWaitStrategyAndDeadline - Since v0.38.0
- WithEntrypoint - Since v0.37.0
- WithEntrypointArgs - Since v0.37.0
- WithCmd - Since v0.37.0
- WithCmdArgs - Since v0.37.0
- WithLabels - Since v0.37.0
Lifecycle Options¶
- WithLifecycleHooks - Since v0.38.0
- WithAdditionalLifecycleHooks - Since v0.38.0
- WithStartupCommand - Since v0.25.0
- WithAfterReadyCommand - Since v0.28.0
Files & Mounts Options¶
- WithFiles - Since v0.37.0
- WithMounts - Since v0.37.0
- WithTmpfs - Since v0.37.0
- WithImageMount - Since v0.37.0
Build Options¶
- WithDockerfile - Since v0.37.0
Logging Options¶
- WithLogConsumers - Since v0.28.0
- WithLogConsumerConfig - Since v0.38.0
- WithLogger - Since v0.29.0
Image Options¶
- WithAlwaysPull - Since v0.38.0
- WithImageSubstitutors - Since v0.26.0
- WithImagePlatform - Since v0.38.0
Networking Options¶
- WithNetwork - Since v0.27.0
- WithNetworkByName - Since v0.38.0
- WithBridgeNetwork - Since v0.38.0
- WithNewNetwork - Since v0.27.0
Advanced Options¶
- WithHostPortAccess - Since v0.31.0
- WithConfigModifier - Since v0.20.0
- WithHostConfigModifier - Since v0.20.0
- WithEndpointSettingsModifier - Since v0.20.0
- CustomizeRequest - Since v0.20.0
- WithName - Since v0.38.0
- WithNoStart - Since v0.38.0
- WithProvider - Not available until the next release (main)
Experimental Options¶
- WithReuseByName - Since v0.37.0
Container Methods¶
The Ollama container exposes the following methods:
ConnectionString¶
- Since v0.29.0
This method returns the connection string to connect to the Ollama container, using the default port 11434.
connectionStr, err := ctr.ConnectionString(ctx)
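Building on connectionStr above, a minimal sketch of using the value as the base URL of the Ollama HTTP API; the /api/tags endpoint belongs to Ollama itself, not to this module:

// connectionStr already includes the scheme, e.g. http://127.0.0.1:32768,
// so it can be used directly as the base URL of the Ollama HTTP API.
resp, err := http.Get(connectionStr + "/api/tags") // lists the models available locally
if err != nil {
    log.Printf("failed to query the Ollama API: %s", err)
    return
}
defer resp.Body.Close()
fmt.Println(resp.Status) // expect "200 OK" while the container is running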
Commit¶
- Since v0.29.0
This method commits the container to a new image, using the target image name passed as an argument. It should be used after a model has been pulled and loaded into the container, in order to create a new image that contains the model, which can then be used as the base image for new containers. This speeds up the startup of subsequent containers.
// Defining the target image name based on the default image and a random string.
// Users can change the way this is generated, but it should be unique.
targetImage := fmt.Sprintf("%s-%s", ollama.DefaultOllamaImage, strings.ToLower(uuid.New().String()[:4]))
err := ctr.Commit(context.Background(), targetImage)
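A hedged, end-to-end sketch of that flow, assuming the module is imported as tcollama; the model name and the way the target tag is generated are only illustrative:

ctx := context.Background()
// 1. Start a plain Ollama container and pull a model into it
// (llama2 is just an example model name).
ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7")
defer func() {
    if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start container: %s", err)
    return
}
if _, _, err := ollamaContainer.Exec(ctx, []string{"ollama", "pull", "llama2"}); err != nil {
    log.Printf("failed to pull model: %s", err)
    return
}
// 2. Commit the container to a new, uniquely named image.
targetImage := fmt.Sprintf("%s-%s", tcollama.DefaultOllamaImage, strings.ToLower(uuid.New().String()[:4]))
if err := ollamaContainer.Commit(ctx, targetImage); err != nil {
    log.Printf("failed to commit container: %s", err)
    return
}
// 3. Later containers started from the committed image already contain the model.
preloaded, err := tcollama.Run(ctx, targetImage)
defer func() {
    if err := testcontainers.TerminateContainer(preloaded); err != nil {
        log.Printf("failed to terminate container: %s", err)
    }
}()
if err != nil {
    log.Printf("failed to start preloaded container: %s", err)
    return
}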
Examples¶
Loading Models¶
It's possible to initialise the Ollama container with a specific model passed as a parameter. The supported models are described in the Ollama project: https://github.com/ollama/ollama?tab=readme-ov-file and https://ollama.com/library.
Warning
When you use one of those models, the Ollama container will load it, which can make the container take longer to start.
The following examples use the llama2 model to connect to the Ollama container using HTTP and Langchain.
ctx := context.Background()
ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7")
defer func() {
if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
log.Printf("failed to terminate container: %s", err)
}
}()
if err != nil {
log.Printf("failed to start container: %s", err)
return
}
model := "llama2"
_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "pull", model})
if err != nil {
log.Printf("failed to pull model %s: %s", model, err)
return
}
_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "run", model})
if err != nil {
log.Printf("failed to run model %s: %s", model, err)
return
}
connectionStr, err := ollamaContainer.ConnectionString(ctx)
if err != nil {
log.Printf("failed to get connection string: %s", err)
return
}
httpClient := &http.Client{}
// generate a response
payload := `{
"model": "llama2",
"prompt":"Why is the sky blue?"
}`
req, err := http.NewRequest(http.MethodPost, connectionStr+"/api/generate", strings.NewReader(payload))
if err != nil {
log.Printf("failed to create request: %s", err)
return
}
resp, err := httpClient.Do(req)
if err != nil {
log.Printf("failed to get response: %s", err)
return
}
ctx := context.Background()
ollamaContainer, err := tcollama.Run(ctx, "ollama/ollama:0.5.7")
defer func() {
if err := testcontainers.TerminateContainer(ollamaContainer); err != nil {
log.Printf("failed to terminate container: %s", err)
}
}()
if err != nil {
log.Printf("failed to start container: %s", err)
return
}
model := "llama2"
_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "pull", model})
if err != nil {
log.Printf("failed to pull model %s: %s", model, err)
return
}
_, _, err = ollamaContainer.Exec(ctx, []string{"ollama", "run", model})
if err != nil {
log.Printf("failed to run model %s: %s", model, err)
return
}
connectionStr, err := ollamaContainer.ConnectionString(ctx)
if err != nil {
log.Printf("failed to get connection string: %s", err)
return
}
var llm *langchainollama.LLM
if llm, err = langchainollama.New(
langchainollama.WithModel(model),
langchainollama.WithServerURL(connectionStr),
); err != nil {
log.Printf("failed to create langchain ollama: %s", err)
return
}
completion, err := llm.Call(
context.Background(),
"how can Testcontainers help with testing?",
llms.WithSeed(42),         // fix the seed to make the completion reproducible
llms.WithTemperature(0.0), // the lower the temperature, the more deterministic the completion
)
if err != nil {
log.Printf("failed to create langchain ollama: %s", err)
return
}
words := []string{
"easy", "isolation", "consistency",
}
lwCompletion := strings.ToLower(completion)
for _, word := range words {
if strings.Contains(lwCompletion, word) {
fmt.Println(true)
}
}