LLM Models
Overview
In this example we will walk you through deploying local LLM models with Open WebUI as the frontend, using score-compose. We’ll cover two approaches to run the LLM models locally: using Ollama or using Docker Model Runner (DMR).
flowchart TD
subgraph Workloads
open-webui-workload(Open WebUI)
end
open-webui-workload-->gemma3[[gemma3 - LLM Model]]
open-webui-workload-->smollm2[[smollm2 - LLM Model]]
open-webui-workload-->Volume
Score file
Open your IDE and paste in the following score.yaml file, which describes an Open WebUI application that connects to two LLM models (gemma3 and smollm2) and stores its data in a persistent volume.
For the Ollama scenario:
apiVersion: score.dev/v1b1
metadata:
  name: open-webui
containers:
  open-webui:
    image: .
    variables:
      OLLAMA_BASE_URL: "${resources.gemma3.url}"
    volumes:
      /app/backend/data:
        source: ${resources.data}
resources:
  data:
    type: volume
  gemma3:
    type: llm-model
    params:
      model: gemma3:270m
  smollm2:
    type: llm-model
    params:
      model: smollm2:135m
service:
  ports:
    tcp:
      port: 8080
      targetPort: 8080
For the Docker Model Runner (DMR) scenario, the score.yaml file is almost identical. The key differences are the model names, which follow the DMR naming convention (and, in this example, OLLAMA_BASE_URL references the smollm2 resource’s url instead of gemma3’s):
apiVersion: score.dev/v1b1
metadata:
  name: open-webui
containers:
  open-webui:
    image: .
    variables:
      OLLAMA_BASE_URL: "${resources.smollm2.url}"
    volumes:
      /app/backend/data:
        source: ${resources.data}
resources:
  data:
    type: volume
  gemma3:
    type: llm-model
    params:
      model: ai/gemma3:270M-UD-IQ2_XXS
  smollm2:
    type: llm-model
    params:
      model: ai/smollm2:135M-Q2_K
service:
  ports:
    tcp:
      port: 8080
      targetPort: 8080
Both files use the llm-model resource type to request LLM models. The score-compose provisioners handle the underlying infrastructure differences (Ollama container vs Docker Model Runner) transparently.
Deployment with score-compose
From here, we will now see how to deploy this Score file with score-compose, using either Ollama or Docker Model Runner as the LLM backend:
To begin, follow the installation instructions to install the latest version of score-compose.
init
Initialize your current score-compose workspace with the Ollama provisioner and patch template. Run the following command in your terminal:
score-compose init --no-sample \
--patch-templates https://raw.githubusercontent.com/score-spec/community-patchers/refs/heads/main/score-compose/ollama.tpl \
--provisioners https://raw.githubusercontent.com/score-spec/community-provisioners/refs/heads/main/llm-model/score-compose/10-ollama-llm-model-service.provisioners.yaml
The init command will create the .score-compose directory with the default resource provisioners available, plus the Ollama-specific provisioner for the llm-model resource type.
The --patch-templates option adds the ollama.tpl patch template which configures the Ollama service integration in the generated Docker Compose file.
You can see the resource provisioners available by running this command:
score-compose provisioners list
The Score file example illustrated uses two resource types: llm-model and volume.
+---------------+-------+--------+--------------------+---------------------------------+
| TYPE | CLASS | PARAMS | OUTPUTS | DESCRIPTION |
+---------------+-------+--------+--------------------+---------------------------------+
| llm-model | (any) | model | url | Provisions an LLM model via |
| | | | | Ollama |
+---------------+-------+--------+--------------------+---------------------------------+
| volume | (any) | | source, type | Creates a persistent volume |
| | | | | that can be mounted on a |
| | | | | workload. |
+---------------+-------+--------+--------------------+---------------------------------+
generate
Convert the score.yaml file into a deployable compose.yaml. Run the following command in your terminal:
score-compose generate score.yaml \
--image ghcr.io/open-webui/open-webui:main-slim \
--publish 8080:open-webui:8080 \
--override-property containers.open-webui.variables.WEBUI_NAME="Hello, Ollama with Score Compose!" \
--output compose.yaml
The generate command adds the workload from the input score.yaml, together with the container image passed via --image, to the .score-compose/state.yaml state file and generates the output compose.yaml.
The --publish flag exposes the Open WebUI port to the host so you can access it in your browser.
See the generated compose.yaml by running this command:
cat compose.yaml
If you make any modifications to the score.yaml file, run score-compose generate score.yaml to regenerate the output compose.yaml.
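Because generate records the workload settings you passed (image, published ports, property overrides) in the state file, a quick edit-regenerate-redeploy loop can be a single line. A minimal sketch:

```shell
# Regenerate compose.yaml from the updated score.yaml (the earlier --image,
# --publish and --override-property values are read back from
# .score-compose/state.yaml) and apply the changes to the running stack:
score-compose generate score.yaml && docker compose up -d --wait
```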
docker compose
Run docker compose up to execute the generated compose.yaml file:
docker compose up -d --wait
This will pull the Ollama container image, start the Ollama service, pull the requested LLM models (gemma3:270m and smollm2:135m), and then start the Open WebUI container connected to the Ollama backend.
Verify
Check the Docker images that were pulled:
docker images
REPOSITORY      TAG         SIZE
ollama/ollama   latest      3.33GB
open-webui      main-slim   ...
Check that the LLM models were successfully pulled by Ollama:
curl -s localhost:11434/api/tags | jq -r '.models[].name'
gemma3:270m
smollm2:135m
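You can also exercise a model directly over Ollama’s HTTP API. A minimal sketch (the prompt and jq filter are illustrative):

```shell
# Send a one-shot prompt to the gemma3 model through the Ollama API
# exposed on localhost:11434 and print only the generated text:
curl -s localhost:11434/api/generate \
  -d '{"model": "gemma3:270m", "prompt": "Say hello in five words.", "stream": false}' \
  | jq -r .response
```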
See the running containers:
docker ps
You should see the open-webui and ollama containers running, along with some exited “puller” containers that were used to download the models.
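To see what the llm-model provisioner actually returned to the workload (the url wired into OLLAMA_BASE_URL), you can inspect the resource outputs. The resource UID below follows score-compose’s type.class#workload.name convention and is shown as an example:

```shell
# List all provisioned resources and their UIDs...
score-compose resources list
# ...then dump the outputs of one of them:
score-compose resources get-outputs 'llm-model.default#open-webui.gemma3'
```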
Access Open WebUI
Open your browser and navigate to http://localhost:8080 to access the Open WebUI frontend. You can now chat with the LLM models you deployed!
Congrats! You’ve successfully deployed local LLM models with Open WebUI and Ollama using score-compose, provisioning everything through Docker without writing the Docker Compose file yourself.
To begin, follow the installation instructions to install the latest version of score-compose.
Prerequisites
Docker Model Runner (DMR) needs to be set up in your local environment. Follow the DMR get started guide to set it up.
init
Initialize your current score-compose workspace with the Docker Model Runner provisioner. Run the following command in your terminal:
score-compose init --no-sample \
--provisioners https://raw.githubusercontent.com/score-spec/community-provisioners/refs/heads/main/llm-model/score-compose/10-dmr-llm-model.provisioners.yaml
The init command will create the .score-compose directory with the default resource provisioners available, plus the Docker Model Runner-specific provisioner for the llm-model resource type.
You can see the resource provisioners available by running this command:
score-compose provisioners list
The Score file example illustrated uses two resource types: llm-model and volume.
+---------------+-------+--------+--------------------+---------------------------------+
| TYPE | CLASS | PARAMS | OUTPUTS | DESCRIPTION |
+---------------+-------+--------+--------------------+---------------------------------+
| llm-model | (any) | model | url | Provisions an LLM model via |
| | | | | Docker Model Runner |
+---------------+-------+--------+--------------------+---------------------------------+
| volume | (any) | | source, type | Creates a persistent volume |
| | | | | that can be mounted on a |
| | | | | workload. |
+---------------+-------+--------+--------------------+---------------------------------+
generate
Convert the score.yaml file into a deployable compose.yaml. Run the following command in your terminal:
score-compose generate score.yaml \
--image ghcr.io/open-webui/open-webui:main-slim \
--publish 8080:open-webui:8080 \
--override-property containers.open-webui.variables.WEBUI_NAME="Hello, DMR with Score Compose!" \
--output compose.yaml
The generate command adds the workload from the input score.yaml, together with the container image passed via --image, to the .score-compose/state.yaml state file and generates the output compose.yaml.
The --publish flag exposes the Open WebUI port to the host so you can access it in your browser.
See the generated compose.yaml by running this command:
cat compose.yaml
If you make any modifications to the score.yaml file, run score-compose generate score.yaml to regenerate the output compose.yaml.
docker compose
Run docker compose up to execute the generated compose.yaml file:
docker compose up -d --wait
This will pull the requested LLM models (ai/gemma3:270M-UD-IQ2_XXS and ai/smollm2:135M-Q2_K) via Docker Model Runner, and then start the Open WebUI container connected to the DMR backend.
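If you have enabled DMR’s host-side TCP access (port 12434 by default), you can also query its OpenAI-compatible API directly. A sketch, assuming that default port and endpoint path (both depend on your DMR configuration):

```shell
# Ask the smollm2 model a question via Docker Model Runner's
# OpenAI-compatible chat completions endpoint and print the reply:
curl -s localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/smollm2:135M-Q2_K", "messages": [{"role": "user", "content": "Say hello."}]}' \
  | jq -r '.choices[0].message.content'
```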
Verify
Check the Docker images, including the LLM model images pulled by Docker Model Runner:
docker images
REPOSITORY   TAG               SIZE
open-webui   main-slim         ...
ai/gemma3    270M-UD-IQ2_XXS   182MB
ai/smollm2   135M-Q2_K         56.9MB
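You can also ask Docker Model Runner itself which models it has pulled (this assumes a Docker installation with the docker model CLI plugin available):

```shell
# List the models currently managed by Docker Model Runner:
docker model list
```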
See the running containers:
docker ps
You should see the open-webui container running and connected to the Docker Model Runner backend.
Access Open WebUI
Open your browser and navigate to http://localhost:8080 to access the Open WebUI frontend. You can now chat with the LLM models you deployed!
Congrats! You’ve successfully deployed local LLM models with Open WebUI and Docker Model Runner using score-compose, provisioning everything through Docker without writing the Docker Compose file yourself.
Next steps
- Deep dive with the associated blog post: Go through the associated step-by-step blog post to understand the different concepts in more detail.
- Explore more examples: Check out more examples to dive into further use cases and experiment with different configurations.
- Join the Score community: Connect with fellow Score developers on our CNCF Slack channel or find your way to contribute to Score.