Microcks
score-compose and score-k8s
Overview
In this example, we will walk you through deploying a containerized frontend application that uses Microcks to mock an external backend service dependency, with both score-compose and score-k8s.
flowchart TD
frontend-workload(Frontend) --> backend-mock[[backend-mock - Microcks]]
subgraph Workloads
frontend-workload
end
backend-mock --> microcks[(Microcks)]
Score file
Open your IDE and paste in the following score.yaml file, which describes a simple frontend application that references a backend service resource via its OpenAPI specification. The demo code can be found here.
apiVersion: score.dev/v1b1
metadata:
  name: frontend
containers:
  frontend:
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo Hello $BACKEND_SVC!; sleep 5; done"]
    variables:
      BACKEND_SVC: ${resources.backend.url}/orders
resources:
  backend:
    type: service
    params:
      port: 8181
      artifacts: resources/backend-openapi.yaml:true
      name: Order Service API
      version: 0.1.0
In the resources section, the backend resource of type service declares the external backend dependency. The developer only needs to know that a backend service exists and what its OpenAPI spec looks like — Microcks handles generating a realistic mock at deployment time, resolving ${resources.backend.url} automatically.
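For reference, here is a minimal sketch of what resources/backend-openapi.yaml could contain. This is illustrative, not the actual demo spec: only the title, version, and an /orders path are implied by the Score file and the mock URLs shown later.

```yaml
# Illustrative sketch of resources/backend-openapi.yaml (assumption — the
# real demo spec may define more operations and richer examples).
openapi: 3.0.3
info:
  title: Order Service API   # matches params.name in score.yaml
  version: 0.1.0             # matches params.version
paths:
  /orders:
    get:
      operationId: listOrders
      responses:
        '200':
          description: A list of orders
          content:
            application/json:
              example:
                - id: "order-1"
                  status: "pending"
```

Microcks derives its mock responses from the examples embedded in the spec, so the richer the examples, the more realistic the mock.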
Deployment with score-compose and score-k8s
From here, we will see how to deploy this exact same Score file with either score-compose or score-k8s:
To begin, follow the installation instructions to install the latest version of score-compose.
init
To initialize your score-compose workspace, run the following command in your terminal:
score-compose init --no-sample \
--provisioners https://raw.githubusercontent.com/score-spec/community-provisioners/refs/heads/main/service/score-compose/10-service-with-microcks.provisioners.yaml \
--patch-templates https://raw.githubusercontent.com/score-spec/community-patchers/refs/heads/main/score-compose/microcks.tpl
The init command will create the .score-compose directory with the default resource provisioners available. We are also importing one external provisioner to seamlessly generate a Microcks mock for the backend service resource: service-with-microcks provisioner. The microcks.tpl patch template is also injected to spin up the Microcks control plane container in the generated compose.yaml.
You can see the resource provisioners available by running this command:
score-compose provisioners list
The Score file example illustrated uses one resource type: service.
+---------+-------+-------------------------------------------+--------+-----------------------------------+
| TYPE | CLASS | PARAMS |OUTPUTS | DESCRIPTION |
+---------+-------+-------------------------------------------+--------+-----------------------------------+
| service | (any) | port, artifacts, name, version | name, | Generates a Microcks mock for |
| | | | url | an external service dependency |
| | | | | using the provided OpenAPI spec. |
+---------+-------+-------------------------------------------+--------+-----------------------------------+
generate
To convert the score.yaml file into a deployable compose.yaml, run the following command in your terminal:
score-compose generate score.yaml
The generate command will add the input score.yaml workload to the .score-compose/state.yaml state file and generate the output compose.yaml.
See the generated compose.yaml by running this command:
cat compose.yaml
If you make any modifications to the score.yaml file, run score-compose generate score.yaml to regenerate the output compose.yaml.
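The generated file roughly takes the following shape. This is an abridged, illustrative sketch, not the literal output — service names, ports, and the importer invocation are produced by the provisioner and patch template, so your compose.yaml will differ in the details:

```yaml
# Rough shape of the generated compose.yaml (assumption — abridged):
services:
  microcks:
    image: quay.io/microcks/microcks-uber:latest-native   # Microcks control plane
    ports:
      - "9090:8080"                                       # host port seen in docker ps
  backend-mock:
    image: quay.io/microcks/microcks-cli:latest           # one-shot spec importer
    depends_on:
      - microcks
  frontend-frontend:
    image: busybox
    command: ["/bin/sh"]
    environment:
      # ${resources.backend.url}/orders, already resolved by score-compose:
      BACKEND_SVC: http://microcks:8080/rest/Order+Service+API/0.1.0/orders
```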
resources
To get information about the workload's resource dependencies, run the following command:
score-compose resources list
+-----------------------------------+------------+
| UID | OUTPUTS |
+-----------------------------------+------------+
| service.default#frontend.backend | name, url |
+-----------------------------------+------------+
At this stage, we can already see the value of the service resource (the mocked backend URL) generated by Microcks:
score-compose resources get-outputs 'service.default#frontend.backend' --format '{{ .url }}'
http://microcks:8080/rest/Order+Service+API/0.1.0
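The URL above follows Microcks' REST mock convention: <base>/rest/<service name>/<version>, with spaces in the service name encoded as '+'. A quick shell sketch rebuilds it from the Score resource params to show where each segment comes from (the base URL is the microcks service name on the compose network):

```shell
# Microcks serves REST mocks at <base>/rest/<service name>/<version>;
# rebuild the URL from the params declared in score.yaml:
name="Order Service API"     # params.name
version="0.1.0"              # params.version
base="http://microcks:8080"  # the Microcks container on the compose network
mock_url="${base}/rest/$(printf '%s' "$name" | tr ' ' '+')/${version}"
echo "${mock_url}/orders"    # the value BACKEND_SVC resolves to
```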
docker compose
Run docker compose up to execute the generated compose.yaml file:
docker compose up -d --wait
[+] Running 3/3
✔ Container score-microcks-microcks-1 Started
✔ Container score-microcks-backend-mock-1 Started
✔ Container score-microcks-frontend-frontend-1 Started
Three containers are deployed:
- frontend — The actual frontend application.
- backend-mock — A microcks-cli sidecar that imports the OpenAPI spec into the Microcks control plane.
- microcks — The Microcks control plane, which generates and serves the backend mock.
docker ps
See the running containers:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f8a17b908320 busybox "/bin/sh -c 'while t…" 20 seconds ago Up 16 seconds score-microcks-frontend-frontend-1
3d6e626b3d6e quay.io/microcks/microcks-uber:latest-native "/cnb/process/web" 20 seconds ago Up 19 seconds 0.0.0.0:9090->8080/tcp score-microcks-microcks-1
docker logs
Verify that the frontend app has successfully resolved the Microcks-mocked backend URL:
docker logs score-microcks-frontend-frontend-1
Hello http://microcks:8080/rest/Order+Service+API/0.1.0/orders!
Hello http://microcks:8080/rest/Order+Service+API/0.1.0/orders!
The frontend successfully resolves ${resources.backend.url} to the Microcks-mocked endpoint — without the actual backend service running anywhere.
Congrats! You’ve successfully deployed, with the score-compose implementation, a containerized frontend workload whose external backend dependency is seamlessly mocked by Microcks. You provisioned everything through Docker, without writing the Docker Compose file yourself.
To begin, follow the installation instructions to install the latest version of score-k8s.
init
To initialize your score-k8s workspace, run the following command in your terminal:
score-k8s init --no-sample \
--provisioners https://raw.githubusercontent.com/score-spec/community-provisioners/refs/heads/main/service/score-k8s/10-service-with-microcks-cli.provisioners.yaml
The init command will create the .score-k8s directory with the default resource provisioners available. We are also importing the service-with-microcks-cli provisioner, which is responsible for importing the OpenAPI spec into the Microcks control plane already running in your Kubernetes cluster.
You can see the resource provisioners available by running this command:
score-k8s provisioners list
The Score file example illustrated uses one resource type: service.
+---------+-------+-------------------------------------------+--------+-----------------------------------+
| TYPE | CLASS | PARAMS |OUTPUTS | DESCRIPTION |
+---------+-------+-------------------------------------------+--------+-----------------------------------+
| service | (any) | port, artifacts, name, version | name, | Imports an OpenAPI spec into a |
| | | | url | running Microcks instance and |
| | | | | returns the mock endpoint URL. |
+---------+-------+-------------------------------------------+--------+-----------------------------------+
generate
You will need to have access to a Kubernetes cluster to execute the following commands. You can follow these instructions if you want to set up a Kind cluster. Your Kubernetes cluster should also have Microcks installed in it.
This is where the service provisioner will be invoked. Under the hood, it uses the microcks CLI to import the OpenAPI spec into Microcks (see the service-with-microcks-cli provisioner). You will need the microcks CLI installed locally on your machine (outside of the cluster).
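Under the hood, the import the provisioner performs looks roughly like the following. This is an illustrative sketch, not the provisioner's exact invocation — the Microcks URL and Keycloak credentials shown here are assumed defaults, and the :true suffix marks the spec as the primary artifact:

```shell
# Sketch (assumption) of the microcks-cli import the provisioner runs;
# adjust the URL and credentials to your Microcks installation.
microcks-cli import resources/backend-openapi.yaml:true \
  --microcksURL=http://microcks.microcks.svc.cluster.local:8080/api/ \
  --keycloakClientId=microcks-serviceaccount \
  --keycloakClientSecret='<client-secret>' \
  --insecure
```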
To convert the score.yaml file into a deployable manifests.yaml, run the following command in your terminal:
score-k8s generate score.yaml
The generate command will add the input score.yaml workload to the .score-k8s/state.yaml state file and generate the output manifests.yaml.
See the generated manifests.yaml by running this command:
cat manifests.yaml
If you make any modifications to the score.yaml file, run score-k8s generate score.yaml to regenerate the output manifests.yaml.
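The workload portion of the generated manifests roughly takes the following shape. This is an abridged, illustrative sketch — labels, selectors, probes, and the Service object are generated by score-k8s and omitted here:

```yaml
# Rough shape of the Deployment inside the generated manifests.yaml
# (assumption — abridged; selectors and labels omitted for brevity):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: frontend
          image: busybox
          command: ["/bin/sh"]
          # The shell expands $BACKEND_SVC from the env entry below:
          args: ["-c", "while true; do echo Hello $BACKEND_SVC!; sleep 5; done"]
          env:
            - name: BACKEND_SVC
              # ${resources.backend.url}/orders, resolved to the in-cluster mock:
              value: http://microcks.microcks.svc.cluster.local:8080/rest/Order+Service+API/0.1.0/orders
```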
resources
To get information about the workload's resource dependencies, run the following command:
score-k8s resources list
+-----------------------------------+------------+
| UID | OUTPUTS |
+-----------------------------------+------------+
| service.default#frontend.backend | name, url |
+-----------------------------------+------------+
At this stage, we can already see the value of the service resource (the Microcks-provided mock URL in the cluster):
score-k8s resources get-outputs 'service.default#frontend.backend' --format '{{ .url }}'
http://microcks.microcks.svc.cluster.local:8080/rest/Order+Service+API/0.1.0
kubectl apply
Run kubectl apply to deploy the generated manifests.yaml file:
kubectl apply -f manifests.yaml
deployment.apps/frontend created
service/frontend created
kubectl get all
See the running pods:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/frontend-7d9f8b6c4d-xk2pv 1/1 Running 0 30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/frontend ClusterIP 10.96.142.101 <none> 80/TCP 30s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/frontend 1/1 1 1 30s
kubectl logs
Verify that the frontend app has successfully resolved the Microcks-mocked backend URL inside the cluster:
kubectl logs deploy/frontend
Hello http://microcks.microcks.svc.cluster.local:8080/rest/Order+Service+API/0.1.0/orders!
Hello http://microcks.microcks.svc.cluster.local:8080/rest/Order+Service+API/0.1.0/orders!
The frontend successfully resolves ${resources.backend.url} to the mock served by the Microcks control plane running in the cluster, using the same score.yaml file that was used locally with score-compose — no changes required.
Congrats! You’ve successfully deployed, with the score-k8s implementation, a containerized frontend workload whose external backend dependency is seamlessly mocked by Microcks running in Kubernetes. You deployed everything through kubectl, without writing the Kubernetes manifests yourself.
Next steps
- Deep dive with the associated blog post: Go through the step-by-step guide to understand the concepts of bridging inner and outer development loops with Containers, Microcks, and Score.
- Watch the Score + Microcks session at KubeCon EU 2026: Unifying Inner & Outer Loops To Bridge the Gaps Between Devs & Ops With Microcks + Score — Laurent Broudoux (Microcks) & Mathieu Benoit (Docker), showing a more advanced use case.
- Explore more examples: Check out more examples to dive into further use cases and experiment with different configurations.
- Join the Score community: Connect with fellow Score developers on our CNCF Slack channel or find your way to contribute to Score.