1. Introduction
Docker Model Runner, introduced in Docker Desktop 4.40 for Mac with Apple Silicon (at the time of writing this article), streamlines local AI development by simplifying the deployment and management of large language models (LLMs). It tackles common challenges such as complex setup, high cloud inference costs, and data privacy concerns.
By providing an OpenAI-compatible Inference API, Model Runner enables seamless integration with frameworks like Spring AI, allowing developers to build AI-powered applications locally with ease. In this tutorial, we’ll learn how to set up Docker Model Runner and create a Spring AI application that connects to it. By the end, we’ll have a fully functional local AI application leveraging a powerful LLM.
2. Docker Model Runner
Docker Model Runner is a tool designed to simplify the deployment and execution of LLMs inside Docker containers. It’s an AI Inference Engine offering a wide range of models from various providers.
Let’s see the key features that Docker Model Runner includes:
- Simplified Model Deployment: Models are distributed as standard Open Container Initiative (OCI) artifacts on Docker Hub under the ai namespace, making them easy to pull, run, and manage directly within Docker Desktop.
- Broad Model Support: Supports a variety of LLMs from multiple providers, such as Mistral, LLaMA, and Phi-4, ensuring flexibility in model selection.
- Local Inference: Runs models locally, enhancing data privacy and eliminating dependency on cloud-based inference.
- OpenAI-Compatible API: Provides a standardized API that integrates effortlessly with existing AI frameworks, reducing development overhead.
3. Set Up Environment
This section outlines the prerequisites for using Docker Model Runner and the Maven dependencies needed to create a Spring AI application that uses it.
3.1. Prerequisites
To use Docker Model Runner, we'll need a few things:
- Docker Desktop 4.40 or later
- A Mac with Apple Silicon, the only platform supported at the time of writing
3.2. Maven Dependencies
Let's start by adding the spring-boot-starter-web, spring-ai-openai-spring-boot-starter, spring-ai-spring-boot-testcontainers, and junit-jupiter dependencies to the pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <!-- version managed by the Spring Boot parent -->
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    <version>1.0.0-M6</version>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-spring-boot-testcontainers</artifactId>
    <version>1.0.0-M6</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <!-- DockerModelRunnerContainer requires Testcontainers 1.21.0 or later -->
    <version>1.21.0</version>
    <scope>test</scope>
</dependency>
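Since the Spring AI artifacts are still milestone releases, it can be convenient to keep their versions aligned in one place. Here's a minimal sketch using the Spring AI BOM, assuming the same 1.0.0-M6 release as the starters above:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0-M6</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
With the BOM imported, the individual spring-ai-* dependencies no longer need explicit <version> tags.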
4. Enabling and Configuring Docker Model Runner
This section outlines two distinct ways to enable Docker Model Runner, and then shows how to pull a specific model.
4.1. Enable Model Runner With a Specific TCP Port
First, let’s enable Model Runner and expose it on a specific TCP port (e.g., 12434):
docker desktop enable model-runner --tcp 12434
This configures Model Runner to listen on http://localhost:12434, with its OpenAI-compatible endpoints exposed under the /engines path. In our Spring AI application, we need to configure the api-key, model, and base URL to point to the Model Runner endpoint. Model Runner doesn't validate the key, but Spring AI's OpenAI starter expects the property to be set:
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.base-url=http://localhost:12434/engines
spring.ai.openai.chat.options.model=ai/gemma3
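Once the model from section 4.3 is pulled, we can sanity-check the endpoint before starting the application. This is a quick sketch assuming Model Runner exposes the standard OpenAI-style chat completions path under /engines (the exact path may differ across Docker Desktop versions):
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/gemma3",
        "messages": [{"role": "user", "content": "Say hello from Docker Model Runner"}]
      }'
If the call returns a chat completion JSON payload, Spring AI will be able to reach the same endpoint through the base-url above.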
4.2. Enable Model Runner With Testcontainers
We can run the following command to enable Model Runner without specifying a port:
docker desktop enable model-runner
This enables Model Runner without publishing a host TCP port; it remains reachable from containers through the internal model-runner.docker.internal endpoint. We then use Testcontainers to bridge that gap and set the base-url, api-key, and model as follows:
@TestConfiguration(proxyBeanMethods = false)
class TestcontainersConfiguration {

    @Bean
    DockerModelRunnerContainer socat() {
        return new DockerModelRunnerContainer("alpine/socat:1.8.0.1");
    }

    @Bean
    DynamicPropertyRegistrar properties(DockerModelRunnerContainer dmr) {
        return (registrar) -> {
            registrar.add("spring.ai.openai.base-url", dmr::getOpenAIEndpoint);
            registrar.add("spring.ai.openai.api-key", () -> "test-api-key");
            registrar.add("spring.ai.openai.chat.options.model", () -> "ai/gemma3");
        };
    }
}
The TestcontainersConfiguration class is a Spring Boot @TestConfiguration for integration testing with Testcontainers. It defines two beans: a DockerModelRunnerContainer that starts the alpine/socat:1.8.0.1 image, and a DynamicPropertyRegistrar that registers Spring AI properties at runtime. The socat container simply forwards traffic to the internal model-runner.docker.internal service, so getOpenAIEndpoint() returns a base URL the test can reach from the host. The registrar points the OpenAI client at that URL, sets a placeholder API key (test-api-key, since Model Runner doesn't validate it), and selects the ai/gemma3 model. Finally, proxyBeanMethods = false keeps bean creation lightweight, as no inter-bean method calls need proxying.
4.3. Pulling and Verifying the Gemma 3 Model
Now, after enabling Model Runner using one of the options, we pull the Gemma 3 model:
docker model pull ai/gemma3
Then, we can confirm it’s available locally:
docker model list
This command lists all locally available models, including ai/gemma3.
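Optionally, we can run a quick smoke test straight from the CLI before wiring anything into Spring. Here's a minimal sketch, assuming the docker model run subcommand available in recent Docker Desktop releases:
docker model run ai/gemma3 "Summarize what Docker Model Runner does in one sentence."
If the model responds, it's ready to serve requests through the OpenAI-compatible API.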
5. Integration With Spring AI
Now, let’s create a simple controller to interact with the model:
@RestController
class ModelRunnerController {

    private final ChatClient chatClient;

    public ModelRunnerController(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam("message") String message) {
        return this.chatClient.prompt()
          .user(message)
          .call()
          .content();
    }
}
5.1. Testing Model Runner With a Specific TCP Port
With the properties from section 4.1 pointing the OpenAI client at Model Runner and the ai/gemma3 model pulled, we can start the application and test the /chat endpoint:
curl "http://localhost:8080/chat?prompt=What%20is%20the%20future%20of%20AI%20development?"
The response will be generated by the Gemma 3 model running in Model Runner.
5.2. Testing Model Runner With Testcontainers
Let’s create the ModelRunnerApplicationTest class. It will import the TestcontainersConfiguration class and call the sample controller:
@Import(TestcontainersConfiguration.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ModelRunnerApplicationTest {

    // ...

    @Test
    void givenMessage_whenCallChatController_thenSuccess() {
        // given
        String userMessage = "Hello, how are you?";

        // when
        ResponseEntity<String> response = restTemplate.getForEntity(
          baseUrl + "/chat?message=" + userMessage, String.class);

        // then
        assertThat(response.getStatusCode().is2xxSuccessful()).isTrue();
        assertThat(response.getBody()).isNotEmpty();
    }
}
The @Import(TestcontainersConfiguration.class) annotation brings in the TestcontainersConfiguration class, which defines the DockerModelRunnerContainer (running alpine/socat:1.8.0.1) and dynamically registers the Spring AI properties (spring.ai.openai.base-url, spring.ai.openai.api-key, and spring.ai.openai.chat.options.model). As a result, the test talks to the local Model Runner instance through the Testcontainers-managed socat proxy, with no manual configuration.
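For completeness, the fields elided in the test above (restTemplate and baseUrl) could be wired roughly as follows. This is only a sketch, assuming the random-port web environment from the annotation above and Spring Boot's TestRestTemplate:
@Autowired
private TestRestTemplate restTemplate;

@LocalServerPort
private int port;

private String baseUrl;

@BeforeEach
void setUp() {
    // build the base URL against the randomly assigned local port
    baseUrl = "http://localhost:" + port;
}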
6. Conclusion
Docker Model Runner provides a developer-friendly, privacy-focused, and cost-effective solution for running LLMs locally, particularly for those building GenAI applications within the Docker ecosystem. In this article, we explored Docker Model Runner's capabilities and demonstrated its integration with Spring AI. As always, the source code is available over on GitHub.