Using Ollama with Spring AI

This article will teach you how to create a Spring Boot application that implements several AI scenarios using Spring AI and the Ollama tool. Ollama is an open-source tool for running open LLMs on a local machine. It acts as a bridge between LLMs and a workstation, providing an API layer on top of them for other applications or services. With Ollama, we can run almost any model we want simply by pulling it from a huge library.
This is the fifth part of my series of articles about Spring Boot and AI. I mentioned Ollama in the first part of the series to show how to switch between different AI models with Spring AI. However, it was only a brief introduction. Today, we will try to run all the AI use cases described in the previous tutorials with the Ollama tool. Those tutorials integrated mostly with OpenAI. In this article, we will test them against different AI models.
- https://piotrminkowski.com/2025/01/28/getting-started-with-spring-ai-and-chat-model: The first tutorial introduces the Spring AI project and its support for building applications based on chat models like OpenAI or Mistral AI.
- https://piotrminkowski.com/2025/01/30/getting-started-with-spring-ai-function-calling: The second tutorial shows Spring AI support for Java function calling with the OpenAI chat model.
- https://piotrminkowski.com/2025/02/24/using-rag-and-vector-store-with-spring-ai: The third tutorial shows Spring AI support for RAG (Retrieval Augmented Generation) and vector store.
- https://piotrminkowski.com/2025/03/04/spring-ai-with-multimodality-and-images: The fourth tutorial shows Spring AI support for the multimodality feature and image generation.
Fortunately, our application can easily switch between different AI tools or models. To achieve this, we must activate the right Maven profile.
Source Code
Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Then, simply follow my instructions.
Prepare a Local Environment for Ollama
A few options exist for accessing Ollama on the local machine with Spring AI. I downloaded Ollama from its official website and installed it on my laptop. Alternatively, we can run it with e.g. Docker Compose or Testcontainers.
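For instance, here is a minimal sketch of the Testcontainers approach, assuming the org.testcontainers:ollama module is on the classpath (the image tag is just an example):
import org.testcontainers.ollama.OllamaContainer;

public class OllamaLocal {

    public static void main(String[] args) {
        // Start an Ollama server in a Docker container instead of a native installation
        try (OllamaContainer ollama = new OllamaContainer("ollama/ollama:latest")) {
            ollama.start();
            // Point Spring AI at the container, e.g. spring.ai.ollama.base-url=<endpoint>
            System.out.println("Ollama endpoint: " + ollama.getEndpoint());
        }
    }
}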

Once we install Ollama on our workstation, we can run an AI model from its library with the ollama run command. The full list of available models can be found here. To begin with, we will choose the Llava model. It is one of the most popular models, supporting both a vision encoder and language understanding.
ollama run llava
Ollama must pull the model manifest and image. Here’s the ollama run command output. Once we see that, we can interact with the model.
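To verify that the local server is up, we can also call the Ollama REST API directly. The /api/tags endpoint returns the list of locally pulled models; here is a quick check using the standard Java HTTP client:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Ollama listens on localhost:11434 by default
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/tags"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON with the locally pulled models
    }
}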

The sample application source code already defines the ollama-ai Maven profile with the spring-ai-ollama-spring-boot-starter Spring Boot starter.
<profile>
  <id>ollama-ai</id>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    </dependency>
  </dependencies>
</profile>
The profile is disabled by default. We can enable it during development as shown below (for IntelliJ IDEA). However, the application doesn’t use any vendor-specific components, only generic Spring AI classes and interfaces.

We must activate the ollama-ai profile when running the application. Assuming we are in the project root directory, we need to run the following Maven command:
mvn spring-boot:run -Pollama-ai
Portability across AI Models
We should avoid using model-specific library components to keep our application portable between different models. For example, when registering functions in the chat client, we should use the generic FunctionCallingOptions instead of model-specific classes like OpenAiChatOptions or OllamaOptions.
@GetMapping
String calculateWalletValue() {
    PromptTemplate pt = new PromptTemplate("""
            What’s the current value in dollars of my wallet based on the latest stock daily prices?
            """);
    return this.chatClient.prompt(pt.create(
                    FunctionCallingOptions.builder()
                            .function("numberOfShares")
                            .function("latestStockPrices")
                            .build()))
            .call()
            .content();
}
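The numberOfShares and latestStockPrices functions referenced above are registered as regular beans in the sample application. A hypothetical registration could look like this (the request/response types below are my assumption, not the exact ones from the repository):
@Configuration
class FunctionConfig {

    record StockRequest(String symbol) {}
    record StockResponse(double price) {}

    @Bean
    @Description("Returns the latest daily stock price for the given company symbol")
    Function<StockRequest, StockResponse> latestStockPrices() {
        // The real implementation calls the stock market API; stubbed here for brevity
        return request -> new StockResponse(100.0);
    }
}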
Not all models support all the AI capabilities used in our sample application. For tools like Ollama or providers like Mistral AI, Spring AI doesn’t provide an image generation implementation, since they don’t support it right now. Therefore, we should inject the ImageModel bean optionally, in case it is not provided by the model-specific library.
@RestController
@RequestMapping("/images")
public class ImageController {

    private final static Logger LOG = LoggerFactory.getLogger(ImageController.class);
    private final ObjectMapper mapper = new ObjectMapper();
    private final ChatClient chatClient;
    private ImageModel imageModel;

    public ImageController(ChatClient.Builder chatClientBuilder,
                           Optional<ImageModel> imageModel,
                           VectorStore store) {
        this.chatClient = chatClientBuilder
                .defaultAdvisors(new SimpleLoggerAdvisor())
                .build();
        imageModel.ifPresent(model -> this.imageModel = model);
        // other initializations
    }
}
Then, if a method requires the ImageModel bean, we can throw an exception informing that image generation is not supported by the AI model (1). On the other hand, Spring AI does not provide a dedicated interface for multimodality, which enables AI models to process information from multiple sources. We can use the UserMessage class and the Media class to combine e.g. text with image(s) in the user prompt. The GET /images/describe/{image} endpoint lists items detected in the source image from the classpath (2).
@GetMapping(value = "/generate/{object}", produces = MediaType.IMAGE_PNG_VALUE)
byte[] generate(@PathVariable String object) throws IOException, NotSupportedException {
    if (imageModel == null)
        throw new NotSupportedException("Image model is not supported by the AI model"); // (1)
    ImageResponse ir = imageModel.call(new ImagePrompt("Generate an image with " + object,
            ImageOptionsBuilder.builder()
                    .height(1024)
                    .width(1024)
                    .N(1)
                    .responseFormat("url")
                    .build()));
    String url = ir.getResult().getOutput().getUrl();
    UrlResource resource = new UrlResource(url);
    LOG.info("Generated URL: {}", url);
    dynamicImages.add(Media.builder()
            .id(UUID.randomUUID().toString())
            .mimeType(MimeTypeUtils.IMAGE_PNG)
            .data(url)
            .build());
    return resource.getContentAsByteArray();
}
@GetMapping("/describe/{image}") // (2)
List<Item> describeImage(@PathVariable String image) {
Media media = Media.builder()
.id(image)
.mimeType(MimeTypeUtils.IMAGE_PNG)
.data(new ClassPathResource("images/" + image + ".png"))
.build();
UserMessage um = new UserMessage("""
List all items you see on the image and define their category.
Return items inside the JSON array in RFC8259 compliant JSON format.
""", media);
return this.chatClient.prompt(new Prompt(um))
.call()
.entity(new ParameterizedTypeReference<>() {});
}
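The Item type returned by this endpoint is not shown here. It may be as simple as the following record (the field names are my guess based on the prompt, not necessarily the exact ones from the repository):
record Item(String name, String category) {}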
Let’s try to avoid declarations like the one below, shown in the Spring AI documentation. Although they are perfectly correct, they will cause problems when switching between Spring Boot starters for different AI vendors.
ChatResponse response = chatModel.call(
        new Prompt(
                "Generate the names of 5 famous pirates.",
                OllamaOptions.builder()
                        .model(OllamaModel.LLAMA3_1)
                        .temperature(0.4)
                        .build()
        ));
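If we still need per-request options, the generic ChatOptions builder offers a portable equivalent (a sketch; the builder method names may differ slightly between Spring AI versions):
ChatResponse response = chatModel.call(
        new Prompt(
                "Generate the names of 5 famous pirates.",
                ChatOptions.builder()
                        .model("llama3.1")
                        .temperature(0.4)
                        .build()
        ));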
Alternatively, we can set a global property in the application.properties file, which defines the default model used in the scenario with Ollama.
spring.ai.ollama.chat.options.model = llava
Testing Multiple Models with Spring AI and Ollama
By default, Ollama doesn’t require any API token to establish communication with AI models. The Ollama Spring Boot starter provides auto-configuration that connects the chat client to the Ollama API server running at the localhost:11434 address. So, before running our sample application, we must export the tokens used to authorize against the stock market API and the vector store.
export STOCK_API_KEY=<YOUR_STOCK_API_KEY>
export PINECONE_TOKEN=<YOUR_PINECONE_TOKEN>
Llava on Ollama
Let’s begin with the Llava model. We can call the first endpoint that asks the model to generate a list of persons (GET /persons) and then search for the person with a particular id in the list stored in the chat memory (GET /persons/{id}).
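The full controller is available in the sample repository; as a rough illustration, it may look similar to the following sketch (the class and prompts here are my simplification, and the chat memory advisor API varies between Spring AI versions):
@RestController
@RequestMapping("/persons")
class PersonController {

    record Person(Long id, String firstName, String lastName, int age) {}

    private final ChatClient chatClient;

    PersonController(ChatClient.Builder builder) {
        // Chat memory keeps the generated list available for follow-up questions
        this.chatClient = builder
                .defaultAdvisors(new MessageChatMemoryAdvisor(new InMemoryChatMemory()))
                .build();
    }

    @GetMapping
    List<Person> findAll() {
        return this.chatClient.prompt()
                .user("Generate a list of 10 persons with id, firstName, lastName and age.")
                .call()
                .entity(new ParameterizedTypeReference<>() {});
    }

    @GetMapping("/{id}")
    Person findById(@PathVariable Long id) {
        return this.chatClient.prompt()
                .user("Find the person with id " + id + " in the previously generated list.")
                .call()
                .entity(Person.class);
    }
}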

Then we can call the endpoint that displays all the items visible in the particular image from the classpath (GET /images/describe/{image}).

By the way, here is the analyzed image stored in the src/main/resources/images/fruits-3.png file.

The endpoint for describing all the input images from the classpath doesn’t work well. I tried to tweak it by adding the RFC8259 JSON format sentence or changing the query. However, the AI model always returned a description of a single image instead of the whole Media list. The OpenAI model could print descriptions for all images in the String[] format.
@GetMapping("/describe")
String[] describe() {
UserMessage um = new UserMessage("""
Explain what do you see on each image from the input list.
Return data in RFC8259 compliant JSON format.
""", List.copyOf(Stream.concat(images.stream(), dynamicImages.stream()).toList()));
return this.chatClient.prompt(new Prompt(um))
.call()
.entity(String[].class);
}
Here’s the response. Of course, we could fine-tune the model to achieve better results or try to prepare a better prompt.

After calling the GET /wallet endpoint exposed by the WalletController, I received the [400] Bad Request - {"error":"registry.ollama.ai/library/llava:latest does not support tools"} response. It seems Llava doesn’t support the function/tool calling feature. We will also always receive the NotSupportedException for the GET /images/generate/{object} endpoint, since the Spring AI Ollama library doesn’t provide an ImageModel bean. You can perform other tests, e.g. for the RAG and vector store features implemented in the StockController.
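For reference, a RAG-style call in Spring AI can be as simple as attaching the QuestionAnswerAdvisor to the request (a sketch, assuming a configured VectorStore bean such as Pinecone; the exact prompt is my example):
// Retrieve relevant documents from the vector store and include them in the prompt context
String answer = this.chatClient.prompt()
        .advisors(new QuestionAnswerAdvisor(store))
        .user("Which stock in my wallet has the highest value?")
        .call()
        .content();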
Granite on Ollama
Let’s switch to another interesting model – Granite. In particular, we will test the granite3.2-vision model, dedicated to automated content extraction from tables, charts, infographics, plots, and diagrams. First, we set the current model name in the Ollama Spring AI configuration properties.
spring.ai.ollama.chat.options.model = granite3.2-vision
Let’s stop the Llava model and then run granite3.2-vision on Ollama:
ollama run granite3.2-vision
After the application restarts, we can perform some test calls. The endpoint for describing a single image returns a more detailed response than the Llava model did. The response for the query with multiple images still looks the same as before.

The Granite Vision model supports a “function calling” feature, but it couldn’t call functions properly using my prompt. Please refer to my article for more details about the Spring AI function calling with OpenAI.

Deepseek on Ollama
The last model we will run within this exercise is Deepseek. DeepSeek-R1 achieves performance comparable to OpenAI-o1 on reasoning tasks. First, we must set the current model name in the Ollama Spring AI configuration properties.
spring.ai.ollama.chat.options.model = deepseek-r1
Then, let’s stop the Granite model and run deepseek-r1 on Ollama:
ollama run deepseek-r1
We need to restart the app:
mvn spring-boot:run -Pollama-ai
As usual, we can call the first endpoint that asks the model to generate a list of persons (GET /persons) and then search for the person with a particular id in the list stored in the chat memory (GET /persons/{id}). The response was pretty large, but not in the required JSON format. Here’s a fragment of the response:

The deepseek-r1 model doesn’t support the tool/function calling feature. Also, it didn’t analyze my input image properly, and it didn’t return a JSON response according to the Spring AI structured output feature, most likely because the model emits its chain of thought before the final answer, which breaks JSON parsing.
Final Thoughts
This article shows how to easily switch between multiple AI models with Spring AI and Ollama. We tested several AI use cases implemented in the sample Spring Boot application across models such as Llava, Granite, and Deepseek. The app provides several endpoints showcasing features such as multimodality, chat memory, RAG, vector store, and function calling. It does not aim to compare the AI models, but to give a simple recipe for integrating with different AI models and playing with them using Spring AI.