Exploring Spring AI — Ollama & Stability AI

Varanasi Rama Krishna Parjanya
Dec 25, 2024


Background

The Spring community recently released the Spring AI module, which lets applications integrate natively with AI models with minimal effort.

Spring provides support for all the major AI providers — https://docs.spring.io/spring-ai/reference/api/index.html

It also gives developers room to expose their own models, use different vector stores, and leverage dynamic client-side function calling.

Use Cases

I cover two major use cases below:

  1. Chat Completion → Ollama model
  2. Text to Image → Stability AI — stable diffusion model

1. Chat Completion

This feature leverages pre-trained language models, such as GPT (Generative Pre-trained Transformer), to generate human-like responses to user inputs in natural language.

I pulled the open-source llama3.2 model via Ollama; it is lightweight in nature.

Better choices keep appearing; you can browse them here — https://ollama.com/library

Prerequisite: make sure Ollama is running on your local machine (default port: 11434).
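Optionally, you can sanity-check this from plain Java (a small sketch; Ollama's root endpoint simply reports that the server is up):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaHealthCheck {
    public static void main(String[] args) throws Exception {
        // Hit Ollama's root endpoint on the default port
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:11434/")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // prints "Ollama is running" when the server is up
    }
}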

Spring Boot's auto-configuration dependency for Ollama

<!-- Chat Model - Ollama AI -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
</dependency>

Add the model configuration to the application properties

# Ollama properties
# spring.ai.ollama.base-url=http://localhost:11434 (uncomment in case you wish to customize; this is the default)
spring.ai.ollama.chat.enabled=true
spring.ai.ollama.chat.options.model=llama3.2:1b

Controller Endpoints

@GetMapping("/ask-ai")
public String getOllamaResponse(@RequestParam String prompt){
return ollamaChatService.getChatServiceResponse(prompt);
}

@GetMapping("/ask-ai-options")
public String getOllamaResponseOptions(@RequestParam String prompt){
return ollamaChatService.getChatServiceResponseOptions(prompt);
}
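Once the application is up, either endpoint can be exercised with a plain GET request, e.g. /ask-ai?prompt=Tell me a joke.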

Service Class

@Service
public class OllamaChatService {

    private final OllamaChatModel ollamaChatModel;

    public OllamaChatService(OllamaChatModel ollamaChatModel) {
        this.ollamaChatModel = ollamaChatModel;
    }

    // Simple call: send the raw prompt and return the completion text
    public String getChatServiceResponse(String prompt) {
        return ollamaChatModel.call(prompt);
    }

    // Call with per-request options: override the model and temperature
    public String getChatServiceResponseOptions(String prompt) {
        ChatResponse response = ollamaChatModel.call(
                new Prompt(
                        prompt,
                        OllamaOptions.builder()
                                .withModel(OllamaModel.LLAMA3_2_1B)
                                .withTemperature(0.4)
                                .build()
                ));
        return response.getResult().getOutput().getContent();
    }

}
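OllamaChatModel also supports streaming. A minimal sketch of a method you could add to the service (the method name is mine; it assumes the streaming contract in your Spring AI version exposes stream(String) returning a Reactor Flux):

import reactor.core.publisher.Flux;

// Streams the completion token-by-token instead of blocking for the full response
public Flux<String> streamChatServiceResponse(String prompt) {
    return ollamaChatModel.stream(prompt);
}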

API Response

2. Text To Image

Spring's auto-configuration lets our application access the Stability AI API, which generates images from text.

By default, it uses Stability's stable-diffusion-v1-6 model for image generation.

Spring Boot's auto-configuration dependency for Stability AI

<!-- Image Model - Stability AI -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-stability-ai-spring-boot-starter</artifactId>
</dependency>

Add the model configuration to the application properties

# Stability properties
# spring.ai.stabilityai.base-url=https://api.stability.ai/v2beta/stable-image/generate/core (uncomment to customize)
spring.ai.stabilityai.api-key=<REPLACE_API_KEY>

spring.ai.stabilityai.image.enabled=true
# spring.ai.stabilityai.image.base-url=https://api.stability.ai/v2beta/stable-image/generate/core
spring.ai.stabilityai.image.api-key=<REPLACE_API_KEY>

spring.ai.stabilityai.image.options.response-format=image/png
# spring.ai.stabilityai.image.options.model=

Stability API Response

Controller Endpoints

@GetMapping("/generate-image-with-stability")
public List<String> generateStabilityImage(HttpServletResponse response,
@RequestParam String prompt,
@RequestParam(defaultValue = "hd") String quality,
@RequestParam(defaultValue = "1") int n,
@RequestParam(defaultValue = "1024") int width,
@RequestParam(defaultValue = "1024") int height) throws IOException {

ImageResponse imageResponse = stabilityImageService.generateImage(prompt, quality, n, width, height);

// String B64Json_Image = imageResponse.getResult().getOutput().getB64Json();
// byte[] imageBytes = ImageUtils.Base64StringToByteArray(B64Json_Image);
// System.out.println("Image bytes --> "+ imageBytes.toString());

// HttpHeaders headers = new HttpHeaders();
// headers.add("Content-Type", "image/*");
// headers.add("Content-Disposition", "inline; filename=image.png");


// Streams to get base 64 jsons from ImageResponse
List<String> image_jsons = imageResponse.getResults().stream()
.map(result -> result.getOutput().getB64Json())
.toList();

return image_jsons;
}
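If you need raw image bytes rather than Base64 strings, the JDK's Base64 decoder is enough. A minimal sketch (the method and file name are my own):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// Decode one b64_json payload returned above and write it to disk as a PNG
void saveAsPng(String b64Json) throws IOException {
    Files.write(Path.of("generated.png"), Base64.getDecoder().decode(b64Json));
}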

Service Class

@Service
public class StabilityImageService {

    private final StabilityAiImageModel stabilityAiImageModel;

    public StabilityImageService(StabilityAiImageModel stabilityAiImageModel) {
        this.stabilityAiImageModel = stabilityAiImageModel;
    }

    public ImageResponse generateImage(String prompt,
                                       String quality,
                                       int n,
                                       int width,
                                       int height) {
        // Build an ImagePrompt with per-request Stability options
        return stabilityAiImageModel.call(
                new ImagePrompt(prompt,
                        StabilityAiImageOptions.builder()
                                .withStylePreset(StyleEnum.CINEMATIC)
                                .withN(n)
                                .withHeight(height)
                                .withWidth(width)
                                .build())
        );
    }

}

You can customize further using the builder options, depending on your use case.
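For instance, a sketch of additional options (the exact builder methods depend on your Spring AI version, and the values here are illustrative):

StabilityAiImageOptions.builder()
        .withModel("stable-diffusion-v1-6")      // explicit model id
        .withStylePreset(StyleEnum.PHOTOGRAPHIC) // alternate style preset
        .withCfgScale(7.0f)                      // how strictly the image follows the prompt
        .withSteps(30)                           // diffusion steps: more detail, slower generation
        .withSeed(4242L)                         // fix the seed for reproducible output
        .build();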

Front End — React

You can also render the responses on the front end, as below —

Image Generation Demo
Chat Completion Demo
