Spring AI: An AI framework for Java developers

Artificial intelligence has been something of a fiesta for programmers over the last few years, and one language, Python, has been the undeniable belle of the ball. Java and other languages have been somewhat sidelined. But we are now entering a new phase, in which AI models are the central components and the pressing question is how to integrate their functionality into larger systems. That kind of integration happens to be a Java specialty. Even better for Java developers, the Spring framework has recently introduced Spring AI, which streamlines programming for a wide range of AI projects. With Spring AI, you can apply familiar Spring semantics and everything you already know about enterprise infrastructure to machine learning.

Could Java rival Python for AI development? Only time will tell, but Spring AI is one of several newer projects that raise the possibility. Let’s take a look.

What is Spring AI?

Spring AI aims to encapsulate a wide range of AI tools and providers, spanning libraries and frameworks in categories such as:

  • Natural language processing (NLP): OpenAI’s GPT, Google’s Gemini, Hugging Face Transformers
  • Computer vision: TensorFlow, PyTorch, OpenAI’s DALL-E
  • Speech recognition and synthesis: Google Speech-to-Text, Amazon Transcribe, Azure Speech Services
  • Recommendation systems: TensorFlow Recommenders, Amazon Personalize
  • Generative AI: Stable Diffusion, OpenAI’s DALL-E, Midjourney
  • Extract, transform, load (ETL): Vector store transformations

Spring AI also includes, or plans to include, specialized providers for anomaly detection, time series analysis, and reinforcement learning. You can find a full list of planned providers on the Spring AI overview page. Spring AI is currently focused on the LLM use case and supports ChatGPT directly from OpenAI or as an Azure service. Additional AI providers include Google, Hugging Face, and Amazon.

The idea going forward is to wrap these services in an abstraction that integrates a wide range of AI tooling into a consistent, Spring-style component system. In the Spring AI model, POJOs remain the building blocks of an application, now extended into the AI domain.

Currently, getting even a small chatbot to deliver coherent responses based on custom enterprise data can be an enormous undertaking. Efforts to simplify the process and make it smoother are welcome.

Set up a Spring AI project

One way to use Spring AI is to set up a new Spring Boot app for it. Enter the following using the Spring CLI:


spring boot new --from ai --name myProject

Or, if you already have an existing project, you can just add the following to it:


spring boot add ai

This command adds the spring-ai-bom dependency to an existing project.
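
Under the hood, adding Spring AI to a Maven build means importing Spring AI’s bill of materials so that the individual starters resolve to compatible versions. A hedged sketch of the resulting pom.xml entry (substitute the current Spring AI release for the version placeholder) looks something like this:


<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <!-- use the current Spring AI release -->
      <version>...</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>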

The Spring AI API

Spring AI’s API consists of several branches, with the broadest being the Model interface. Model provides a generic component that developers can use to integrate almost any kind of AI functionality into an application. The interface also acts as a common target for AI providers to make their platforms available within the Spring ecosystem.

In Spring AI, many different types of AI are exposed as implementations of the Model interface, including ChatModel, EmbeddingModel, ImageModel, and SpeechModel. There is also a streaming variant, StreamingModel, for providers that support streaming responses.

These model implementations encapsulate the work done by the provider and are consumed by higher-level abstractions such as the ChatClient implementation.
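
As a rough illustration, here is a minimal sketch of what consuming one of these models can look like, assuming an auto-configured ChatModel bean (class and method names follow the Spring AI documentation, though packages and signatures vary somewhat between releases):


import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.stereotype.Service;

@Service
public class ModelSketch {

	private final ChatModel chatModel; // injected by the provider’s starter autoconfiguration

	public ModelSketch(ChatModel chatModel) {
		this.chatModel = chatModel;
	}

	public String ask(String question) {
		// Every Model implementation exposes call(); here it takes a Prompt
		// and returns a provider-agnostic ChatResponse.
		ChatResponse response = chatModel.call(new Prompt(question));
		return response.getResult().getOutput().getContent();
	}
}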

Spring AI also supports function calling, enabling custom application code to expose an API the model can invoke while forming its responses (see the sketch after the list below). So far, function calling is supported for:

  • Anthropic Claude
  • Azure OpenAI
  • Google VertexAI Gemini
  • Groq
  • Mistral AI
  • Ollama
  • OpenAI
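
One documented pattern is to register a plain java.util.function.Function as a Spring bean and describe it so the model knows when to invoke it. Here is a hedged sketch; the WeatherRequest and WeatherResponse types and the bean name are illustrative, not part of Spring AI:


import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;

@Configuration
public class WeatherFunctionConfig {

	public record WeatherRequest(String city) {}
	public record WeatherResponse(double temperatureCelsius) {}

	@Bean
	@Description("Get the current temperature for a city") // helps the model decide when to call it
	public Function<WeatherRequest, WeatherResponse> currentWeather() {
		// A real implementation would call an external weather service here.
		return request -> new WeatherResponse(21.0);
	}
}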

Spring AI also includes ETL (extract, transform, load) support for vector databases, modeled as document readers, transformers, and writers. All the major vector store vendors are covered.
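
A minimal sketch of that read-transform-write pipeline, assuming a text file on the classpath and an auto-configured VectorStore bean (class names follow the Spring AI docs, though packages and signatures vary by release), might look like:


import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.reader.TextReader;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.ai.vectorstore.VectorStore;

public class IngestionSketch {

	private final VectorStore vectorStore; // auto-configured for whichever vector store you use

	public IngestionSketch(VectorStore vectorStore) {
		this.vectorStore = vectorStore;
	}

	public void ingest() {
		// Read: load raw text into Document objects (the path is illustrative).
		List<Document> documents = new TextReader("classpath:/data/faq.txt").get();
		// Transform: split long documents into chunks sized for embedding.
		List<Document> chunks = new TokenTextSplitter().apply(documents);
		// Load: write the chunks into the configured vector store.
		vectorStore.add(chunks);
	}
}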

Spring AI is also rolling out extensive embedding support. The EmbeddingModel interface abstracts the conversion of text into numeric format for a variety of providers.
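
A small hedged sketch, assuming an auto-configured EmbeddingModel bean (recent releases return a float[]; earlier ones returned a list of doubles):


import org.springframework.ai.embedding.EmbeddingModel;

public class EmbeddingSketch {

	private final EmbeddingModel embeddingModel; // supplied by the chosen provider’s starter

	public EmbeddingSketch(EmbeddingModel embeddingModel) {
		this.embeddingModel = embeddingModel;
	}

	public float[] embed(String text) {
		// Converts text into a numeric vector suitable for similarity search.
		return embeddingModel.embed(text);
	}
}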

Another area of complexity that Spring AI tackles is multimodality. This allows you to mix text and images. Here’s an example from the Spring AI documentation:


byte[] imageData = new ClassPathResource("/multimodal.test.png").getContentAsByteArray();
var userMessage = new UserMessage(
	"Explain what do you see in this picture?", // content
	List.of(new Media(MimeTypeUtils.IMAGE_PNG, imageData))); // media
ChatResponse response = chatModel.call(new Prompt(List.of(userMessage)));

Prompts help structure user input within whatever consistent framework your app requires, something like a view with variable interpolation. At first glance, prompts seem simple, but in fact they can entail quite a bit of complexity, including framing content and contextual information such as roles.
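
For example, a prompt can pair a system message that frames the model’s role with the user’s actual request. A hedged sketch in the style of the snippet above (the message text is illustrative):


var systemMessage = new SystemMessage("You are a terse assistant for a travel site."); // frames role and context
var userMessage = new UserMessage("Suggest three sights to see in Lisbon."); // the actual request
ChatResponse response = chatModel.call(new Prompt(List.of(systemMessage, userMessage)));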

The StructuredOutput interface aids in structuring the output of models, which is especially important when channeling that output into another system as input.
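
A hedged sketch of the idea, using the BeanOutputConverter class found in recent Spring AI releases (the record and the wiring here are illustrative):


// A record describing the shape we want the model’s answer to take.
record ActorFilms(String actor, List<String> movies) {}

var converter = new BeanOutputConverter<>(ActorFilms.class);
// getFormat() produces instructions asking the model to reply as JSON matching the record.
String reply = chatClient.call("List five films featuring Tom Hanks. " + converter.getFormat());
ActorFilms films = converter.convert(reply);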

Another interesting facet of AI development is testing, where one important approach is to use a model (possibly different from the primary one) to evaluate responses. Spring AI includes support for this need.

Spring AI application example

We can take a look at the workings of a simple example in the Spring AI Azure Workshop repository. The repo includes a few examples, but let’s look at the simplest. This is a project in a Maven layout, and the first thing to note is the application.properties file, which contains the following line:


# src/main/resources/application.properties
spring.ai.azure.openai.chat.options.deployment-name=gpt-35-turbo-16k

This creates a property with the value gpt-35-turbo-16k. The name spring.ai.azure.openai.chat.options.deployment-name is important because autoconfiguration ties it to a Spring bean configurator that will produce a ChatClient using that parameter. The following dependency in the pom.xml provides that client:



<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
</dependency>

In essence, when Spring scans the project looking for a ChatClient, it uses that property, along with the naming conventions in the Azure OpenAI starter, to create one. In the simple helloworld example we are looking at, that ChatClient is used by the controller:


package com.xkcd.ai.helloworld;

import org.springframework.ai.chat.ChatClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.Map;

@RestController
public class SimpleAiController {

	private final ChatClient chatClient;

	@Autowired
	public SimpleAiController(ChatClient chatClient) {
		this.chatClient = chatClient;
	}

	@GetMapping("/ai/simple")
	public Map<String, String> generation(
			@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
		return Map.of("generation", chatClient.call(message));
	}

}

This is a typical Spring REST controller, where the chatClient member is injected via a constructor annotated with @Autowired. That ChatClient is then used to handle the requests at /ai/simple. (A default is provided for the request parameter, so a request with no parameters will be treated as “Tell me a joke.”) The endpoint method returns a map with a “generation” key whose value is the return value of chatClient.call(message).

For all this to work, you need an API key for Azure. The key is set as an environment variable:


export SPRING_AI_AZURE_OPENAI_API_KEY=

You also need to tell the engine where the AI endpoint is located:


export SPRING_AI_AZURE_OPENAI_ENDPOINT=

With all those elements in place, you can run the project with $ mvn spring-boot:run. Now, if you visit localhost:8080/ai/simple, you should see an AI-generated joke.

Other examples in the Azure repository demonstrate how to layer on additional features to this basic frame. For example, you can easily add a prompt template to the example app:


// src/main/resources/prompts/joke-prompt.st
Tell me a {adjective} joke about {topic}

This resource is then used in the controller like so:


@Value("classpath:/prompts/joke-prompt.st")
private Resource jokeResource;

And then in the endpoint, you might add something like:


PromptTemplate promptTemplate = new PromptTemplate(jokeResource);
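
Putting those pieces together, an endpoint that uses the template might look something like this hedged sketch, assuming the jokeResource and chatClient fields shown earlier (the mapping path and parameter defaults are illustrative):


@GetMapping("/ai/joke")
public Map<String, String> jokeWithTemplate(
		@RequestParam(value = "adjective", defaultValue = "silly") String adjective,
		@RequestParam(value = "topic", defaultValue = "cows") String topic) {
	// Fill the {adjective} and {topic} placeholders defined in joke-prompt.st.
	PromptTemplate promptTemplate = new PromptTemplate(jokeResource);
	Prompt prompt = promptTemplate.create(Map.of("adjective", adjective, "topic", topic));
	ChatResponse response = chatClient.call(prompt);
	return Map.of("generation", response.getResult().getOutput().getContent());
}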

The case for Spring AI

An essential question to ask before adopting a technology is whether the return justifies the investment. In other words, weigh the complexity and effort against what the new tech brings to the table. A primary value proposition of a framework like Spring is reducing complexity: Spring seeks to make things more consistent and then deliver extra features on top of that consistent foundation.

If you are seeking simplicity, you might be tempted to start out by integrating the AI providers you use by making calls from your application code to OpenAI or Google APIs, for instance. This approach has a directness to it. But if your project is already using Spring, adopting Spring AI might make more sense, especially in the longer term. The more complex and ambitious your AI use cases are—and AI definitely tends towards sprawling complexity—the more you will appreciate the structure, consistency, and templating you find in Spring AI.
