1. Overview

In this tutorial, we’ll learn how to call OpenAI ChatGPT APIs in Spring Boot. We’ll create a Spring Boot application that will generate responses to a prompt by calling the OpenAI ChatGPT APIs.

2. OpenAI ChatGPT APIs

Before we start, let’s explore the OpenAI ChatGPT API we’ll use. We’ll call the create chat completion API to generate responses to a prompt.

2.1. API Parameters and Authentication

Let’s look at the API’s mandatory request parameters:

  • model – the version of the model we’ll send requests to. A few versions of the model are available; we’ll use gpt-3.5-turbo, the latest version publicly available at the time of writing
  • messages – the prompts to the model. Each message requires two fields: role and content. The role field specifies the sender of the message: “user” for our prompts and “assistant” for the model’s replies. The content field is the actual message text
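For instance, a multi-turn conversation passes the earlier replies back in the messages array, so the model has context for the next answer (the exchange below is purely illustrative):

```json
"messages": [
  {"role": "user", "content": "Hello!"},
  {"role": "assistant", "content": "Hello there, how may I assist you today?"},
  {"role": "user", "content": "Tell me a joke."}
]
```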

To authenticate with the API, we’ll generate an OpenAI API key. We’ll set this key in the Authorization header when calling the API.

A sample request in cURL format would look like this:

$ curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Additionally, the API accepts a number of optional parameters to modify the response.

In the following sections, we’ll focus on a simple request, but let’s first look at a few optional parameters that can help in tweaking the response:

  • n – can be specified if we want to increase the number of responses to generate. The default value is 1.
  • temperature – controls the randomness of the response; higher values produce more random output. The default value is 1.
  • max_tokens – limits the maximum number of tokens in the response. The default value is infinity, which means the response will be as long as the model can generate. Generally, it’s a good idea to set this to a reasonable number to avoid very long responses and the higher cost that comes with them.
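Putting these together, a request body that asks for two shorter, less random responses might look like this (the parameter values here are just illustrative):

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "Hello!"}],
  "n": 2,
  "temperature": 0.2,
  "max_tokens": 100
}
```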

2.2. API Response

The API response will be a JSON object with some metadata and a choices field. The choices field will be an array of objects. Each object will have a message field whose content field contains the response to the prompt.

The number of objects in the choices array will be equal to the optional n parameter in the request. If the n parameter is not specified, the choices array will contain a single object.

Here’s a sample response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

The usage field in the response will contain the number of tokens used in the prompt and the response. This is used to calculate the cost of the API call.
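As an illustration, we can estimate the cost of a call from the usage field. Note that the per-token rates below are hypothetical placeholders, not OpenAI’s actual prices; check OpenAI’s pricing page for current values:

```java
public class CostEstimator {

    // Hypothetical per-1K-token rates, for illustration only;
    // see OpenAI's pricing page for the actual values
    private static final double PROMPT_RATE_PER_1K = 0.0015;
    private static final double COMPLETION_RATE_PER_1K = 0.002;

    // estimates the cost of a call from the token counts in the usage field
    public static double estimateCost(int promptTokens, int completionTokens) {
        return promptTokens / 1000.0 * PROMPT_RATE_PER_1K
          + completionTokens / 1000.0 * COMPLETION_RATE_PER_1K;
    }

    public static void main(String[] args) {
        // token counts taken from the sample response above
        System.out.println(CostEstimator.estimateCost(9, 12));
    }
}
```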

3. Code Example

We’ll create a Spring Boot application that will use OpenAI ChatGPT APIs. To do so, we’ll create a Spring Boot REST API that accepts a prompt as a request parameter, passes it to the OpenAI ChatGPT API, and returns the response as a response body.

3.1. Dependencies

First, let’s create a Spring Boot project. We’ll need the Spring Boot Starter Web dependency for this project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

3.2. DTOs

Next, let’s create a DTO that corresponds to the request parameters of the OpenAI ChatGPT API:

public class ChatRequest {

    private String model;
    private List<Message> messages;
    private int n;
    private double temperature;

    public ChatRequest(String model, String prompt) {
        this.model = model;

        // sensible defaults for the optional parameters;
        // the API rejects n = 0, so we must not leave it at the int default
        this.n = 1;
        this.temperature = 1.0;

        this.messages = new ArrayList<>();
        this.messages.add(new Message("user", prompt));
    }

    // getters and setters
}

Let’s also define the Message class:

public class Message {

    private String role;
    private String content;

    // constructor, getters and setters
}

Similarly, let’s create a DTO for the response:

public class ChatResponse {

    private List<Choice> choices;

    // constructors, getters and setters
    
    public static class Choice {

        private int index;
        private Message message;

        // constructors, getters and setters
    }
}
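To sanity-check this mapping, we can parse the sample response from earlier with Jackson (which spring-boot-starter-web already pulls in) and navigate to the reply. This is a minimal sketch using Jackson’s tree model rather than our DTOs:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ResponseParseDemo {

    // extracts the reply text, which lives at choices[0].message.content
    public static String extractReply(String json) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(json);
        return root.path("choices").get(0).path("message").path("content").asText();
    }

    public static void main(String[] args) throws Exception {
        String json = """
            {
              "choices": [{
                "index": 0,
                "message": { "role": "assistant", "content": "Hello there!" },
                "finish_reason": "stop"
              }]
            }
            """;
        System.out.println(extractReply(json)); // prints "Hello there!"
    }
}
```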

3.3. Controller

Next, let’s create a controller that will accept a prompt as a request parameter and return the response as a response body:

@RestController
public class ChatController {
    
    @Qualifier("openaiRestTemplate")
    @Autowired
    private RestTemplate restTemplate;
    
    @Value("${openai.model}")
    private String model;
    
    @Value("${openai.api.url}")
    private String apiUrl;
    
    @GetMapping("/chat")
    public String chat(@RequestParam String prompt) {
        // create a request
        ChatRequest request = new ChatRequest(model, prompt);
        
        // call the API
        ChatResponse response = restTemplate.postForObject(apiUrl, request, ChatResponse.class);
        
        if (response == null || response.getChoices() == null || response.getChoices().isEmpty()) {
            return "No response";
        }
        
        // return the first response
        return response.getChoices().get(0).getMessage().getContent();
    }
}

Let’s look at some important parts of the code:

  • We used the @Qualifier annotation to inject a RestTemplate bean that we’ll create in the next section
  • Using the RestTemplate bean, we called the OpenAI ChatGPT API using the postForObject() method. The postForObject() method takes the URL, the request object, and the response class as parameters
  • Finally, we read the response’s choices list and returned the first reply

3.4. RestTemplate

Next, let’s define a custom RestTemplate bean that will use the OpenAI API key for authentication:

@Configuration
public class OpenAIRestTemplateConfig {

    @Value("${openai.api.key}")
    private String openaiApiKey;

    @Bean
    @Qualifier("openaiRestTemplate")
    public RestTemplate openaiRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        restTemplate.getInterceptors().add((request, body, execution) -> {
            request.getHeaders().add("Authorization", "Bearer " + openaiApiKey);
            return execution.execute(request, body);
        });
        return restTemplate;
    }
}

Here, we added an interceptor to the RestTemplate that sets the Authorization header on every outgoing request.

3.5. Properties

Finally, let’s provide the properties for the API in the application.properties file:

openai.model=gpt-3.5-turbo
openai.api.url=https://api.openai.com/v1/chat/completions
openai.api.key=your-api-key
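We shouldn’t commit a real key to version control. One option is to rely on Spring Boot’s relaxed binding, which lets an environment variable named OPENAI_API_KEY supply the openai.api.key property instead:

```shell
export OPENAI_API_KEY=your-api-key
```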

4. Running the Application

We can now run the application and test it in the browser:

[Screenshot: ChatGPT response when calling the API in the browser]
As we can see, the application generated a response to the prompt. Please note that the response may vary as it is generated by the model.
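We can also call the endpoint from the command line (assuming the application runs on the default port 8080):

```shell
$ curl "http://localhost:8080/chat?prompt=Hello"
```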

5. Conclusion

In this tutorial, we explored the OpenAI ChatGPT API to generate responses to prompts. We created a Spring Boot application that calls the API to generate responses to prompts.

The code examples for this tutorial are available over on GitHub.
