
Chat and Conversational AI: Building a Stateful, Interactive Chat Application with Gemini
Welcome to the Chat and Conversational AI chapter! You've already learned how to make a basic API call to the Gemini model to generate a single response. Now, we'll take that concept a big step further and build a full-featured, stateful chat application.
A stateful chat application is one that "remembers" the past conversation. This is crucial for building a natural and coherent chat experience, as the AI needs context from previous messages to provide meaningful responses. We'll achieve this by sending the entire conversation history to the Gemini API with each new message.
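To make the "send the whole history" idea concrete, here is a minimal sketch in plain JavaScript, independent of React and of the real API (fakeModelReply is a stand-in for the actual model call, used only for illustration):

```javascript
// Conversation history: each turn is appended, and the FULL array
// would be sent to the model with every new request.
const history = [];

// Hypothetical stand-in for the model: returns a canned reply.
function fakeModelReply(userText) {
  return `You said: "${userText}"`;
}

function sendMessage(userText) {
  history.push({ role: 'user', content: userText });
  // In the real app, the entire `history` array goes to the API here,
  // so the model sees everything that came before.
  const reply = fakeModelReply(userText);
  history.push({ role: 'model', content: reply });
  return reply;
}

sendMessage('Hello!');
sendMessage('What did I just say?');

// After two turns, the history holds four messages (user/model pairs).
console.log(history.length); // 4
console.log(history[0].role, history[3].role); // user model
```

Because the full array is resent every turn, the model can answer the second question by reading the first exchange, which is exactly the behavior we want from a stateful chat.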
This tutorial will use React for the front end and the Gemini API for the conversational back end.
Step 1: Setting Up the React Component
First, let's set up the main App component. We'll need a few key state variables to manage our application's state:
- messages: An array that will hold our entire conversation history. Each message will be an object with a role ('user' or 'model') and content.
- input: A string that will store the current text the user is typing in the input field.
- isLoading: A boolean to indicate when we're waiting for the Gemini API to respond. This is important for user experience.
We'll also use a useRef hook to enable automatic scrolling to the bottom of the chat window as new messages are added.
import React, { useState, useEffect, useRef } from 'react';

export default function App() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  const chatEndRef = useRef(null);

  // ... rest of the component will go here
}
The useEffect hook will handle the auto-scrolling. It runs every time the messages array changes.
useEffect(() => {
  chatEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
Step 2: Handling User Input and State Updates
Next, we'll create the handleSubmit function, which is called when the user sends a message. This function does three things:
- It prevents the form's default submission behavior.
- It adds the new user message to the messages state.
- It calls our API function, passing the updated messages array.
const handleSubmit = async (e) => {
  e.preventDefault();
  if (input.trim() === '') return;

  // Add the user's message to the conversation history
  const userMessage = { role: 'user', content: input };
  setMessages((prevMessages) => [...prevMessages, userMessage]);
  setInput('');
  setIsLoading(true);

  try {
    // Call the API with the full chat history
    const response = await callGeminiAPI([...messages, userMessage]);
    // Add the AI's response to the conversation history
    const aiMessage = { role: 'model', content: response };
    setMessages((prevMessages) => [...prevMessages, aiMessage]);
  } catch (error) {
    console.error('Error calling Gemini API:', error);
    const errorMessage = { role: 'model', content: "Sorry, an error occurred." };
    setMessages((prevMessages) => [...prevMessages, errorMessage]);
  } finally {
    setIsLoading(false);
  }
};
Step 3: The Stateful API Call
This is the most critical part of the tutorial. The callGeminiAPI function is what makes our chat stateful. Instead of just sending the user's latest message, we send a formatted array containing the entire conversation history. The Gemini API is designed to handle this, using the full context to generate its response.
The payload's contents property is an array of message objects, with the role and parts properties correctly formatted for the API. We also include an exponential backoff mechanism to handle potential network issues or rate limiting, which helps make our app reliable.
const callGeminiAPI = async (currentChatHistory) => {
  // Convert our { role, content } messages into the { role, parts }
  // shape the Gemini API expects.
  const formattedChatHistory = currentChatHistory.map((msg) => ({
    role: msg.role,
    parts: [{ text: msg.content }],
  }));

  const payload = { contents: formattedChatHistory };

  const apiKey = ""; // Supply your Gemini API key here
  const apiUrl = `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-05-20:generateContent?key=${apiKey}`;

  const fetchOptions = {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  };

  // Retry with exponential backoff (plus random jitter) on failure.
  let response = null;
  let retries = 0;
  const maxRetries = 3;
  const initialDelay = 1000;

  while (retries < maxRetries) {
    try {
      const fetchResponse = await fetch(apiUrl, fetchOptions);
      if (!fetchResponse.ok) {
        throw new Error(`HTTP error! status: ${fetchResponse.status}`);
      }
      response = await fetchResponse.json();
      break;
    } catch (error) {
      retries++;
      if (retries >= maxRetries) { throw error; }
      const delay = initialDelay * Math.pow(2, retries - 1) + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }

  if (!response || !response.candidates || response.candidates.length === 0) {
    throw new Error('Invalid or empty API response.');
  }
  return response.candidates[0].content.parts[0].text;
};
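To see what this formatting step produces, here is an illustrative payload for a short two-turn conversation (the message text is made up for the example; the shape matches what the function builds):

```javascript
// Illustrative request body in the shape the generateContent endpoint
// expects: each entry has a role and a parts array of text segments.
const payload = {
  contents: [
    { role: 'user',  parts: [{ text: 'What is the capital of France?' }] },
    { role: 'model', parts: [{ text: 'The capital of France is Paris.' }] },
    { role: 'user',  parts: [{ text: 'How many people live there?' }] },
  ],
};

// The last 'user' entry is the new message; everything before it is
// the context that lets the model resolve "there" to Paris.
console.log(payload.contents.length); // 3
```

Notice that our app stores messages as { role, content }, so the map step in callGeminiAPI is what translates them into this { role, parts } shape before each request.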
Step 4: Building the User Interface
The final step is to build the UI using JSX and Tailwind CSS. The App component will render a list of messages by mapping over the messages array. It will also conditionally render a loading message when isLoading is true.
The messages are styled differently based on their role ('user' vs. 'model') to make the conversation easy to follow. The input form at the bottom is a simple, fixed component that triggers our handleSubmit function.
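One common way to handle the role-based styling is a small helper that maps a message's role to Tailwind classes; the helper below is illustrative (the names and class choices are ours, not from the final code), with the JSX it would support shown in comments:

```javascript
// Hypothetical helper: map a message's role to Tailwind classes.
// 'user' bubbles align right with a solid color; 'model' bubbles
// align left with a neutral background.
function bubbleClasses(role) {
  return role === 'user'
    ? 'self-end bg-blue-600 text-white rounded-lg px-4 py-2'
    : 'self-start bg-gray-200 text-gray-900 rounded-lg px-4 py-2';
}

// In the component, the message list would then render roughly as:
//   {messages.map((msg, i) => (
//     <div key={i} className={bubbleClasses(msg.role)}>{msg.content}</div>
//   ))}
//   {isLoading && <div className={bubbleClasses('model')}>Thinking...</div>}
console.log(bubbleClasses('user').includes('self-end')); // true
```

Keeping the styling decision in one function means the JSX stays readable and the loading indicator can reuse the 'model' styling for visual consistency.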
You can copy and paste the complete code below to get a fully functional app.