
How to Build with Generative AI
Part 1: Getting Started with Large Language Models (LLMs) via APIs
Introduction: What are LLMs and APIs?
Welcome to the first part of our series on building with generative AI! Before we get our hands dirty, let's understand the two core concepts we'll be working with: Large Language Models (LLMs) and APIs.
An LLM is a powerful AI model trained on a massive amount of text and code. This training allows it to understand, generate, and process human language with remarkable fluency. Think of it as a super-intelligent digital brain for text.
An API (Application Programming Interface) is a set of rules that allows different applications to talk to each other. In our case, the API is the bridge that lets your code send a request (like a prompt) to an LLM and receive a response (like generated text). Using an API means you don't need to host or manage the massive LLM on your own computer.
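To make that concrete, the exchange is just structured text (JSON) travelling over HTTP: your code sends a small document containing the prompt, and the API returns a document containing the model's output. The field names below are purely illustrative; each provider defines its own format, and Step 2 uses a real one.

// Illustrative sketch only -- real field names vary by provider (see Step 2).
const request = { prompt: "Write a haiku about the ocean." };   // what your code sends
const apiResponse = { text: "Blue waves rise and fall..." };    // what the API sends back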
Step 1: Get an API Key
To use an LLM via its API, you first need to get a secret API key. This key authenticates your requests and tracks your usage. While there are many providers to choose from, the example in this tutorial calls Google's Gemini API.
Follow these general steps to get your key:
- Go to a provider's developer platform (e.g., OpenAI's platform or Google AI Studio).
- Sign up or log in to your account.
- Navigate to the "API Keys" or "Developer" section.
- Follow the instructions to create a new secret key. Important: Copy this key immediately and store it somewhere secure, for example in an environment variable (see the sketch after this list). You will not be able to view it again!
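One common way to keep the key out of your source code is to read it from an environment variable. The sketch below assumes a Node.js environment; the variable name GEMINI_API_KEY is just a convention you pick yourself.

// Node.js: read the key from an environment variable instead of hardcoding it.
// Set it in your shell first, for example:  export GEMINI_API_KEY="your-key-here"
const apiKey = process.env.GEMINI_API_KEY;

if (!apiKey) {
  throw new Error('Missing GEMINI_API_KEY environment variable.');
}

Note that a browser has no safe place to hide a secret key, so client-side apps typically route requests through a small backend that holds the key on their behalf.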
Step 2: Make Your First API Call
Now that you have your key, let's write a small piece of code to make a basic request. This example uses JavaScript, which is great for web development. You can run it in the browser (for example, from a simple HTML file) or in Node.js 18 or later, where the fetch API is available without extra packages.
// This is a placeholder for your actual API key.
// It should be loaded from a secure environment variable, not hardcoded.
const apiKey = 'YOUR_API_KEY_HERE';
const apiUrl = 'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-05-20:generateContent';

async function generateText(prompt) {
  try {
    // The request body: a single user turn containing the prompt text.
    const payload = {
      contents: [{
        role: "user",
        parts: [{ text: prompt }]
      }]
    };

    // Send the prompt to the model as a POST request.
    const response = await fetch(`${apiUrl}?key=${apiKey}`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    // Surface HTTP errors (e.g. a bad API key) instead of failing later.
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }

    // Parse the JSON response and pull out the generated text.
    const result = await response.json();
    const generatedText = result?.candidates?.[0]?.content?.parts?.[0]?.text;

    if (!generatedText) {
      throw new Error('No text found in the API response.');
    }

    return generatedText;
  } catch (error) {
    console.error('Error generating text:', error);
    return 'Failed to generate text.';
  }
}

// Example usage
const userPrompt = "What is the capital of France?";
generateText(userPrompt).then(text => {
  console.log(text);
});
This code performs the following actions:
- Sets up your API key and the model's API URL.
- Defines an asynchronous function `generateText` that takes a prompt as input.
- Inside the function, it creates a `payload` object with your prompt.
- It uses the `fetch` API to send a `POST` request to the LLM.
- It awaits the response, checks that the request succeeded, parses the JSON, and extracts the generated text.
- Finally, it calls the function with an example prompt and logs the result.
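Once the basic call works, you will usually want some control over the output. The Gemini generateContent endpoint accepts an optional generationConfig object alongside contents; the field names below follow the public API docs, but treat this as a sketch and confirm them against the current reference.

// A sketch: the same payload with optional generation settings.
const payload = {
  contents: [{
    role: "user",
    parts: [{ text: "Write a two-sentence product description for a reusable water bottle." }]
  }],
  generationConfig: {
    temperature: 0.7,      // higher values produce more varied output
    maxOutputTokens: 256   // cap the length of the response
  }
};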