
Connecting with AI: Building AI Agents with TypeScript
This blog post is part of an independent series called Connecting with AI, authored by Felipe Mantilla, an AI engineering expert at Gorilla Logic. The series' goal is to make the world of AI more accessible, support those who want to learn more about the field, and establish foundations on the most interesting advancements in AI. You can find the original version of this post in Spanish on his Medium blog.
In recent months, AI has found its way into virtually every space. The term “agents” is buzzing everywhere, and companies are quick to integrate features that make API calls to LLM providers, branding themselves as “AI-friendly.” But what exactly are these agents? How do they work? And more importantly—how can you build one using TypeScript?
While AI is most commonly associated with Python, JavaScript remains one of the most widely used languages globally, especially for web development. Many companies already have codebases and engineering teams fluent in JavaScript, making TypeScript a valid and accessible option for integrating agents. Even though enterprise support exists, community adoption is still catching up. Still, it’s worth knowing how to do it.
Getting Started with OpenAI in TypeScript
Since ChatGPT and OpenAI are household names at this point, here's what you’ll need to use OpenAI’s API with TypeScript:
- Create a TypeScript-compatible project
- Install the `openai` package via npm
- Get an OpenAI token
- Start experimenting with the API
Creating a TypeScript-Compatible Project
There are multiple ways to set up a TypeScript project. Recently, Node.js announced native support for TypeScript in version 23. If you want more details on that, check out this article.
Another approach is to create a project from scratch, adding the dependencies and configurations manually. I put together a step-by-step guide for this process—check it out here.
For this guide, though, we’ll use a boilerplate available on GitHub. You can find it here. It already comes with built-in features like watchers, testing, and a build process.
To use the boilerplate, follow these steps:
Clone it from the repo.
Then run:
npm i && npm run build && npm run start
And that’s it—you’re ready to start developing your project using TypeScript.
Installing the `openai` Package from npm
Head here to check out OpenAI’s JavaScript documentation. To install the package, run the following command:
npm i openai
I made two additional tweaks. First, I added a `dev` script with file watchers:
"dev": "tsc -w -p tsconfig.json & node --watch build/src/main.js",
Second, I cleaned up `main.ts` and the associated test files.
Now OpenAI is officially part of our project's dependencies.
Get Your OpenAI Token
Here’s the tricky bit—OpenAI is a paid service. You’ll need a paid account to get an access token. (But don’t worry, if you don’t have one yet, keep reading—we’ll also explore open-source models you can use without an OpenAI subscription.)
Create a new OpenAI account or log into an existing one.
Then create an API key from the dashboard; it will be a long string starting with sk-.
Once you’ve got your token, be sure to add some credits to your account. Otherwise, your API calls will fail.
To keep things secure, create a `.env` file in your project and paste your token there.
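The `openai` SDK looks for the `OPENAI_API_KEY` variable by default, so the file only needs a single line (the value shown here is a placeholder, not a real key):

```
OPENAI_API_KEY=sk-your-key-here
```

Remember to add `.env` to your `.gitignore` so the key never lands in version control.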
And just like that, you're ready to bring your ideas to life!
Playing with the API
To use your `.env` file, you'll need to slightly tweak the `scripts` section in `package.json`:
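One common approach is preloading dotenv with Node's `-r` flag so every script sees the variables. A sketch, assuming the boilerplate's `build/src/main.js` entry point from earlier:

```json
{
  "scripts": {
    "start": "node -r dotenv/config build/src/main.js",
    "dev": "tsc -w -p tsconfig.json & node --watch -r dotenv/config build/src/main.js"
  }
}
```

Alternatively, you can skip the flag and `import 'dotenv/config'` at the top of your entry file; both achieve the same thing.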
Install `dotenv` to handle environment variables:
npm i dotenv
Now update your `main.ts` and confirm that your environment variables are accessible from your code.
Let’s Code!
We'll copy and paste the example code from the `openai` package's npm page, making some minor adjustments.
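A minimal version of that snippet might look like this (the model name and prompt are placeholders; the SDK reads `OPENAI_API_KEY` from the environment by default):

```typescript
import OpenAI from "openai";

async function main(): Promise<void> {
  // The SDK picks up OPENAI_API_KEY automatically, so no key is hard-coded.
  const client = new OpenAI();

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // any chat-capable model works here
    messages: [{ role: "user", content: "Write a haiku about TypeScript." }],
  });

  // The generated text lives in choices[0].message.content.
  console.log(completion.choices[0].message.content);
}

if (process.env.OPENAI_API_KEY) {
  main().catch(console.error);
} else {
  console.error("Set OPENAI_API_KEY in your .env file first.");
}
```

The guard at the bottom keeps the script from crashing with a missing-key error when the `.env` file isn't set up yet.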
If you run this code without purchasing OpenAI credits, the API will reject the call with a quota error.
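The exact wording changes over time, but the error body returned by the API is roughly:

```json
{
  "error": {
    "message": "You exceeded your current quota, please check your plan and billing details.",
    "type": "insufficient_quota",
    "param": null,
    "code": "insufficient_quota"
  }
}
```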
If you have added credits, you'll get a proper completion back.
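A typical Chat Completions response has this shape (the ID, timestamps, and token counts here are placeholders):

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1720000000,
  "model": "gpt-4o-mini-2024-07-18",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 12, "completion_tokens": 9, "total_tokens": 21 }
}
```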
Understanding the API Response
With that, we're ready. To wrap up this first section, let's analyze the properties available in the response. In this example, we're using the Chat Completions API.
Analyzing the response, it's clear that we should access `choices[0].message.content`. The rest of the response includes useful metadata, like:
- Number of tokens used (input + output)
- Why the model stopped generating (e.g., max tokens, stop sequence)
- Message role (e.g., user, assistant)
- Additional metadata about the interaction
This structure gives you a detailed view into the exchange and how the model interpreted your prompt.
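Since this series uses TypeScript, here's a small sketch of pulling those fields out of a response. The `ChatCompletionLike` interface is a hand-rolled subset for illustration only; the real SDK exports its own, richer types:

```typescript
// Minimal hand-rolled types covering only the fields we read here.
interface ChatMessage {
  role: string;
  content: string | null;
}

interface ChatCompletionLike {
  choices: { index: number; message: ChatMessage; finish_reason: string }[];
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Pull the assistant's text out of the response, defaulting to "".
function extractReply(completion: ChatCompletionLike): string {
  return completion.choices[0]?.message.content ?? "";
}

// Total tokens billed for the exchange (input plus output).
function tokensUsed(completion: ChatCompletionLike): number {
  return completion.usage?.total_tokens ?? 0;
}
```

Centralizing this access in tiny helpers also gives you one place to handle a `null` content field, which the API can return for some request types.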
A Quick Note on Tokens
What’s a token? It’s a chunk of text (usually a word or part of one) that the model uses to interpret language statistically. If you're curious about how text is broken into tokens, try OpenAI’s Tokenizer tool.
OpenAI pricing is based on the number of tokens processed. Longer prompts and responses = higher costs.
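For quick ballpark estimates before you call the API, a rough rule of thumb from OpenAI's docs is about 4 characters per token for English text. The helpers below are hypothetical sketches built on that heuristic, not real API values; use a proper tokenizer like tiktoken for exact counts:

```typescript
// Rough estimate only: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Cost estimate given a per-million-token price taken from the pricing page.
function estimateCostUSD(tokens: number, pricePerMillionTokens: number): number {
  return (tokens / 1_000_000) * pricePerMillionTokens;
}
```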
You can check pricing details depending on which model you use:
https://openai.com/api/pricing/
https://platform.openai.com/settings/organization/limits
Keep Exploring
The sky's the limit with OpenAI—try different use cases like audio, image, and complex text processing. Be bold. Experiment. Tinker around and push the boundaries of what’s possible.
What’s Next?
I’ll be posting more entries diving deeper into advanced and exciting topics. Stay tuned—you won’t want to miss them.