
Connecting with AI: Building AI Agents with TypeScript

This blog post is part of an independent series called Connecting with AI, authored by Felipe Mantilla, an AI engineering expert at Gorilla Logic. The series' goal is to make the world of AI more accessible, support those who want to learn more about the field, and establish foundations on the most interesting advancements in AI. You can find the original version of this post in Spanish on his Medium blog.

In recent months, AI has found its way into virtually every space. The term “agents” is buzzing everywhere, and companies are quick to integrate features that make API calls to LLM providers, branding themselves as “AI-friendly.” But what exactly are these agents? How do they work? And more importantly—how can you build one using TypeScript?

While AI is most commonly associated with Python, JavaScript remains one of the most widely used languages globally, especially for web development. Many companies already have codebases and engineering teams fluent in JavaScript, making TypeScript a valid and accessible option for integrating agents. Enterprise support for AI tooling in the JavaScript ecosystem already exists, even if community adoption is still catching up to Python's, so it's well worth knowing how to do it.

Getting Started with OpenAI in TypeScript

Since ChatGPT and OpenAI are household names at this point, here's what you’ll need to use OpenAI’s API with TypeScript:

  1. Create a TypeScript-compatible project
  2. Install the openai package via npm
  3. Get an OpenAI token
  4. Start experimenting with the API

Creating a TypeScript-Compatible Project

There are multiple ways to set up a TypeScript project. Recently, Node.js announced native support for TypeScript in version 23. If you want more details on that, check out this article.

Another approach is to create a project from scratch, adding the dependencies and configurations manually. I put together a step-by-step guide for this process—check it out here.

For this guide, though, we’ll use a boilerplate available on GitHub. You can find it here. It already comes with built-in features like watchers, testing, and a build process.

To use the boilerplate, follow these steps:


Clone it from the repo.

Then run:

npm i && npm run build && npm run start

And that’s it—you’re ready to start developing your project using TypeScript.

Installing the openai Package from npm

Head here to check out OpenAI’s JavaScript documentation. To install the package, run the following command:

npm i openai

I made two additional tweaks:

Added a dev script with file watchers:

"dev": "tsc -w -p tsconfig.json & node --watch build/src/main.js", 

And cleaned up main.ts and the associated test files.

Now OpenAI is officially part of our project's dependencies.

Get Your OpenAI Token

Here’s the tricky bit—OpenAI is a paid service. You’ll need a paid account to get an access token. (But don’t worry, if you don’t have one yet, keep reading—we’ll also explore open-source models you can use without an OpenAI subscription.)

Create a new OpenAI account or log into an existing one.

Then create an API key from your dashboard. It will be a long string starting with the sk- prefix.

Once you’ve got your token, be sure to add some credits to your account. Otherwise, your API calls will fail.


To keep things secure, create a .env file in your project and paste your token there.
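A minimal .env can look like this. The variable name OPENAI_API_KEY is the one the openai SDK reads by default; the value shown is just a placeholder:

```
# .env - keep this file out of version control (add it to .gitignore)
OPENAI_API_KEY=sk-proj-your-key-here
```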


And just like that, you're ready to bring your ideas to life!

Playing with the API

To use your .env file, you’ll need to slightly tweak the scripts section in package.json.
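One way to make that tweak is to preload dotenv with Node's -r flag (dotenv/config is the package's preload entry point). This is a sketch that assumes the boilerplate compiles to build/src/:

```json
{
  "scripts": {
    "start": "node -r dotenv/config build/src/main.js",
    "dev": "tsc -w -p tsconfig.json & node --watch -r dotenv/config build/src/main.js"
  }
}
```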

Install dotenv to handle environment variables:

npm i dotenv

Now update your main.ts and confirm that your environment variables are accessible from your code.
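As a quick sanity check, main.ts can print a masked version of the token. Here, maskToken is a small helper of our own (not part of any SDK), and the code assumes dotenv is preloaded by the npm script:

```typescript
// main.ts - sanity-check that the token from .env is visible to the process.
// Assumes dotenv is preloaded by the npm script (e.g. node -r dotenv/config).

// Small helper of our own: shows only a short prefix so the secret
// never ends up in logs.
function maskToken(token: string): string {
  return token.length > 8 ? `${token.slice(0, 6)}...` : "(too short)";
}

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  console.error("OPENAI_API_KEY is missing - check your .env file.");
} else {
  console.log(`Token loaded: ${maskToken(apiKey)}`);
}
```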


Let’s Code!

We'll adapt the example code from the openai package's npm page, making some minor adjustments.


If you run this code without purchasing OpenAI credits, you’ll probably get an error response (typically a 429 insufficient_quota error telling you to check your plan and billing details).

If you have added credits, you should see a proper response: the model’s reply printed to the console.

Understanding the API Response

With that, we're ready. To wrap up this first section, let's analyze the properties available in the response. In this example, we’re using the Chat Completions API.
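To make that concrete, here is a trimmed-down sample of a Chat Completions response. The field values are illustrative, not real API output:

```typescript
// A trimmed-down sample of a Chat Completions response (illustrative values).
const completion = {
  id: "chatcmpl-abc123",
  object: "chat.completion",
  model: "gpt-4o-mini",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "Hello! How can I help?" },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 12, completion_tokens: 7, total_tokens: 19 },
};

// The generated text lives here:
const text = completion.choices[0].message.content;
console.log(text); // "Hello! How can I help?"
```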


Analyzing the response, it's clear that we should access choices[0].message.content. The rest of the response includes useful metadata, like:

  • Number of tokens used (input + output)
  • Why the model stopped generating (e.g., max tokens, stop sequence)
  • Message role (e.g., user, assistant)
  • Additional metadata about the interaction

This structure gives you a detailed view into the exchange and how the model interpreted your prompt.

A Quick Note on Tokens

What’s a token? It’s a chunk of text (usually a word or part of one) that the model uses to interpret language statistically. If you're curious about how text is broken into tokens, try OpenAI’s Tokenizer tool.


OpenAI pricing is based on the number of tokens processed. Longer prompts and responses = higher costs.
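As a back-of-the-envelope sketch, token-based pricing boils down to simple arithmetic. The rates in this example are hypothetical placeholders, not real OpenAI prices; always check the pricing page:

```typescript
// Rough cost estimate for a single request.
// The per-million-token rates passed in are hypothetical, not real prices.
function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  inputPerMillionUSD: number,
  outputPerMillionUSD: number,
): number {
  return (
    (inputTokens / 1_000_000) * inputPerMillionUSD +
    (outputTokens / 1_000_000) * outputPerMillionUSD
  );
}

// e.g. 1,000 input tokens and 500 output tokens at hypothetical rates
// of $0.15 / $0.60 per million tokens:
console.log(estimateCostUSD(1_000, 500, 0.15, 0.6).toFixed(5)); // "0.00045"
```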

You can check pricing and your account’s usage limits for each model:

  • Pricing: https://openai.com/api/pricing/
  • Rate and usage limits: https://platform.openai.com/settings/organization/limits

Keep Exploring

The sky's the limit with OpenAI—try different use cases like audio, image, and complex text processing. Be bold. Experiment. Tinker around and push the boundaries of what’s possible.

What’s Next?

I’ll be posting more entries diving deeper into advanced and exciting topics. Stay tuned—you won’t want to miss them.
