ChatGPT 101: Developing GPT Applications

Madhusudhan Konda
6 min read · Jun 11, 2023

ChatGPT’s user interface is anything but fancy: it is about the most basic, non-glossy, minimalist UI you’ll come across, yet it was adopted, happily, by millions across the globe almost instantly. Its primary purpose is to let users hold conversations with the language model. Have you ever seen people complaining about Amazon’s retail UI?

API Integration

While it may not cater to the preferences of software engineers who typically desire more advanced features and customisation options, it serves the purpose of generating responses effectively.

We want to create applications that interact with the LLM. In this and the next couple of articles, we look at developing applications using the ChatGPT APIs, including an application that works with the LLM on custom, private organisational data.

By leveraging ChatGPT APIs, developers can seamlessly integrate the power of the language model into their own applications, enabling them to build interactive chatbots, virtual assistants, customer support systems, and much more.

Let’s dig in! We start with NodeJS (we will go over Python in the next installment; though I’m a hardcore Java guy, I’ll save the Java version for later).

Symptom Checker Application using NodeJS

The Symptom Checker is a GPT-based application that lists the symptoms of a given disease. For example, ask for the symptoms of “Flu”, and it prints the top 5 symptoms to the console, as in the image below:

Top five symptoms of Flu printed as a table

Remember, we are not building a UI for this application — we will do that using Streamlit or Flask with Python in the coming articles. For now, we go with the backend code using a command line interface.

Building Symptom Checker NodeJS application

I am assuming you have NodeJS installed already; if not, go over here and get it installed.

Project

We create a brand-new project for this purpose, called symptom-checker-gpt. So, on your OS, create a symptom-checker-gpt folder where all our code will live (I’d suggest checking out my repository, available here, if you don’t want to start from scratch).

The following snippet checks out my repository — but as I said you don’t have to follow this path, you can create a local project by all means.

cminds@Madhusudhans-MacBook-Pro apps % git clone https://github.com/madhusudhankonda/symptom-checker-gpt.git
cminds@Madhusudhans-MacBook-Pro apps % cd symptom-checker-gpt
cminds@Madhusudhans-MacBook-Pro apps % npm init --yes

The npm init command will create a package.json — open up this file and add “type”: “module” to it.
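For reference, a minimal package.json might look like the sketch below after that edit (the name and version fields will be whatever npm init generated for you; only the "type" entry matters here, as it lets us use ES-module import syntax in main.js):

```json
{
  "name": "symptom-checker-gpt",
  "version": "1.0.0",
  "main": "main.js",
  "type": "module"
}
```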

Once you are in the folder, create a main.js file, which is our main entry point into the application. In the terminal, issue the “touch main.js” command to create an empty main.js file. We will write our code in this file.

Open the folder in your editor (I use Microsoft Visual Studio Code, but you can open it in any IDE of your choice) and start developing code.

You can also ask ChatGPT to write all this code for you — though I’d suggest that learning by doing it ourselves sticks in our minds far longer than being spoon-fed!

Environment

We need the openai node module to be installed, so let’s get it downloaded and installed on our local machine. Issue the following command to install the openai module:

npm install openai

Development

As with any software integration, we need to integrate our application with the ChatGPT API and invoke the appropriate endpoints. In this instance, we invoke the createChatCompletion method on the openai module.

Edit main.js, import the openai module, and create an instance of the OpenAIApi class, as shown below:

import { Configuration, OpenAIApi } from "openai";

// create an instance of the OpenAIApi class
const openai = new OpenAIApi(config);

As you can see, the OpenAIApi class requires a configuration object, which holds our API key. Once you’ve registered for an OpenAI account, you can create API keys by visiting the API keys section of your account on the platform site. You should be able to create a new API key from the “View API keys” section on the account profile.
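Hardcoding the key works for a quick test, but it is safer to read it from an environment variable so the key never ends up in source control. A minimal sketch of that idea (the variable name OPENAI_API_KEY is just a common convention I’m assuming here, and getApiKey is a hypothetical helper):

```javascript
// getApiKey: return the API key from the environment, or null if it
// is missing or blank. Accepting the env object as a parameter makes
// the helper easy to test.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  return key && key.trim().length > 0 ? key.trim() : null;
}

// Usage with the Configuration class from the article:
//   const config = new Configuration({ apiKey: getApiKey() });
console.log(getApiKey() ? "API key found" : "OPENAI_API_KEY not set");
```

Export the variable before running, e.g. `export OPENAI_API_KEY=sk-...` in the terminal.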

The following snippet is the configuration object:

const config = new Configuration({
apiKey: "YOUR_API_KEY",
});

Once the OpenAIApi instance is available to us, we invoke the createChatCompletion method on it, as shown below:

const symptoms = "Top 5 Flu Symptoms as a table";

openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: symptoms }],
})
  .then((res) => {
    console.log(res.data.choices[0].message.content);
  })
  .catch((e) => {
    console.log(e);
  });

The code is straightforward: the createChatCompletion method expects model and messages fields. We use “gpt-3.5-turbo” as the model, hardcoded as the first field. The second field, messages, consists of a role (set to user) and content, where our prompt goes.

In this example, the prompt is held in the symptoms constant, a hardcoded prompt declared just above this call (we hardcode it to “Top 5 Flu Symptoms” for now; we will see how to change this in the coming articles).
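The response from createChatCompletion nests the generated text a few levels deep, which is why the code reads res.data.choices[0].message.content. A sketch with a mocked, simplified response object to show the shape (the sample content is made up, and real responses carry more fields, such as usage and finish_reason):

```javascript
// A simplified mock of a chat completion response object.
const res = {
  data: {
    choices: [
      {
        message: {
          role: "assistant",
          content: "| Symptom |\n| Fever |",
        },
      },
    ],
  },
};

// The first (and, with default settings, only) choice holds the
// assistant's reply; its content field is the generated text.
const reply = res.data.choices[0].message.content;
console.log(reply);
```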

The full code for main.js looks like this:

import { Configuration, OpenAIApi } from "openai";

// configuration object holding the API key
const config = new Configuration({
  apiKey: "YOUR_API_KEY",
});

// Instance of the OpenAIApi class
const openai = new OpenAIApi(config);

// Hardcoded prompt
const symptoms = "Top 5 Flu Symptoms as a table";

openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: symptoms }],
})
  .then((res) => {
    console.log(res.data.choices[0].message.content);
  })
  .catch((e) => {
    console.log(e);
  });

That’s pretty much all the code we need to invoke ChatGPT! There are no servers to deploy, no virtual machines to provision, no Docker images to build.

Once the file is saved, issue the node main.js command to get the ball rolling. You should see output something like the below:

That’s it, we’ve built a GPT based application within minutes. Awesome!

We can change the symptoms prompt to whatever we wish and rerun the application. Alternatively, we can improve the application by asking the user for input and passing that input on to the GPT model.

The code snippet below does exactly that: it asks you to enter a disease, prints out the top 5 symptoms, and then awaits your next input. The code is available in my GitHub repository or here:

import { Configuration, OpenAIApi } from "openai";
import readline from "readline";

const config = new Configuration({
  apiKey: "YOUR_API_KEY",
});
const openai = new OpenAIApi(config);

// Creating the read interface
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
  terminal: false,
});

// Reading user input, line by line
rl.on("line", (line) => {
  const prompt = "What are the top 5 symptoms of " + line + "?";
  console.log("Prompt: ", prompt);

  openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
  })
    .then((res) => {
      console.log(res.data.choices[0].message.content);
    })
    .catch((e) => {
      console.log(e);
    });
});

This time, the Symptom Checker waits for the user’s input. Of course, we can improve this even further by adding validation and input criteria!
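As a sketch of the kind of validation just mentioned: a small helper that rejects empty or non-alphabetic input before it is sent to the API. The helper name and the exact rules here are illustrative assumptions, not part of the article’s code; tune the pattern to your needs (for example, it would reject names containing digits).

```javascript
// isValidDisease: accept only non-empty names made of letters,
// spaces, and hyphens, starting with a letter (e.g. "Flu" and
// "Hay fever" pass; "" and "123" fail).
function isValidDisease(input) {
  const trimmed = input.trim();
  return trimmed.length > 0 && /^[A-Za-z][A-Za-z\s-]*$/.test(trimmed);
}

// Inside rl.on("line", ...), guard before calling the API:
//   if (!isValidDisease(line)) {
//     console.log("Please enter a valid disease name.");
//     return;
//   }
```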

That’s pretty much it: we have a working application that acts as an assistant for checking our symptoms.

Be aware that not all the information ChatGPT provides is correct, so do take its outputs with a pinch of salt.

In the next article, we will explore a working Python application that goes beyond simply asking the ChatGPT model for answers. We will also delve into the exciting realm of leveraging the large language model (LLM) on our own data sets. This application will showcase the versatility of the ChatGPT API and demonstrate how it can be harnessed to process and generate responses based on our own custom data.

Code is available in my repo here.

Stay tuned!

Me @ Medium || LinkedIn || Twitter || GitHub


Madhusudhan Konda

Madhusudhan Konda is a full-stack lead engineer, mentor, and conference speaker. He delivers live online training on Elasticsearch, the Elastic Stack, and Spring Cloud.