Developing LLM based Applications: Getting Started with LangChain (1/n)

Madhusudhan Konda
6 min read · Aug 26, 2023
Photo by Rahul Pandit: https://www.pexels.com/photo/blue-and-red-light-from-computer-1933900/

We all are in the midst of the AI storm: the recent surge in Generative Artificial Intelligence models is significantly transforming the tech industry.

In the current business landscape, organisations are eagerly integrating AI into their operational frameworks. They, along with the tech leadership, are exploring efficient ways to integrate the power of natural language processing (NLP) and artificial intelligence (AI) into their applications.

The demand is clear: end-users want intuitive interfaces, instant responses, and smarter applications, while organisations are looking to bring their products to market faster, amplify productivity tenfold, and, if possible, slash costs in half!

This article, and the subsequent ones, delves into how we can harness LLMs to tap into the intelligence of the Generative AI realm. I will be discussing one framework in particular, LangChain, that's destined to transform the way we build applications with LLMs.

Overview

Before we jump in and learn how to adopt LLMs in everyday applications, let's consider a sample set of scenarios where LLMs and applications meet:

  • Customer Support Chatbots: You have seen such chatbots popping up in many places already. Much like on e-commerce websites, bank customers often have queries about their account details, loan applications, or transaction histories. AI-driven chatbots can instantly answer these routine questions in the financial realm, improving customer satisfaction.
  • Personalised Financial Advice: AI-driven "robo-advisors" can provide customers with financial advice, tailored investment strategies, and savings tips based on their spending habits and financial goals.
  • Fraud Detection: If you are a fraud detection agent, you are in good hands: AI can analyse vast amounts of transaction data in real time to detect patterns consistent with fraudulent activity. The system can alert the bank or even freeze transactions based on predefined criteria.

If you are on the policing side of the game:

  • Predictive Policing: Using these AI-assisted models, police may be able to analyse historical crime data to predict where crimes are likely to occur, allowing them to plan better and roll out proactive measures.
  • Face Recognition Systems: AI-powered facial recognition can help police identify criminals from surveillance footage or crowds.
  • Social Media Monitoring: AI systems can scan social media platforms and other online spaces for radical content. This could help pinpoint potential threats and identify individuals who might be at risk of committing violence or being radicalised.

And if you are in the health side of the business:

Health apps: Users might want to describe their symptoms in natural language and get immediate feedback or advice.

Or a student/educator:

E-learning platforms: Students may wish to ask complex questions and receive explanations in understandable, human-like language.

The list is endless! Seriously, I can't think of an industry that couldn't use AI for its betterment. AI can be of great help in many facets of life, both digital and personal.

All these examples demand a potent integration of LLMs. But the pain points are consistent: high costs, potential latency, complex setups, and a steep learning curve for developers.

Fret not: there's a game-changing framework designed to bridge this gap, the LangChain framework. LangChain has become the de facto framework for working with LLMs seamlessly.

We will gently introduce ourselves to this framework in this article, before building a conversational application for employees in the next one.

Introducing LangChain

If you have not heard of LangChain, I can forgive you :)

As the marvel of GenAI LLMs emerged, LangChain's creators identified a significant gap.

While LLMs are impressive, they lack domain-specific insights; they’re trained on public data and don’t grasp the intricacies of, say, a music studio’s operations or an investment bank’s trading algorithms.

Directly feeding them vast datasets isn’t practical due to token limitations and potential high costs. This is the niche LangChain seeks to fill.

LangChain is a revolutionary framework, tailored for effortless integration of LLMs into general applications. With LangChain, the complexities of setting up and managing LLMs are abstracted away (you can see the examples in the article further down), giving developers a straightforward path to empower their applications with state-of-the-art language models.

Getting started

Let's get started with a simple project that uses LangChain and LLMs (we will use an OpenAI model in this article; hold on for Llama 2 model integration in the coming articles).

I have created a Python-based project (code is available here in my repository) to get our hands dirty learning LangChain.

The first step is to download and install the required Python libraries; you can run pip install on the requirements file:

# run pip to install the required libraries
pip install -r requirements.txt

Running this command will install the langchain and openai libraries, amongst others.
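For reference, the requirements file would list at least the following libraries (the exact, pinned versions are left to the repository, which remains the authoritative source):

```
langchain
openai
python-dotenv
```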

Don't forget to get an API key from the OpenAI site. Drop this API key into the .env file.
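The .env file is a simple key=value text file; given the variable name used later in the code, it would look like this (with your own key in place of the placeholder):

```
OPENAI_API_KEY=<your-openai-api-key>
```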

Create a main.py file and start coding. We first import all the required packages:

import os
import openai
import langchain
from dotenv import load_dotenv
from langchain.llms import OpenAI

Now that we have all the required imports, let's load the environment (the .env file holds the required API key) by invoking the load_dotenv() function.

We then fetch the API key from the environment and set it on the client: openai.api_key = os.getenv("OPENAI_API_KEY")

This is pretty much the scaffolding required for most programs we write against OpenAI.
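If you are curious what load_dotenv() does under the hood, here is a rough, stdlib-only sketch (a simplification of the real python-dotenv library, which also handles quoting, comments, and many edge cases):

```python
import os

def load_dotenv_sketch(path=".env"):
    # Read KEY=VALUE pairs from the file and put them into the
    # process environment, skipping blank lines and comments.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# After this runs, os.getenv("OPENAI_API_KEY") returns the key from .env.
```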

The next step is the easy one!

Let me show the code first and then explain:

llm = OpenAI()          # instantiate the model
prompt = "What is BTC"  # create the prompt
response = llm(prompt)  # invoke the OpenAI LLM
print(response)         # output the response

We instantiate the OpenAI model and pass the prompt to this model in the form of a question (“What is BTC”). We then print the answer to the console. That’s all!

There’s a lot that goes on behind the scenes but all that legwork has been hidden away by LangChain. It abstracts the mechanics and simplifies the invocation.

To put this in perspective, if you remember from my previous articles, the following is the code we would need to invoke OpenAI's API directly:

# A function that expects the prompt.
# It invokes the ChatCompletion.create method on the OpenAI model,
# collects the response, and returns the content by looking into
# the JSON response that was returned.

def GPTGenerate(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt},
        ]
    )
    return response.choices[0].message.content

All this direct invocation of OpenAI's APIs is abstracted away by simply calling llm = OpenAI() in our LangChain project code.

We can still improve the LangChain invocation using LangChain’s Prompt Templates.

Prompt Templates

Rather than hardcoding the prompts, we can use the prompt templates provided by LangChain. There are a few ways we could use them; two are shown in the snippets below:

from langchain.prompts import PromptTemplate

# A simple version of prompt template
prompt_template = PromptTemplate.from_template("What is BTC")
response = llm(prompt_template.format())  # invoke the OpenAI LLM
print(response)  # output the response

We create the PromptTemplate object directly from a question ("What is BTC"). We then pass the formatted prompt to the LLM.

We can use variables when setting the prompt template too, as the following code shows:

# Another version of prompt template, using input variables
movie_prompt_template = PromptTemplate(
    input_variables=["movie_name"],
    template="What is the synopsis of the movie {movie_name}"
)
movie_prompt = movie_prompt_template.format(movie_name="The Godfather")
response = llm(movie_prompt)  # invoke the OpenAI LLM
print(response)  # print the synopsis

When executed, you should get the Godfather’s synopsis printed to the screen.
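Under the hood, this kind of templating is close to plain Python string formatting. As a toy sketch (not LangChain's actual implementation), the behaviour can be pictured like so:

```python
class ToyPromptTemplate:
    # A minimal stand-in for LangChain's PromptTemplate:
    # it stores a template string and fills in named variables.
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # str.format raises a KeyError if a variable is missing
        return self.template.format(**kwargs)

toy = ToyPromptTemplate(
    input_variables=["movie_name"],
    template="What is the synopsis of the movie {movie_name}"
)
print(toy.format(movie_name="The Godfather"))
# -> What is the synopsis of the movie The Godfather
```

The real PromptTemplate adds validation, partial variables, and composition on top, but the mental model of "a string with named holes" carries over.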

We wrap up here. In the coming articles, we will look at how we can use chains to create multi-step flows.

Here’s my code repository. Don’t forget to follow me/clap to show me some encouragement :)

Me @ Medium || LinkedIn || Twitter || GitHub

Written by Madhusudhan Konda

Madhusudhan Konda is a full-stack lead engineer, mentor, and conference speaker. He delivers live online training on Elasticsearch, Elastic Stack & Spring Cloud.
