Developing AI Powered Applications: Low/no code applications using Azure OpenAI (8/n)

Madhusudhan Konda
7 min read · Dec 19, 2023

Do you wish to develop AI-powered applications (xApps) but feel overwhelmed by the vast array of options and pathways available? If you want to quickly integrate a large language model (LLM) into your traditional application but the path for design, development, and deployment isn't clear, you are not alone.

The AI ecosystem has expanded exponentially in the last few months and keeps growing every day, with a fresh influx of frameworks, news, and updates.

If you don't know where to start your journey, I have a bit of good news for you! With its low-code/no-code mantra, Microsoft provides a dedicated OpenAI platform for developing xApps. In my view, the Azure OpenAI platform is an easy way to board the AI train without a steep learning curve.

Introducing Azure OpenAI

The Azure OpenAI platform facilitates low-code/no-code development of AI-powered applications. It allows us to create chatbots, text-summarisation tools, translation applications, ChatGPT-style conversational assistants, bring-your-own-data question-and-answering libraries, and more, all built on OpenAI models.

Under the umbrella of Azure AI services, Microsoft introduced tooling to develop applications that integrate with OpenAI's GPT models. The platform provides seamless RESTful access to OpenAI's models, meaning we can integrate OpenAI's LLMs (GPT-3.5 and GPT-4) by simply invoking REST endpoints.
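To make the "just a REST endpoint" point concrete, here is a minimal sketch that builds the URL, headers, and JSON body for an Azure OpenAI chat-completions call. Nothing is actually sent: the resource name, deployment name, and key placeholder are assumptions you would substitute with your own values from the Azure portal.

```python
import json

# Hypothetical resource and deployment names -- substitute your own.
RESOURCE = "my-openai-resource"
DEPLOYMENT = "gpt-35-turbo"
API_VERSION = "2023-05-15"

def build_chat_request(prompt):
    """Build the URL, headers and JSON body for an Azure OpenAI
    chat-completions call (nothing is sent here)."""
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    headers = {
        "Content-Type": "application/json",
        "api-key": "<YOUR-AZURE-OPENAI-KEY>",  # from the Azure portal
    }
    body = json.dumps({
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    })
    return url, headers, body

url, headers, body = build_chat_request("What is Azure OpenAI?")
print(url)
```

POSTing that body to the URL (for example with `requests.post`) returns the model's completion as JSON; note that Azure routes the call to a *deployment* you name yourself, rather than to a raw model name as the public OpenAI API does.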

The OpenAI service is invite-only: we need to fill out a form to request access. I was given access about a week after applying.

The wrapper that Azure OpenAI provides around OpenAI's models adds features such as content filtering, responsible-AI controls, security, and scalability.

If you already have an application (or an application stack) deployed on Azure, integrating it with OpenAI is straightforward: a RESTful call is all it takes.

Let's dig in and get our hands dirty exploring and experimenting with the Azure OpenAI service.

Azure OpenAI Workspace

Once you've received the email accepting your request to use the OpenAI service, head over to portal.azure.com and search for the Azure OpenAI dashboard.

Create a new Azure OpenAI workspace

Create a new workspace if you haven't already done so: pick a resource group (or create a new one), select the pricing tier (Standard S0 is the only tier available as I write this), and create the workspace. It might take a couple of minutes to be instantiated.

Once the workspace is ready, the first step is to open Azure OpenAI Studio, a web-based tool that lets us create and integrate applications using OpenAI. Click on "Go to Azure OpenAI Studio" to explore and deploy models:

Visit the OpenAI Studio

OpenAI Studio is where most of our time will be spent getting things ready for invoking OpenAI models.

Azure OpenAI Studio

Navigating to Azure OpenAI Studio shows us, at a glance, an overview of what we can do with the service. Here is where we choose the type of application we want to build: for example, a simple ChatGPT-style chat assistant, asking the DALL-E model to generate images from text inputs, summarising an article, and so on.

There is also support for playgrounds: click on the "Chat" menu under "Playground" in the left-hand menu to start a conversational chat with the model.

Chat Playground

In the chat playground, you can prompt the model and get an answer.
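Under the hood, the playground keeps the conversation as a growing list of role/content messages (system, user, assistant) and sends the whole list on each turn. A minimal sketch of that structure, with an illustrative helper and example turns of my own invention:

```python
# Sketch of how a chat transcript accumulates as role/content messages,
# the same structure the playground sends to the model on each turn.
def add_turn(history, role, content):
    """Append one message to the conversation history."""
    history.append({"role": role, "content": content})
    return history

# The system message sets the assistant's behaviour for the whole chat.
history = [{"role": "system", "content": "You are a helpful assistant."}]
add_turn(history, "user", "What is Azure OpenAI?")
# (the model's reply would come back and be appended as an "assistant" turn)
add_turn(history, "assistant", "It is Azure's managed access to OpenAI models.")

print(len(history))  # 3 messages so far
```

This is also why long playground sessions consume more tokens per request: every earlier turn travels with each new prompt.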

You can try DALL-E for image generation too:

Images generated by DALL-E

As you can see, you can provide sample prompts and let the system work for you. Behind the scenes, these playgrounds interact with OpenAI's respective models.

Developing a Chat Application

In this article, let's develop a simple chat application (similar to ChatGPT) based on an OpenAI model. We can start with a sample provided by Azure, so click on the "Chat playground" tile to get started.

You should see the Playground we saw a moment ago, where we tried prompting with a couple of questions. Leave the default settings as they are. Other than experimenting with prompts, there isn't much to do here; as I mentioned earlier, this is a no-code/low-code setup, so you are expected to write pretty much no code.

We will need to deploy this application so it can be accessed publicly by peers, colleagues, and friends: whoever has the access keys and the URL.

Click on the “Deploy to” button on the top right of the dashboard:

Deploying the chat application

Clicking on it lets you create a new web app; accept the option to deploy the application as a web app. We will need to fill in the form, as you'd expect:

Deploying a new webapp

Provide the required details (underscores are not allowed, so the name in the screenshot above was changed to mychatsampleappmk), including the location and pricing plan (pick the Free (F1) plan so you won't be charged). The app gets deployed to a VM, but it can take at least 15 minutes before you can access the application.

Once the application is successfully deployed, you have a ChatGPT-style application that can be accessed by authenticated users anywhere in the world!

Bring Your Own Data Application

Along the same lines, you can also create a "private data" application: upload your files to let the GPT LLMs answer questions in the context of your own documents.

On the Azure OpenAI Studio dashboard, click the "Bring your own data" tile to get started. You will need to provide the location from which you want to feed the data:

Pick the location from where the data will be fed

I chose a local directory to pick the files from; you will be asked to provide Blob Storage and Cognitive Search resources (you'll need to create them if it's your first time).

You need to provide an index name: the "table" where the uploaded data (embeddings) will be stored. I will be using a medical leaflet about Tamoxifen (a tablet usually given to breast-cancer patients), so I've named my index "medical-tamoxifen". We are not going to enable vector search for this application, so ignore that checkbox.
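Behind the scenes, the wizard creates a Cognitive Search index with that name. As a hedged illustration of what such an index looks like at the REST level, here is a sketch of the create-index request; the search-service name and the field schema are my own illustrative assumptions, since the wizard generates its own (richer) schema for you.

```python
import json

INDEX_NAME = "medical-tamoxifen"
SEARCH_SERVICE = "my-search-service"  # hypothetical Cognitive Search resource
API_VERSION = "2023-11-01"

# A PUT to this URL (with an "api-key" header) creates or updates the index.
url = (
    f"https://{SEARCH_SERVICE}.search.windows.net/indexes/"
    f"{INDEX_NAME}?api-version={API_VERSION}"
)

# A minimal, assumed schema: one key field plus a searchable content field.
index_definition = json.dumps({
    "name": INDEX_NAME,
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
    ],
})
print(url)
```

Since we skipped vector search, the index holds plain searchable text; enabling vector search would add an embedding field and a vector configuration to this definition.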

Acknowledge the form and move to the next stage, where you will be asked to pick the document to upload. Click the link to upload the document (or drag and drop it) so it gets copied to the given index.

Once the file has been uploaded, select "Keyword" as the search type in the next step. Finally, review all your inputs and create the application. You should see a spinner indicating that the data is being ingested:

Ingestion-in-progress spinner shows up for a few moments

Once the ingestion is complete, the application will be available for deployment. But before deploying it, let's test it:

In the chat area, input your prompt and check the completion:

The completion is fetched from leaflet data

Asking the prompt fetches the answer (completion) as expected. As you can see in the answer above, you'll find the references in the chat too; if you scroll down, you'll see them.
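Grounded answers like this typically embed citation markers such as [doc1], [doc2] in the completion text, pointing back at the retrieved passages. Assuming that marker format, here is a small sketch of how a client could pull the citations out of an answer; the sample answer string is invented for illustration.

```python
import re

# Grounded completions embed markers such as [doc1], [doc2] that point at
# the retrieved passages; a small helper to pull them out of the answer.
def extract_citations(completion):
    """Return the distinct [docN] markers in order of first appearance."""
    seen = []
    for marker in re.findall(r"\[doc\d+\]", completion):
        if marker not in seen:
            seen.append(marker)
    return seen

answer = ("Tamoxifen is used in the treatment of breast cancer [doc1] "
          "and is usually taken once a day [doc2][doc1].")
print(extract_citations(answer))  # ['[doc1]', '[doc2]']
```

A real client would then map each marker to the corresponding citation object returned alongside the completion, which is exactly what the references panel in the chat UI renders.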

Answers with references to the relevant parts of the document

Once we have happily tested the application, we need to deploy this AI-powered private-data application, just as we did for the chat application.

Once this application is deployed (it takes a bit of time, so be patient), it is available to access.

That's pretty much it, so let's wrap up.

Wrap up

Azure OpenAI provides a platform to develop AI-powered applications and integrate them with OpenAI's models. Azure's low-code/no-code mantra lets developers build such applications with very little code.


Madhusudhan Konda

Madhusudhan Konda is a full-stack lead engineer, mentor, and conference speaker. He delivers live online training on Elasticsearch, Elastic Stack & Spring Cloud.