How to Build AI-Powered Apps with LangChain and Azure AI Foundry in Minutes
Have you ever wished you could build your own AI-powered app—without needing to be a coder or data scientist?
Today, it’s not just possible. It’s easier than ever.
With LangChain and Azure AI Foundry, you can connect advanced AI models like Mistral or GPT-4o to build apps that reason, create, and even check their own output.
In this post, I’ll show you how to build an AI app in minutes—right inside VS Code in the Azure Portal—with zero infrastructure setup.
🚀 The AI Revolution Is Here
Let’s face it. Building AI apps used to be complicated.
You needed:
- Cloud servers
- API gateways
- Model fine-tuning
- Data pipelines
But not anymore.
Today, with LangChain and Azure AI Foundry, you can:
- Use pre-trained models right away
- Chain models together to create advanced workflows
- Run your code in the cloud, from the browser
It’s like having an AI co-pilot that helps you build smarter apps—without worrying about backend complexity.
🤖 Meet Your AI Co-Pilot: LangChain + Azure AI Foundry
LangChain is a developer-friendly toolkit that lets you:
- Connect to different AI models (like GPT, Mistral, Cohere, etc.)
- Chain tasks together (e.g., create > verify > summarize)
- Simplify complex AI pipelines into easy-to-use functions
Azure AI Foundry provides the infrastructure:
- You pick a model from their Model Catalog
- Deploy it with one click
- Get an endpoint and key to use in your apps
This combination is perfect for innovators, startups, and non-technical users who want to experiment with AI—without deep cloud knowledge.
🎯 Real-Life Example: AI That Writes AND Reviews Itself
Here’s a real use case you can build today:
Problem:
- Generate creative content, but make sure it’s safe and non-offensive.
Solution:
Use two AI models:
- Producer – Writes a poem
- Verifier – Checks for bad language
This is how advanced AI apps work in real life—creating AND verifying content automatically.
And you can build this pipeline in LangChain with Azure Foundry in just a few lines of code!
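To make the shape of that pipeline concrete before touching any cloud resources, here is a minimal pure-Python sketch. The `producer` and `verifier` functions are toy stand-ins for two deployed model calls (a real verifier would be another model, not a banned-word check); only the produce-then-verify structure is the point.

```python
def producer(topic: str) -> str:
    # Stand-in for the poem-writing model call.
    return f"A short poem about {topic}."

BANNED_WORDS = {"hate", "offensive"}

def verifier(text: str) -> dict:
    # Stand-in for the safety-checking model call:
    # flag the text if it contains any banned word.
    flagged = any(word in text.lower() for word in BANNED_WORDS)
    return {"text": text, "safe": not flagged}

def pipeline(topic: str) -> dict:
    # Produce, then verify - the same shape LangChain's | operator builds.
    return verifier(producer(topic))

result = pipeline("the sea")
print(result)
```

In the LangChain version shown later, each stand-in becomes a model call and the composition is written with the `|` operator instead of nested function calls.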
🛠️ Step-by-Step: Deploying a Model in Azure AI Foundry
Follow these steps to deploy your first AI model:
1️⃣ Go to Azure AI Foundry
- Sign in with your Azure account
2️⃣ Create a New Project
- Click "Create Project"
- Name your project (e.g., LangChainDemo)
- Choose your region and resource group
- Click "Create"
3️⃣ Deploy a Model
- Go to Models + Endpoints
- Click "Deploy Model"
- Select Mistral-Large-2411 from the model catalog
- Set authentication to Access Key (simpler for demos)
- Click "Deploy"
4️⃣ Copy Endpoint & Key
- After deployment, copy:
- Endpoint URL (ends with /models)
- Access Key
These will be used in your LangChain code.
🖥️ Run LangChain in Azure Portal Using VS Code
Azure provides VS Code in the browser via:
- Cloud Shell + VS Code Web UI
- Microsoft Dev Box or a VM with VS Code Web
No local setup needed!
Install dependencies:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install langchain langchain-azure-ai azure-identity azure-ai-projects
Export environment variables:
export AZURE_INFERENCE_ENDPOINT="https://<your-endpoint>.services.ai.azure.com/models"
export AZURE_INFERENCE_CREDENTIAL="<your-access-key>"
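A missing or misspelled variable is the most common reason the demo fails, so a quick sanity check before running helps. This small helper is just an illustration, not part of LangChain or the Azure SDK:

```python
import os

REQUIRED_VARS = ("AZURE_INFERENCE_ENDPOINT", "AZURE_INFERENCE_CREDENTIAL")

def missing_vars(env) -> list:
    # Return the names of required variables that are unset or empty.
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = missing_vars(os.environ)
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("Environment looks good.")
```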
💻 Demo: Translate Text with LangChain and Azure
Create a file called demo_langchain_azure.py:
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# Connect to Azure AI Foundry
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="Mistral-Large-2411"
)
# Build a prompt
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages([
    ("system", system_template),
    ("user", "{text}")
])
# Parse the output
parser = StrOutputParser()
# Chain the steps
chain = prompt_template | model | parser
# Run the chain
result = chain.invoke({"language": "French", "text": "Hello, how are you?"})
print("Translation:", result)
Run the code:
python demo_langchain_azure.py
Output:
Translation: Bonjour, comment ça va ?
🔍 View Traces in Azure AI Foundry
Once you've enabled tracing in your AI application, you can monitor and inspect all the steps your AI workflow performed.
Here’s how to view traces in Azure AI Foundry:
Steps to View Traces
1. Go to the Azure AI Foundry Portal.
2. Navigate to the Tracing section in the left-hand menu.
   - Tip: If you don’t see it, click ... More at the bottom of the menu to expand all options.
3. Identify the trace you created.
   - It might take a few seconds for a new trace to appear in the list.
   - Each trace shows detailed logs of the AI operations, including:
     - Inputs and outputs
     - Model invocations
     - Intermediate steps (if content recording is enabled)
4. Click a trace to view its details.
   - Use this view to debug, optimize, and understand your AI pipeline execution.
Why Use Tracing?
Tracing lets you:
- Debug AI workflows with full visibility
- Monitor performance and usage
- Capture input/output for audits or testing
All traces are stored in Azure Application Insights and can also be queried via Azure Monitor for deeper analysis.
Note:
For privacy and compliance, consider disabling content recording in production by setting:
export AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED=false
Final code with tracing:
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.monitor.opentelemetry import configure_azure_monitor  # pip install azure-monitor-opentelemetry

# Connect to your Azure AI Foundry project (find the connection string
# on the project's overview page and replace the placeholder below)
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<your-project-connection-string>",
)

# Fetch the Application Insights connection string linked to the project
# and route OpenTelemetry traces to it
application_insights_connection_string = project_client.telemetry.get_connection_string()
configure_azure_monitor(connection_string=application_insights_connection_string)
# Initialize the model
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="Mistral-Large-2411"  # the same deployment used earlier
)
# Create a prompt template for translation
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages([
    ("system", system_template),
    ("user", "{text}")
])
# Output parser to get a string result
parser = StrOutputParser()
# Build the chain
chain = prompt_template | model | parser
# Call the chain
result = chain.invoke({"language": "Spanish", "text": "Hello, how are you?"})
print("Translation:", result)
🔚 Call to Action
Building AI-powered apps is no longer just for big tech companies. With tools like LangChain and Azure AI Foundry, you can start innovating today—whether you're an entrepreneur, a developer, or part of a growing business.
For more tutorials, best practices, and AI development tips, subscribe to our newsletter or follow our video series on AI app development.
🚀 Need help deploying your AI solutions on Azure? Visit arinatechnologies.com for cloud consulting, AI architecture, and deployment support.
Thinking of bringing AI into your organization? If yes, please contact us—we’d love to help you start your AI journey.
💬 Leave a comment below if you’d like to see Part 2, where we cover vector search, document QA, and integrating AI with your website!