
3 posts tagged with "Containers"


Deploy n8n on Azure App Service (Portal Guide)

· 9 min read

This guide walks you through what n8n is, why it’s so popular, and then the click-through Azure Portal steps to deploy it cleanly on Azure App Service for Linux (Web App for Containers). The recipe below is the “known-good” baseline we used successfully, including the exact App Settings and Health Check path that make App Service happy.


What is n8n?

n8n is an open-source workflow automation platform. Think of it as a visual way to connect APIs, databases, and services together—like LEGO for integrations.

  1. Visual workflow builder (drag-drop nodes)
  2. Massive integration surface (HTTP, DBs, clouds, apps)
  3. Self-hostable (no vendor lock-in), extensible, and scriptable
  4. Great for low-code automation, but friendly to developers too

Why is n8n popular?

  1. Open source & vendor-neutral — run it where you want.
  2. Low-code UX — business users can compose automations; devs can extend.
  3. Cost-effective — keep control of infra, cost, and privacy.
  4. Dev-friendly — add custom nodes, call APIs, integrate with CI/CD.

Why run n8n on Azure App Service?

Azure App Service is a PaaS that gives you:

  1. Easy container hosting (bring a public Docker image, set a port)
  2. Scale & reliability (scale up/out without re-architecting)
  3. Built-in monitoring/security (App Insights, access restrictions, TLS)
  4. CI/CD support and managed platform updates

In short: you focus on n8n; Azure handles the undifferentiated heavy lifting.


Slide-style outline

  1. What is n8n? — Open-source automation with a visual builder and tons of integrations.
  2. Why is n8n popular? — Open, flexible, low-code + dev-friendly. Great for demos & production.
  3. Why Azure? — Scalable, secure, integrated monitoring, easy CI/CD, full container control.
  4. Deployment Overview — Create RG → Create App Service (Linux/Container) → Set container image → Configure port/env → Health check → Start.
  5. Environment Variables — Key vars to make Azure reverse proxy and n8n happy.
  6. Networking & Monitoring — Optional VNet integration; enable App Insights.
  7. Recap — Pull image, set app port 5678, env vars, health check, restart → done.

Architecture at a glance

This diagram shows how a user’s HTTPS request reaches an n8n container running on Azure App Service (Linux), and which platform components and settings make it work reliably.

n8n Reference Architecture

What you’re seeing

  1. User Browser → Azure Front End
    The Azure Front End terminates TLS and routes traffic to your container.

  2. App Service Plan (Linux)
    Hosts two containers:

    1. Kudu sidecar (8181) for SSH, log streaming, and management.
    2. n8n container listening on 0.0.0.0:5678 to serve the editor/API.

  3. Routing & Health

    1. Requests are forwarded to the n8n container on port 5678.
    2. A health probe targets /healthz to keep the site warm and enable SSH.

Key takeaways (match these in your config)

  1. Tell App Service the app port:
    WEBSITES_PORT=5678, PORT=5678
  2. Bind n8n to IPv4 inside the container:
    N8N_LISTEN_ADDRESS=0.0.0.0
  3. Set a health check path:
    Monitoring → Health check → /healthz
  4. Platform hygiene:
    Always On = On (B1+), Startup Command = empty,
    WEBSITES_ENABLE_APP_SERVICE_STORAGE=true

Public URL variables (add after first successful boot)

  1. N8N_PROTOCOL=https
  2. N8N_HOST=your-app.azurewebsites.net
  3. WEBHOOK_URL=https://your-app.azurewebsites.net/

If you see timeouts, first confirm App port = 5678 in Container settings and the two app settings WEBSITES_PORT + PORT are set to 5678, then re-check the health path.
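Once the app exists, a quick way to confirm the probe path from your own machine is a curl against the public host. This is a sketch only; the app name below is hypothetical, so substitute the exact host shown on your Overview blade:

```shell
# Hypothetical app name — replace with the host Azure shows you.
APP_HOST="n8n-portal-demo.azurewebsites.net"

# Print only the HTTP status code; 200 means the health path responds.
# -m 10 caps the attempt at 10 seconds so a hung probe fails fast.
if command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null -m 10 -w "%{http_code}\n" "https://${APP_HOST}/healthz" || true
fi
```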


Prerequisites

  1. An Azure subscription with permission to create resource groups and App Service resources
  2. A publicly accessible Docker image: n8nio/n8n (we’ll pin a version)
  3. Basic familiarity with Azure Portal

Step-by-Step (Azure Portal)

This is the exact flow that worked end-to-end. We keep the config minimal first so the platform’s startup probe passes, then add optional variables.

0) Create a Resource Group

  1. Azure Portal → Resource groups → + Create
  2. Resource group name: n8n-rg (or your choice)
  3. Region: choose a region near your users
  4. Review + create → Create

1) Create the App Service (Linux, Container)

  1. Azure Portal → Create a resource → App Service
  2. Project details
    1. Subscription: select your sub
    2. Resource Group: select the RG you just created
  3. Instance details
    1. Name: e.g., n8n-portal-demo → your default URL will look like
      https://n8n-portal-demo.azurewebsites.net
      (Some regions append -01 automatically; use whatever Azure shows.)
    2. Publish: Container
    3. Operating System: Linux
  4. Plan
    1. Create new App Service plan (Linux)
    2. SKU: B1 or higher (enables Always On)

2) Container (image) settings

  1. Image source: Other container registries
  2. Registry: Public
  3. Server URL: https://index.docker.io
  4. Image and tag: n8nio/n8n:1.108.2 (pin a known version; avoid latest initially)
  5. App port: 5678 (critical)

Save and continue to create the Web App.

3) Minimal App Settings to boot successfully

App Service → Configuration → Application settings → + New setting
Add exactly these (keep others out for now to reduce variables):

| Name | Value | Why |
| --- | --- | --- |
| WEBSITES_PORT | 5678 | Tells the front end which port the container listens on |
| PORT | 5678 | Some stamps honor PORT; harmless to set both |
| WEBSITES_ENABLE_APP_SERVICE_STORAGE | true | Enables the persistent storage area for App Service |
| WEBSITES_CONTAINER_START_TIME_LIMIT | 1200 | Gives the startup probe more time on first boot |
| N8N_LISTEN_ADDRESS | 0.0.0.0 | Ensures an IPv4 bind that App Service can reach |

Save the settings.
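If you prefer the CLI, the same minimal settings can be applied in one az command. This is a sketch assuming the resource group and app names used in this guide (substitute your own), run from somewhere the az CLI is available, such as Azure Cloud Shell:

```shell
# Hypothetical names from this guide — substitute your own.
RG="n8n-rg"
APP="n8n-portal-demo"
# Space-separated KEY=VALUE pairs; left unquoted below so az receives each pair.
SETTINGS="WEBSITES_PORT=5678 PORT=5678 WEBSITES_ENABLE_APP_SERVICE_STORAGE=true WEBSITES_CONTAINER_START_TIME_LIMIT=1200 N8N_LISTEN_ADDRESS=0.0.0.0"

if command -v az >/dev/null 2>&1; then
  az webapp config appsettings set \
    --resource-group "$RG" --name "$APP" \
    --settings $SETTINGS || true
fi
```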

4) Health check

  1. Monitoring → Health check
  2. Path: /healthz
  3. Save

n8n exposes /healthz and returns 200; this helps the startup probe pass quickly.

5) General Settings

  1. Settings → Configuration → General settings
  2. Always On: On (B1 or higher)
  3. Startup Command: (empty) — the official n8nio/n8n image starts itself
  4. HTTPS Only: On (recommended)

6) Full recycle

  1. Click Stop (wait ~20–30 seconds) → Start the app
    (Stop/Start forces re-creation and re-probing; Restart sometimes isn’t enough.)
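The Stop/Start cycle can also be scripted. A sketch with the same hypothetical names as above; the sleep simply gives the platform time to tear the old container down before the restart:

```shell
RG="n8n-rg"             # hypothetical resource group name
APP="n8n-portal-demo"   # hypothetical app name

if command -v az >/dev/null 2>&1; then
  az webapp stop --resource-group "$RG" --name "$APP" || true
  sleep 30   # let the platform fully tear the container down
  az webapp start --resource-group "$RG" --name "$APP" || true
fi
```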

7) Test the default URL

  1. Open: https://your-app-name.azurewebsites.net (If your region adds -01, your app host will be ...azurewebsites.net with that suffix; use the exact URL shown in the Overview or logs.)

If it’s reachable, congrats — the container is live and the platform probes are passing. Now add the optional “public URL” variables.


Add n8n “public URL” variables (after it’s reachable)

Back to Configuration → Application settings and add:

| Name | Value |
| --- | --- |
| N8N_PORT | 5678 |
| N8N_PROTOCOL | https |
| N8N_HOST | <your-app>.azurewebsites.net |
| WEBHOOK_URL | https://<your-app>.azurewebsites.net/ |

Save → Restart.

If you add these too early and hit a redirect/host check during probe, the app can flap. That’s why we start minimal, then add them.


Optional security & compatibility settings

Add these as needed:

| Name | Suggested Value | Why |
| --- | --- | --- |
| N8N_ENCRYPTION_KEY | a long random string (32+ chars) | Encrypts credentials on disk |
| N8N_BASIC_AUTH_ACTIVE | true | Basic auth for the editor |
| N8N_BASIC_AUTH_USER | <user> | Basic auth username |
| N8N_BASIC_AUTH_PASSWORD | <strong-password> | Basic auth password |
| DB_SQLITE_POOL_SIZE | 1 | Satisfies the deprecation warning for SQLite |
| N8N_RUNNERS_ENABLED | true | Enables task runners (forward-compat) |
| N8N_EDITOR_BASE_URL | https://<your-app>.azurewebsites.net/ | Explicit editor base URL |
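One convenient way to produce a suitable N8N_ENCRYPTION_KEY value is openssl's random generator; the command below emits 32 random bytes rendered as 64 hex characters:

```shell
# 32 random bytes as 64 hex characters — paste the output into N8N_ENCRYPTION_KEY.
N8N_ENCRYPTION_KEY="$(openssl rand -hex 32)"
echo "$N8N_ENCRYPTION_KEY"
```

Keep this value backed up somewhere safe: n8n cannot decrypt stored credentials without it.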

Using PostgreSQL instead of SQLite (prod)

Provision Azure Database for PostgreSQL – Flexible Server, then set:

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=<host>
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=<db>
DB_POSTGRESDB_USER=<user>
DB_POSTGRESDB_PASSWORD=<password>
DB_POSTGRESDB_SCHEMA=public
DB_POSTGRESDB_SSL=true

Keep WEBSITES_ENABLE_APP_SERVICE_STORAGE=true even with Postgres so n8n can persist local files it needs.


How to find your URL

  1. App Service Overview page shows the default URL.
  2. Logs may also echo it (e.g., Editor is now accessible via: https://app...azurewebsites.net).
  3. Some Azure regions append -01 in the hostname automatically—use the exact host Azure gives you.

Verify & Logs

  1. Log stream: App Service → Logs → Log stream to watch container output.
  2. SSH: App Service → SSH (works once the startup probe passes).
  3. Inside the container you can check:
ss -ltnp | grep 5678
curl -I http://127.0.0.1:5678/
curl -I http://127.0.0.1:5678/healthz

Troubleshooting (common gotchas)

  1. Site times out
    Ensure App port = 5678 (Container settings) and App Settings include WEBSITES_PORT=5678 and PORT=5678. Start with the minimal settings list; add N8N_HOST/PROTOCOL only after it’s reachable.

  2. SSH session closes immediately
    The startup probe is failing and the site keeps recycling. Trim to minimal settings, pin the image tag (e.g., n8nio/n8n:1.108.2), set Health check to /healthz, then Stop/Start.

  3. “InvalidTemplateDeployment / SubscriptionIsOverQuotaForSku”
    Your region/SKU quota is 0. Pick a different region or SKU, or request a quota increase (App Service vCPU) for that region.

  4. Using latest tag
    New latest builds may change behavior. Pin a version while you validate.

  5. Access Restrictions
    If enabled, ensure a rule allows public access to the site during testing.


Recap

  1. n8n is an open-source automation powerhouse with a visual builder and endless integrations.
  2. Azure App Service gives you a simple, scalable, secure home for the n8n container.
  3. The key to a painless deployment is: App port 5678, WEBSITES_PORT/PORT = 5678, N8N_LISTEN_ADDRESS=0.0.0.0, and a Health check at /healthz.
  4. Start minimal so the platform stabilizes, then layer on the public URL and security vars.

Happy automating! 🚀

Call to Action

Choosing the right platform depends on your organization’s goals and constraints. For ongoing tips and deep dives on cloud computing, subscribe to our newsletter. Prefer video? Follow our series on cloud comparisons.

Ready to deploy n8n on Azure—or set up your broader cloud foundation? Contact us and we’ll help you plan, secure, and ship with confidence.

Build Your Azure Kubernetes Service (AKS) Cluster in Just 10 Minutes!

· 4 min read
Cloud & AI Engineering
Arina Technologies

Kubernetes has become a go-to solution for deploying microservices and managing containerized applications. In this blog, we will walk through a real-world demo of how to deploy a Node.js app on Azure Kubernetes Service (AKS), referencing the hands-on transcript and official Microsoft Docs.




Introduction


Kubernetes lets you deploy web apps, data-processing pipelines, and backend APIs on scalable clusters. This walkthrough will guide you through:


  1. Preparing the app
  2. Building and pushing to Azure Container Registry (ACR)
  3. Creating the AKS cluster
  4. Deploying the app
  5. Exposing it to the internet


🧱 Step 1: Prepare the Application


Start by organizing your code and creating a Dockerfile:


FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
CMD ["node", "app.js"]
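Before pushing anywhere, you can sanity-check the Dockerfile locally. A sketch assuming an illustrative image tag and a running Docker daemon:

```shell
# Illustrative tag — any name works for a local smoke test.
IMAGE="myapp:v1"

if command -v docker >/dev/null 2>&1; then
  # Build from the Dockerfile in the current directory, then run detached,
  # mapping host port 8080 to the container's port 80.
  docker build -t "$IMAGE" . && \
  docker run --rm -d -p 8080:80 --name myapp-smoke "$IMAGE" || \
  echo "docker build/run failed — check the Dockerfile and that the daemon is running"
fi
```

After it starts, http://localhost:8080 should serve the app; `docker stop myapp-smoke` cleans up.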

For this setup we'll use the code from the repo https://github.com/Azure-Samples/aks-store-demo. Clone the repo and navigate into its directory.

The sample application you create in this tutorial uses the docker-compose-quickstart YAML file from the repository you cloned.

If you see an error like:

error during connect: Get "http://%2F%2F.%2Fpipe%2FdockerDesktopLinuxEngine/v1.46/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.config-hash%22%3Atrue%2C%22com.docker.compose.project%3Daks-store-demo%22%3Atrue%7D%7D": open //./pipe/dockerDesktopLinuxEngine: The system cannot find the file specified.

ensure that Docker Desktop is running.


📦 Step 2: Create a resource group using the az group create command

Open Cloud Shell

az group create --name arinarg --location eastus

📦 Step 3: Build and Push to Azure Container Registry


Create your Azure Container Registry:


az acr create --resource-group arinarg --name arinaacrrepo --sku Basic

Login and build your Docker image directly in the cloud:


az acr login --name arinaacrrepo
az acr build --registry arinaacrrepo --image myapp:v1 .

📦 Step 4: Build and push the sample app images to your ACR using the az acr build command

az acr build --registry arinaacrrepo --image aks-store-demo/product-service:latest ./src/product-service/
az acr build --registry arinaacrrepo --image aks-store-demo/order-service:latest ./src/order-service/
az acr build --registry arinaacrrepo --image aks-store-demo/store-front:latest ./src/store-front/

This step builds the images and stores them in your registry under:
arinaacrrepo.azurecr.io/
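To confirm the images actually landed in the registry, az can list repositories and tags. A sketch using the registry name from the earlier az acr create step:

```shell
ACR="arinaacrrepo"   # registry name created earlier in this walkthrough

if command -v az >/dev/null 2>&1; then
  # List every repository in the registry, then the tags for one of them.
  az acr repository list --name "$ACR" --output table || true
  az acr repository show-tags --name "$ACR" \
    --repository aks-store-demo/store-front --output table || true
fi
```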


☸️ Step 5: Create the AKS Cluster


Use the following command:


az aks create --resource-group arinarg --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys --attach-acr arinaacrrepo

Then configure kubectl:

az aks get-credentials --resource-group arinarg --name myAKSCluster


🚀 Step 6: Deploy the App


Now apply the Kubernetes manifest:


# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: arinaacrrepo.azurecr.io/myapp:v1
          ports:
            - containerPort: 80

Apply it:


kubectl apply -f deployment.yaml
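A quick way to confirm the rollout succeeded, using the deployment name from the manifest above (requires kubectl configured against your cluster):

```shell
DEPLOY="myapp-deployment"   # name from deployment.yaml

if command -v kubectl >/dev/null 2>&1; then
  # Blocks until all replicas are ready, or gives up after the timeout.
  kubectl rollout status "deployment/${DEPLOY}" --timeout=120s || true
  # Show the pods the deployment created and which nodes they landed on.
  kubectl get pods -l app=myapp -o wide || true
fi
```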


🌐 Step 7: Expose the App via LoadBalancer


We will use a LoadBalancer to expose the service to the internet...


# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Apply it:


kubectl apply -f service.yaml

Get the external IP:


kubectl get service myapp-service

Open the IP in your browser, and your app should now be live!
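The EXTERNAL-IP column shows <pending> until Azure finishes provisioning the load balancer. For scripts, a jsonpath query pulls the IP directly once it is assigned (service name from the manifest above):

```shell
SVC="myapp-service"   # name from service.yaml

if command -v kubectl >/dev/null 2>&1; then
  # Extract just the load balancer's public IP from the service status.
  EXTERNAL_IP="$(kubectl get service "$SVC" \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}' || true)"
  echo "App URL: http://${EXTERNAL_IP}/"
fi
```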


📝 Conclusion


Kubernetes on Azure is powerful and accessible. You've just deployed a containerized Node.js app to AKS, following best practices for building, deploying, and scaling.


🔚 Call to Action

Choosing the right platform depends on your organization's needs. For more insights on cloud computing, tips, and the latest trends in technology, subscribe to our newsletter, or follow our video series on cloud comparisons.


Need help launching your app on Azure AKS? Visit CloudMySite.com for expert help in cloud deployment and DevOps automation.


Interested in getting your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.


💬 Comment below:
Which tool is your favorite? What do you want us to review next?

The Ultimate AWS ECS and EKS Tutorial

· 5 min read

In the evolving landscape of AWS (Amazon Web Services), two giants stand tall for container orchestration: ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). With the rise of microservices architecture, the decision between ECS and EKS becomes crucial. This guide dives deep into the intricacies of both platforms, helping you make an informed decision based on your specific needs.

"The only way to do great work is to love what you do. If you haven't found it yet, keep looking. Don't settle. As with all matters of the heart, you'll know when you find it." - Steve Jobs


The Shift to Container Orchestration

The transition from traditional infrastructure to cloud-native paradigms has sparked a containerization revolution. Containers have become pivotal in modern application development and deployment, offering a way to encapsulate an application's environment, dependencies, and configurations into a single package. This evolution addresses the infamous "it works on my machine" problem, ensuring consistency across development, testing, and production environments.


Why Container Orchestration Matters

Container orchestration revolutionizes application development, deployment, and management by enhancing portability, scalability, and resource efficiency. It simplifies the deployment and scaling of containerized applications, automates essential tasks, and facilitates seamless communication between containers. AWS offers robust solutions for container orchestration, notably ECS and EKS, catering to diverse deployment needs and complexities.


ECS: Elastic Container Service

ECS is AWS's fully managed container orchestration service designed to run Docker containers. It simplifies container deployment by abstracting infrastructure complexities and integrates seamlessly with AWS services such as IAM, Secrets Manager, and KMS. ECS supports both EC2 and Fargate launch types, allowing for either serverless operation or more granular control over instances.


EKS: Elastic Kubernetes Service

EKS provides a managed Kubernetes service, combining the power of Kubernetes with AWS's scalability and integration. It offers easy cluster management, supports the latest Kubernetes versions, and integrates with AWS services like ELB and IAM. EKS taps into Kubernetes's extensive ecosystem, providing access to a wealth of tools and community support for complex orchestration needs.



ECS vs EKS Comparison

When comparing ECS and EKS, several factors come into play, including ease of use, deployment complexity, security features, and cloud-agnostic capabilities. ECS excels in simplicity and integration with AWS services, making it ideal for straightforward applications or those heavily reliant on AWS. On the other hand, EKS offers more flexibility, an extensive ecosystem, and compatibility with Kubernetes, suitable for complex or cloud-agnostic applications.

| Feature/Aspect | AWS ECS (Elastic Container Service) | AWS EKS (Elastic Kubernetes Service) |
| --- | --- | --- |
| Workload Type | Microservices, monoliths & containerized workloads | Containerized & microservices applications |
| Ease of Use | Simpler than Kubernetes; AWS provides more deployment options | Can be more complex to set up than ECS |
| Deployment | Primarily AWS-supported tools such as CloudFormation, Terraform, and CI/CD pipelines such as CodeDeploy | Apart from Terraform, CloudFormation, and CodeDeploy support, broader industry support such as ArgoCD, Rancher, etc. |
| Service Discovery | ECS native | Service mesh setup using Istio, Cilium, Open Service Mesh, etc. |
| Security | Native integration with AWS services such as IAM roles, KMS, etc. | Apart from native Kubernetes controls, also integrates seamlessly with AWS services |
| Resource Control | Resources managed via services, tasks, capacity providers, and auto-scaling | Pod and node setup |
| Cost Model | Pay for the EC2 or Fargate capacity you use | Similar to ECS (EC2 or Fargate), plus a per-cluster control-plane fee |
| Integration with CI/CD | Seamless integration with AWS CodePipeline, GitHub Actions, etc. | Similar AWS integrations, but many more third-party options are available |
| Customizability | Simpler and more opinionated; fewer knobs to turn | Full Kubernetes extensibility (CRDs, Helm charts, operators) |
| Use Cases | Well-suited for microservices, batch processing, and simple applications, offering a quick and easy setup | Caters to more complex scenarios, hybrid environments, and applications requiring Kubernetes's rich feature set and community support |

Cost, complexity, and integration with existing tools and workflows should also influence your choice between ECS and EKS.


Which should you choose: ECS or EKS?

Choosing between ECS and EKS depends on your specific requirements, such as application complexity, anticipated growth, and whether you need a cloud-agnostic solution. ECS offers simplicity and deep AWS integration, while EKS provides flexibility and a broad ecosystem in which many third-party systems support EKS cluster setup and management, making it the more cloud-agnostic option. Consider your non-functional requirements, future growth expectations, and enterprise cloud strategy to make the best choice for your organization.


🔚 Call to Action


Choosing the right platform depends on your organization's needs. For more insights on cloud computing, tips, and the latest trends in technology, subscribe to our newsletter, or follow our video series on cloud comparisons.


Interested in getting your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.


💬 Comment below:
Which tool is your favorite? What do you want us to review next?