
2 posts tagged with "Automation"


Deploy n8n on Azure App Service (Portal Guide)

· 9 min read

This guide walks you through what n8n is, why it’s so popular, and then the click-through Azure Portal steps to deploy it cleanly on Azure App Service for Linux (Web App for Containers). The recipe below is the “known-good” baseline we used successfully, including the exact App Settings and Health Check path that make App Service happy.


What is n8n?

n8n is an open-source workflow automation platform. Think of it as a visual way to connect APIs, databases, and services together—like LEGO for integrations.

  1. Visual workflow builder (drag-and-drop nodes)
  2. Massive integration surface (HTTP, databases, clouds, apps)
  3. Self-hostable (no vendor lock-in), extensible, and scriptable
  4. Great for low-code automation, yet friendly to developers too

Why is n8n popular?

  1. Open source & vendor-neutral — run it where you want.
  2. Low-code UX — business users can compose automations; devs can extend.
  3. Cost-effective — keep control of infra, cost, and privacy.
  4. Dev-friendly — add custom nodes, call APIs, integrate with CI/CD.

Why run n8n on Azure App Service?

Azure App Service is a PaaS that gives you:

  1. Easy container hosting (bring a public Docker image, set a port)
  2. Scale & reliability (scale up/out without re-architecting)
  3. Built-in monitoring/security (App Insights, access restrictions, TLS)
  4. CI/CD support and managed platform updates

In short: you focus on n8n; Azure handles the undifferentiated heavy lifting.


Slide-style outline

  1. What is n8n? — Open-source automation with a visual builder and tons of integrations.
  2. Why is n8n popular? — Open, flexible, low-code + dev-friendly. Great for demos & production.
  3. Why Azure? — Scalable, secure, integrated monitoring, easy CI/CD, full container control.
  4. Deployment Overview — Create RG → Create App Service (Linux/Container) → Set container image → Configure port/env → Health check → Start.
  5. Environment Variables — Key vars to make Azure reverse proxy and n8n happy.
  6. Networking & Monitoring — Optional VNet integration; enable App Insights.
  7. Recap — Pull image, set app port 5678, env vars, health check, restart → done.

Architecture at a glance

This diagram shows how a user’s HTTPS request reaches an n8n container running on Azure App Service (Linux), and which platform components and settings make it work reliably.

n8n Reference Architecture

What you’re seeing

  1. User Browser → Azure Front End
    The Azure Front End terminates TLS and routes traffic to your container.

  2. App Service Plan (Linux)
    Hosts two containers:

    1. Kudu sidecar (8181) for SSH, log streaming, and management.
    2. n8n container listening on 0.0.0.0:5678 to serve the editor/API.

  3. Routing & Health

    1. Requests are forwarded to the n8n container on port 5678.
    2. A health probe targets /healthz to keep the site warm and enable SSH.

Key takeaways (match these in your config)

  1. Tell App Service the app port:
    WEBSITES_PORT=5678, PORT=5678
  2. Bind n8n to IPv4 inside the container:
    N8N_LISTEN_ADDRESS=0.0.0.0
  3. Set a health check path:
    Monitoring → Health check → /healthz
  4. Platform hygiene:
    Always On = On (B1+), Startup Command = empty,
    WEBSITES_ENABLE_APP_SERVICE_STORAGE=true

Public URL variables (add after first successful boot)

  1. N8N_PROTOCOL=https
  2. N8N_HOST=your-app.azurewebsites.net
  3. WEBHOOK_URL=https://your-app.azurewebsites.net/

If you see timeouts, first confirm App port = 5678 in Container settings and the two app settings WEBSITES_PORT + PORT are set to 5678, then re-check the health path.
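If you prefer checking these values from a terminal, here is a small Azure CLI sketch; the app name n8n-portal-demo and resource group n8n-rg are placeholders used throughout this guide:

# List only the port-related app settings
az webapp config appsettings list \
  --name n8n-portal-demo --resource-group n8n-rg \
  --query "[?name=='WEBSITES_PORT' || name=='PORT']" -o table

# Show the container configuration (image, registry server)
az webapp config container show \
  --name n8n-portal-demo --resource-group n8n-rg -o table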


Prerequisites

  1. An Azure subscription with permission to create resource groups and App Service resources
  2. A publicly accessible Docker image: n8nio/n8n (we’ll pin a version)
  3. Basic familiarity with Azure Portal

Step-by-Step (Azure Portal)

This is the exact flow that worked end-to-end. We keep the config minimal first so the platform’s startup probe passes, then add optional variables.

0) Create a Resource Group

  1. Azure Portal → Resource groups → + Create
  2. Resource group name: n8n-rg (or your choice)
  3. Region: choose a region near your users
  4. Review + create → Create
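If you script your environments, the CLI equivalent of this step is a one-liner (the region is just an example):

# Create the resource group that will hold all n8n resources
az group create --name n8n-rg --location eastus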

1) Create the App Service (Linux, Container)

  1. Azure Portal → Create a resource → App Service
  2. Project details
    1. Subscription: select your sub
    2. Resource Group: select the RG you just created
  3. Instance details
    1. Name: e.g., n8n-portal-demo → your default URL will look like
      https://n8n-portal-demo.azurewebsites.net
      (Some regions append -01 automatically; use whatever Azure shows.)
    2. Publish: Container
    3. Operating System: Linux
  4. Plan
    1. Create new App Service plan (Linux)
    2. SKU: B1 or higher (enables Always On)
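For repeatable setups, roughly the same result can be achieved with the Azure CLI. The sketch below assumes the placeholder names from this guide; note that newer CLI versions rename the container-image flag, so adjust to your CLI version:

# Linux App Service plan, B1 or higher so Always On is available
az appservice plan create \
  --name n8n-plan --resource-group n8n-rg \
  --is-linux --sku B1

# Web App for Containers pointing at a pinned public image
# (newer CLI versions use --container-image-name instead)
az webapp create \
  --name n8n-portal-demo --resource-group n8n-rg --plan n8n-plan \
  --deployment-container-image-name docker.io/n8nio/n8n:1.108.2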

2) Container (image) settings

  1. Image source: Other container registries
  2. Registry: Public
  3. Server URL: https://index.docker.io
  4. Image and tag: n8nio/n8n:1.108.2 (pin a known version; avoid latest initially)
  5. App port: 5678 (critical)

Save and continue to create the Web App.
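If you later want to re-pin or change the image without going through the portal, something like the following works; flag names differ slightly across CLI versions, so treat this as a sketch:

# Point the Web App at a specific image tag on Docker Hub
az webapp config container set \
  --name n8n-portal-demo --resource-group n8n-rg \
  --docker-custom-image-name n8nio/n8n:1.108.2 \
  --docker-registry-server-url https://index.docker.io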

3) Minimal App Settings to boot successfully

App Service → Configuration → Application settings → + New setting
Add exactly these (keep others out for now to reduce variables):

Name | Value | Why
WEBSITES_PORT | 5678 | Tells the front end which port the container listens on
PORT | 5678 | Some stamps honor PORT; harmless to set both
WEBSITES_ENABLE_APP_SERVICE_STORAGE | true | Enables the persistent storage area for App Service
WEBSITES_CONTAINER_START_TIME_LIMIT | 1200 | Gives the startup probe more time on first boot
N8N_LISTEN_ADDRESS | 0.0.0.0 | Ensures an IPv4 bind that App Service can reach

Save the settings.
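The same minimal settings can be applied in one CLI call (names are the placeholders used in this guide):

# Apply only the minimal settings needed for the first successful boot
az webapp config appsettings set \
  --name n8n-portal-demo --resource-group n8n-rg \
  --settings \
    WEBSITES_PORT=5678 \
    PORT=5678 \
    WEBSITES_ENABLE_APP_SERVICE_STORAGE=true \
    WEBSITES_CONTAINER_START_TIME_LIMIT=1200 \
    N8N_LISTEN_ADDRESS=0.0.0.0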

4) Health check

  1. Monitoring → Health check
  2. Path: /healthz
  3. Save

n8n exposes /healthz and returns 200; this helps the startup probe pass quickly.
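The health check path can also be set from the CLI. healthCheckPath is the underlying site-config property; the generic-configurations form below is one way to reach it, offered as a sketch:

# Set the health check path on the site configuration
az webapp config set \
  --name n8n-portal-demo --resource-group n8n-rg \
  --generic-configurations '{"healthCheckPath": "/healthz"}'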

5) General Settings

  1. Settings → Configuration → General settings
  2. Always On: On (B1 or higher)
  3. Startup Command: (empty) — the official n8nio/n8n image starts itself
  4. HTTPS Only: On (recommended)
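CLI equivalents for these toggles, assuming the same placeholder app name:

# Always On (requires B1 or higher)
az webapp config set --name n8n-portal-demo --resource-group n8n-rg --always-on true

# Redirect HTTP to HTTPS
az webapp update --name n8n-portal-demo --resource-group n8n-rg --https-only true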

6) Full recycle

  1. Click Stop (wait ~20–30 seconds) → Start the app
    (Stop/Start forces re-creation and re-probing; Restart sometimes isn’t enough.)
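The same full recycle from the CLI:

az webapp stop --name n8n-portal-demo --resource-group n8n-rg
sleep 30   # give the platform a moment to tear the container down
az webapp start --name n8n-portal-demo --resource-group n8n-rg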

7) Test the default URL

  1. Open: https://your-app-name.azurewebsites.net (If your region adds -01, your app host will be ...azurewebsites.net with that suffix; use the exact URL shown in the Overview or logs.)

If it’s reachable, congrats — the container is live and the platform probes are passing. Now add the optional “public URL” variables.
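A quick external smoke test (replace the host with the exact one Azure shows for your app):

# 200 from /healthz means the container is up and the probe path is correct
curl -I https://n8n-portal-demo.azurewebsites.net/healthz

# The editor itself should answer on the root path
curl -I https://n8n-portal-demo.azurewebsites.net/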


Add n8n “public URL” variables (after it’s reachable)

Back to Configuration → Application settings and add:

Name | Value
N8N_PORT | 5678
N8N_PROTOCOL | https
N8N_HOST | <your-app>.azurewebsites.net
WEBHOOK_URL | https://<your-app>.azurewebsites.net/

Save → Restart.

If you add these too early and hit a redirect/host check during probe, the app can flap. That’s why we start minimal, then add them.
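Once the site is reachable, the public URL variables can be added in one CLI call and followed by a restart (the host name below is a placeholder):

az webapp config appsettings set \
  --name n8n-portal-demo --resource-group n8n-rg \
  --settings \
    N8N_PORT=5678 \
    N8N_PROTOCOL=https \
    N8N_HOST=n8n-portal-demo.azurewebsites.net \
    WEBHOOK_URL=https://n8n-portal-demo.azurewebsites.net/

az webapp restart --name n8n-portal-demo --resource-group n8n-rg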


Optional security and hardening variables

Add these as needed:

Name | Suggested Value | Why
N8N_ENCRYPTION_KEY | a long random string (32+ chars) | Encrypts credentials on disk
N8N_BASIC_AUTH_ACTIVE | true | Basic auth for the editor
N8N_BASIC_AUTH_USER | <user> |
N8N_BASIC_AUTH_PASSWORD | <strong-password> |
DB_SQLITE_POOL_SIZE | 1 | Satisfies the deprecation warning for SQLite
N8N_RUNNERS_ENABLED | true | Enables task runners (forward-compat)
N8N_EDITOR_BASE_URL | https://<your-app>.azurewebsites.net/ | Explicit editor base URL
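A hedged example of generating and applying the encryption key and basic-auth settings; the user and password values are placeholders, and you should store the key somewhere safe, since losing it makes saved credentials unreadable:

# Generate a 64-character hex key for N8N_ENCRYPTION_KEY
N8N_KEY=$(openssl rand -hex 32)

az webapp config appsettings set \
  --name n8n-portal-demo --resource-group n8n-rg \
  --settings \
    N8N_ENCRYPTION_KEY="$N8N_KEY" \
    N8N_BASIC_AUTH_ACTIVE=true \
    N8N_BASIC_AUTH_USER=admin \
    N8N_BASIC_AUTH_PASSWORD='use-a-strong-password-here'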

Using PostgreSQL instead of SQLite (prod)

Provision Azure Database for PostgreSQL – Flexible Server, then set:

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=<host>
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=<db>
DB_POSTGRESDB_USER=<user>
DB_POSTGRESDB_PASSWORD=<password>
DB_POSTGRESDB_SCHEMA=public
DB_POSTGRESDB_SSL=true

Keep WEBSITES_ENABLE_APP_SERVICE_STORAGE=true even with Postgres so n8n can persist local files it needs.
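Provisioning the Flexible Server itself can also be scripted. This is only a sketch with placeholder names, sizes, and versions; adjust to your own requirements, then apply the DB_* settings above with az webapp config appsettings set as before:

# Minimal PostgreSQL Flexible Server for n8n (placeholder values)
az postgres flexible-server create \
  --resource-group n8n-rg \
  --name n8n-db \
  --admin-user n8nadmin \
  --admin-password '<strong-password>' \
  --tier Burstable --sku-name Standard_B1ms \
  --version 16 \
  --database-name n8n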


How to find your URL

  1. App Service Overview page shows the default URL.
  2. Logs may also echo it (e.g., Editor is now accessible via: https://app...azurewebsites.net).
  3. Some Azure regions append -01 in the hostname automatically—use the exact host Azure gives you.

Verify & Logs

  1. Log stream: App Service → Logs → Log stream to watch container output.
  2. SSH: App Service → SSH (works once the startup probe passes).
  3. Inside the container you can check:
ss -ltnp | grep 5678
curl -I http://127.0.0.1:5678/
curl -I http://127.0.0.1:5678/healthz
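Log streaming also works from a local terminal if you have the Azure CLI installed:

# Ensure container logs are written to the filesystem, then stream them
az webapp log config --name n8n-portal-demo --resource-group n8n-rg --docker-container-logging filesystem
az webapp log tail --name n8n-portal-demo --resource-group n8n-rg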

Troubleshooting (common gotchas)

  1. Site times out
    Ensure App port = 5678 (Container settings) and App Settings include WEBSITES_PORT=5678 and PORT=5678. Start with the minimal settings list; add N8N_HOST/PROTOCOL only after it’s reachable.

  2. SSH session closes immediately
    The startup probe is failing and the site keeps recycling. Trim to minimal settings, pin the image tag (e.g., n8nio/n8n:1.108.2), set Health check to /healthz, then Stop/Start.

  3. “InvalidTemplateDeployment / SubscriptionIsOverQuotaForSku”
    Your region/SKU quota is 0. Pick a different region or SKU, or request a quota increase (App Service vCPU) for that region.

  4. Using latest tag
    New latest builds may change behavior. Pin a version while you validate.

  5. Access Restrictions
    If enabled, ensure a rule allows public access to the site during testing.


Recap

  1. n8n is an open-source automation powerhouse with a visual builder and endless integrations.
  2. Azure App Service gives you a simple, scalable, secure home for the n8n container.
  3. The key to a painless deployment is: App port 5678, WEBSITES_PORT/PORT = 5678, N8N_LISTEN_ADDRESS=0.0.0.0, and a Health check at /healthz.
  4. Start minimal so the platform stabilizes, then layer on the public URL and security vars.

Happy automating! 🚀

Call to Action

Choosing the right platform depends on your organization’s goals and constraints. For ongoing tips and deep dives on cloud computing, subscribe to our newsletter. Prefer video? Follow our series on cloud comparisons.

Ready to deploy n8n on Azure—or set up your broader cloud foundation? Contact us and we’ll help you plan, secure, and ship with confidence.

Step-by-Step Guide: Install and Configure GitLab on AWS EC2 | DevOps CI/CD with GitLab on AWS

· 6 min read

Introduction

This document outlines the steps taken to deploy and configure GitLab Runners, including the installation of Terraform, ensuring that the application team can focus solely on writing pipelines.

Architecture

The following diagram displays the solution architecture.

Architecture

AWS CloudFormation is used to create the infrastructure hosting the GitLab Runner. The main steps are as follows:

  1. The user runs a deploy script to deploy the CloudFormation template. The template is parameterized, and the parameters are defined in a properties file. The properties file specifies the infrastructure configuration and the environment in which to deploy the template.
  2. The deploy script calls CloudFormation CreateStack API to create a GitLab Runner stack in the specified environment.
  3. During stack creation, an EC2 autoscaling group is created with the desired number of EC2 instances. Each instance is launched via a launch template created with values from the properties file. An IAM role is created and attached to the EC2 instance, containing permissions required for the GitLab Runner to execute pipeline jobs. A lifecycle hook is attached to the autoscaling group on instance termination events, ensuring graceful instance termination.
  4. During instance launch, GitLab Runner will be configured and installed. Terraform, Git, and other software will also be installed as needed.
  5. The user may repeat the same steps to deploy GitLab Runner into another environment.

Infrastructure Setup with CloudFormation

Customizing the CloudFormation Template

The initial step in deploying GitLab Runners involved setting up the infrastructure using AWS CloudFormation. The standard CloudFormation template was customized to fit the unique requirements of the environment.

CloudFormation Template Location: GitLab Runner Template

CloudFormation Template Location: GitLab Runner Scaling Group / Cluster Template

For any automation requirements or issues, please reach out to us via the Contact Us page.

Parameters used:

Parameters

Deploying the CloudFormation Stack

To deploy the CloudFormation stack, use the following command. This command assumes you have AWS CLI configured with the appropriate credentials:

aws cloudformation create-stack --stack-name amazon-ec2-gitlab-runner-demo1 --template-body file://gitlab-runner.yaml --capabilities CAPABILITY_NAMED_IAM
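Because the template is parameterized via a properties file, you will typically also pass --parameters; the file name and parameter keys below are only illustrative:

aws cloudformation create-stack \
  --stack-name amazon-ec2-gitlab-runner-demo1 \
  --template-body file://gitlab-runner.yaml \
  --parameters file://gitlab-runner-parameters.json \
  --capabilities CAPABILITY_NAMED_IAM

# gitlab-runner-parameters.json (illustrative shape only):
# [
#   { "ParameterKey": "InstanceType",    "ParameterValue": "t3.medium" },
#   { "ParameterKey": "DesiredCapacity", "ParameterValue": "1" }
# ]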

To update the stack, use the following command:

aws cloudformation update-stack --stack-name amazon-ec2-gitlab-runner-demo1 --template-body file://gitlab-runner.yaml --capabilities CAPABILITY_NAMED_IAM

This command will provision a CloudFormation stack similar to the table shown below:

Logical ID | Physical ID | Type
ASGBucketPolicy | arn:aws:iam::your-account-id:policy/amazon-ec2-gitlab-runner-RnrASG-1TE6FTX28FEDB-ASGBucketPolicy | AWS::IAM::ManagedPolicy
ASGInstanceProfile | amazon-ec2-gitlab-runner-RnrASG-1TE6FTX28FEDB-ASGInstanceProfile-MM31yammSlL2 | AWS::IAM::InstanceProfile
ASGLaunchTemplate | lt-0ae6b1f22e6fb59d3 | AWS::EC2::LaunchTemplate
ASGRebootRole | amazon-ec2-gitlab-runner-RnrASG-1TE6F-ASGRebootRole-qY5TrCFgM17Z | AWS::IAM::Role
ASGSelfAccessPolicy | arn:aws:iam::your-account-id:policy/amazon-ec2-gitlab-runner-RnrASG-1TE6FTX28FEDB-ASGSelfAccessPolicy | AWS::IAM::ManagedPolicy
CFCustomResourceLambdaRole | amazon-ec2-gitlab-runner-CFCustomResourceLambdaRol-QGhwhUWsmzOs | AWS::IAM::Role
EC2SelfAccessPolicy | arn:aws:iam::your-account-id:policy/amazon-ec2-gitlab-runner-RnrASG-1TE6FTX28FEDB-EC2SelfAccessPolicy | AWS::IAM::ManagedPolicy
InstanceASG | amazon-ec2-gitlab-runner-RnrASG-1TE6FTX28FEDB-InstanceASG-o3DHi2HsGB7Y | AWS::AutoScaling::AutoScalingGroup
LookupVPCInfo | 2024/08/09/[$LATEST]74897306b3a74abd98a9c637a27c19a7 | Custom::VPCInfo
LowerCasePlusRandomLambda | amazon-ec2-gitlab-runner-LowerCasePlusRandomLambd-oGUYEJJRIG0O | AWS::Lambda::Function
S3BucketNameLower | 2024/08/09/[$LATEST]e3cb7909bd224ab594c81514708e7827 | Custom::Lowercase
VPCInfoLambda | amazon-ec2-gitlab-runner-RnrASG-1TE6-VPCInfoLambda-kL65a1M75SYR | AWS::Lambda::Function

Shell-Based Installation Approach

Rather than using Docker, you can use the shell executor in your environment and install GitLab Runner and Terraform directly on the EC2 instances. Using the shell executor rather than containers provides the following benefits:

  • Simpler Debugging: Direct installation via shell scripts simplifies the debugging process. If something goes wrong, engineers can SSH into the instance and troubleshoot directly rather than dealing with Docker container issues.
  • Performance Considerations: Running the runner directly on the EC2 instance reduces the overhead introduced by containerization, potentially improving performance.

Installation Commands

Below are the key commands used in the shell script for installing GitLab Runner and Terraform:

#!/bin/bash
# Update and install necessary packages
yum update -y
yum install -y amazon-ssm-agent git unzip wget jq

# Install Terraform
wget https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip
unzip terraform_1.0.11_linux_amd64.zip
mv terraform /usr/local/bin/

# Install GitLab Runner
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
sudo chmod +x /usr/local/bin/gitlab-runner
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start

# Add the gitlab-runner home directory to PATH and reload the shell profile
echo 'export PATH=$PATH:/home/gitlab-runner' >> ~/.bashrc
source ~/.bashrc

Configuration and Usage

Registering the GitLab Runner

Once the GitLab Runner is installed, it needs to be registered with your GitLab instance. This process can be automated or done manually. Below is an example of how you can register the runner using the gitlab-runner register command:

gitlab-runner register \
--non-interactive \
--url "https://gitlab.com/" \
--registration-token "YOUR_REGISTRATION_TOKEN" \
--executor "shell" \
--description "GitLab Runner" \
--tag-list "shell,sgkci/cd" \
--run-untagged="true" \
--locked="false"

A simple command:

sudo gitlab-runner register --url https://gitlab.com/ --registration-token <Your registration token>

Example:
sudo gitlab-runner register --url https://gitlab.com/ --registration-token GR1348941Du4BazUzERU5M1m_LeLU

This command registers the GitLab Runner to your GitLab project, allowing it to execute CI/CD pipelines directly on the EC2 instance using the shell executor.
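To confirm the registration took effect on the instance, a couple of built-in gitlab-runner commands help:

# List the runners configured on this host (reads /etc/gitlab-runner/config.toml when run as root)
sudo gitlab-runner list

# Verify that each registered runner can still authenticate against GitLab
sudo gitlab-runner verify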

Attaching Runner to GitLab Repo

Attaching Runner

Navigate to Repo → Settings → CI/CD. Your runner should show up; click "Enable for this project" to make it available to the repository.

Note: To make sure the runner picks up your job, ensure the right tag is in place; you may also need to disable the Instance Runners.


🔚 Call to Action

Choosing the right platform depends on your organization's needs. For more insights on cloud computing, tips, and the latest trends in technology, subscribe to our newsletter or follow our video series on cloud comparisons.

Interested in getting your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.

💬 Comment below:
Which tool is your favorite? What do you want us to review next?