6 posts tagged with "Lambda"

Watch Your Fleet Move LIVE - Asset Tracking with Amazon Location & IoT Core

· 6 min read

Arina Technologies · Cloud & AI Engineering

Asset tracking is essential in modern logistics and supply chain operations. Knowing where assets such as trucks or delivery vehicles are located can significantly enhance operational efficiency, reduce costs, and prevent losses. In this detailed walkthrough, we'll explore Amazon Location Service, its use cases, and how to set up a fully functional asset tracking application integrated with AWS IoT Core.


🎯 What is Amazon Location?​


Amazon Location is a managed service from AWS that allows developers to add location data and functionality such as maps, geocoding, routing, geofencing, and tracking into their applications. It sources data from trusted global providers like Esri, HERE, and OpenStreetMap.


Key Features:​


  1. Maps & Geospatial Visualization
  2. Real-Time Tracking
  3. Geofence Monitoring
  4. Cost-effective location solutions

Use cases include:


  1. Fleet tracking
  2. Delivery route optimization
  3. Asset protection
  4. Consumer app geolocation

πŸ“Œ Use Cases​


Geofencing and Proximity-Based Alerts​


  1. Use Case: Setting up virtual boundaries (geofences) around specific areas and triggering actions or notifications when devices or users enter or exit these zones.
  2. Benefit: Security alerts (e.g., unauthorized entry into a restricted area), location-based marketing (e.g., promotional offers to nearby customers), and workflow automation (e.g., clocking field employees in and out). A retail store could notify users when they enter a geofence around the store; see the sketch below.
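As a concrete illustration, here is a minimal boto3 sketch of creating such a geofence. The collection name, geofence ID, and coordinates are hypothetical placeholders, not values from the sample project.

import boto3

location = boto3.client("location")

# Create a geofence polygon around a hypothetical store.
# Coordinates are [longitude, latitude]; the ring must be closed
# (first and last vertex identical) and wound counter-clockwise.
location.put_geofence(
    CollectionName="StoreGeofences",   # assumed collection name
    GeofenceId="downtown-store",       # assumed geofence ID
    Geometry={
        "Polygon": [[
            [-77.0369, 38.9072],
            [-77.0359, 38.9072],
            [-77.0359, 38.9082],
            [-77.0369, 38.9082],
            [-77.0369, 38.9072],
        ]]
    },
)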

Real-time Asset Tracking and Management​


  1. Use Case: Businesses with fleets of vehicles, equipment, or personnel can track their real-time locations on a map.
  2. Benefit: Improved operational efficiency, optimized routing, enhanced security, and better resource allocation. For example, dispatching the nearest available driver for a delivery.

Route Planning and Optimization​


  1. Use Case: Calculating optimal routes for navigation, considering traffic, road closures, and preferred transport modes.
  2. Benefit: Reduced travel time, lower fuel costs, improved delivery efficiency, and better user guidance. A minimal route calculation is sketched below.
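For illustration, a route request with boto3 might look like the following. The calculator name and coordinates are hypothetical placeholders.

import boto3

location = boto3.client("location")

# Calculate a truck route between two [longitude, latitude] positions.
route = location.calculate_route(
    CalculatorName="SampleCalculator",      # assumed route calculator resource
    DeparturePosition=[-122.3321, 47.6062],
    DestinationPosition=[-122.2006, 47.6101],
    TravelMode="Truck",
)
summary = route["Summary"]
print(summary["Distance"], summary["DistanceUnit"], summary["DurationSeconds"])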


🧱 Architecture Overview​


To better understand the technical setup and flow, let's break down the detailed architecture used in this asset tracking solution. This architecture supports not only real-time tracking but also historical location data, scalable device ingestion, and geofence event handling.


Core Components:​


  1. Amazon Location Service: Provides maps, geofences, and trackers.
  2. AWS IoT Core: Acts as the entry point for location data using MQTT.
  3. Amazon Kinesis Data Streams: Streams live device location data for processing.
  4. AWS Lambda: Used for transforming data and invoking downstream services like Amazon Location or notifications.
  5. Amazon SNS: Sends real-time alerts or notifications to subscribed users (e.g., when a geofence is breached).
  6. Amazon Cognito: Authenticates users for frontend access and API interactions.
  7. Amazon CloudFront + S3: Hosts the web-based frontend application securely and globally.

Data Flow:​


  1. A GPS-enabled device or simulation sends a location update to AWS IoT Core using MQTT.
  2. The update is routed to Kinesis Data Streams for real-time processing.
  3. An AWS Lambda function processes the Kinesis records and forwards the location to the Amazon Location Tracker.
  4. If the location triggers a geofence event, another Lambda function can be used to publish a message to Amazon SNS.
  5. SNS sends out a notification to subscribers, such as mobile users, application dashboards, or administrators.
  6. The frontend web application, hosted on S3 + CloudFront, visualizes live and historical positions by querying Amazon Location services directly using the credentials from Amazon Cognito.

The architecture consists of Amazon Location for geospatial services, AWS Lambda for processing events, and Amazon SNS to send notifications to end users.
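To make step 3 concrete, here is a minimal sketch of the Kinesis-triggered Lambda processor. The tracker name and the payload field names (deviceId, longitude, latitude) are assumptions about the message format, not code taken from the sample project.

import base64
import json
from datetime import datetime, timezone

import boto3

location = boto3.client("location")
TRACKER_NAME = "SampleTracker"  # assumed tracker name

def lambda_handler(event, context):
    # Each Kinesis record carries a base64-encoded device position payload,
    # assumed here to look like {"deviceId": "...", "longitude": ..., "latitude": ...}
    updates = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        updates.append({
            "DeviceId": payload["deviceId"],
            "Position": [payload["longitude"], payload["latitude"]],
            "SampleTime": datetime.now(timezone.utc),
        })
    if updates:
        # The tracker API accepts at most 10 position updates per call
        location.batch_update_device_position(
            TrackerName=TRACKER_NAME,
            Updates=updates[:10],
        )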


Sample Architecture Diagram


🛠 Setting Up the Project


To demonstrate Amazon Location's capabilities, we'll build a web application that displays current and historical locations of assets. We'll simulate an IoT device and stream location updates to AWS using MQTT.


1. Clone the Sample Project​


git clone https://github.com/aws-solutions-library-samples/guidance-for-tracking-assets-and-locating-devices-using-aws-iot.git --recurse-submodules
cd guidance-for-tracking-assets-and-locating-devices-using-aws-iot

2. Install Frontend Dependencies​


cd amazon-location-samples-react/tracking-data-streaming
npm install

3. Deploy Location Infrastructure​


chmod +x deploy_cloudformation.sh && export AWS_REGION=<your region> && ./deploy_cloudformation.sh

4. Deploy IoT Core Resources​


cd ../../cf
aws cloudformation create-stack --stack-name TrackingAndGeofencingIoTResources \
--template-body file://iotResources.yml \
--capabilities CAPABILITY_IAM

🖼 Configuring the Frontend


Get the CloudFormation stack outputs:


aws cloudformation describe-stacks \
--stack-name TrackingAndGeofencingSample \
--query "Stacks[0].Outputs[*].[OutputKey, OutputValue]"

Set values in configuration.js accordingly:


export const READ_ONLY_IDENTITY_POOL_ID = "us-east-1:xxxx...";
export const WRITE_ONLY_IDENTITY_POOL_ID = "us-east-1:xxxx...";
export const REGION = "us-east-1";
export const MAP = {
  NAME: "TrackingAndGeofencingSampleMapHere",
  STYLE: "VectorHereExplore"
};
export const GEOFENCE = "TrackingAndGeofencingSampleCollection";
export const TRACKER = "SampleTracker";
export const DEVICE_POSITION_HISTORY_OFFSET = 3600;
export const KINESIS_DATA_STREAM_NAME = "TrackingAndGeofencingSampleKinesisDataStream";

Start the frontend locally:

npm start

Navigate to http://localhost:8080 to see your live map.


🌐 Hosting on CloudFront​


1. Create S3 Bucket​


  1. Go to S3 Console > Create Bucket
  2. Use a unique bucket name

2. Build Frontend​


npm run build

3. Upload to S3​


aws s3 cp ./build s3://<your-bucket-name>/ --recursive

4. Create CloudFront Distribution​


  1. Origin: S3 Bucket
  2. Create a new OAC (Origin Access Control)
  3. Enable WAF protections

5. Update S3 Bucket Policy​


Paste in the policy suggested by CloudFront for the OAC.


Access your site at:


https://<your-distribution>.cloudfront.net/index.html

🔄 Extend with Real Devices


This tutorial used MQTT message simulation. For real-world scenarios:

  1. Use GPS-enabled IoT devices
  2. Integrate with certified hardware listed in the AWS Partner Device Catalog

✅ Summary


In this blog, we:

  1. Introduced Amazon Location Service
  2. Simulated IoT data with AWS IoT Core
  3. Visualized tracking in a React app
  4. Hosted it with Amazon S3 + CloudFront

This powerful combination enables real-time tracking for logistics, delivery, field ops, and more.


🙌 Final Thoughts


Whether you are building internal logistics tools or customer-facing tracking apps, Amazon Location and AWS IoT Core offer a scalable, cost-effective foundation. Try building this project and tailor it to your business use case!


🔚 Call to Action


Choosing the right platform depends on your organization's needs. For more insights, subscribe to our newsletter for cloud computing tips and the latest trends in technology, or follow our video series on cloud comparisons.


Need help launching your app on AWS? Visit arinatechnologies.com for expert help in cloud architecture.


Interested in having your organization set up on the cloud? If yes, please contact us and we'll be more than glad to help you embark on your cloud journey.


💬 Comment below:
How do you plan to use Amazon Location?

Automate Your OpenSearch/Elasticsearch Backups with S3 and Lambda: A Complete Guide

· 10 min read

In the world of data management and cloud computing, ensuring data security through regular backups is crucial. OpenSearch and Elasticsearch provide robust mechanisms to back up data using snapshots, offering several approaches to cater to different operational needs. This blog post will walk you through setting up and managing snapshots using AWS, with detailed steps for both beginners and advanced users.



Introduction to Snapshots in OpenSearch and Elasticsearch​


Snapshots are point-in-time backups of your OpenSearch or Elasticsearch data. By taking snapshots at regular intervals, you can ensure your data is always backed up, which is especially important in production environments. Snapshots can be scheduled to run automatically, whether hourly, daily, or at another preferred frequency, making it easy to maintain a stable backup routine.


Setting Up an OpenSearch Cluster on AWS​


Before diving into snapshot creation, it's essential to set up an OpenSearch cluster. Here is how:

  1. AWS Console Access: Begin by logging into your AWS Console and navigating to OpenSearch.
  2. Cluster Creation: Create a new OpenSearch domain (essentially your cluster) using the "Easy Create" option. This option simplifies the setup process, especially for demonstration or learning purposes.
  3. Instance Selection: For this setup, select a smaller instance size if you are only exploring OpenSearch features and don't require high memory or compute power. For this demo, an m5.large instance with minimal nodes is sufficient.

Configuring the Cluster​


When configuring the cluster, adjust the settings according to your requirements:


Memory and Storage


  1. Memory and Storage: Set minimal storage (e.g., 10 GB) to avoid unnecessary costs.
  2. Node Count: Choose a single-node setup if you are only testing the system.
  3. Access Control: For simplicity, keep public access open, though in production, you should configure a VPC and control access strictly.

Snapshot Architecture: AWS Lambda and S3 Buckets​


 Snapshot Architecture


AWS provides a serverless approach to managing snapshots via Lambda and S3 buckets. Here is the basic setup:

  1. Create an S3 Bucket: This bucket will store your OpenSearch snapshots.

S3 Bucket


  2. Lambda Function for Snapshot Automation: Use AWS Lambda to automate the snapshot process. Configure the Lambda function to run daily or at a frequency of your choice, ensuring backups are consistent and reliable.

Lambda Function


Writing the Lambda Code​


For the Lambda function, Python is a convenient choice, but you can choose other languages as well. The Lambda function will connect to OpenSearch, initiate a snapshot, and store it in the S3 bucket. Here is a simple breakdown of the code structure:

import boto3, os, time
import requests
from requests_aws4auth import AWS4Auth
from datetime import datetime
import logging

from requests.adapters import HTTPAdapter, Retry

# Set the global variables
# host must include https:// and a trailing /
host = str(os.getenv('host'))
region = str(os.getenv('region', 'eu'))
s3Bucket = str(os.getenv('s3Bucket'))
s3_base_path = str(os.getenv('s3_base_path', 'daily'))
s3RepoName = str(os.getenv('s3RepoName'))
roleArn = str(os.getenv('roleArn'))

service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service,
                   session_token=credentials.token)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Colons are not allowed in snapshot names, so the timestamp uses dashes
    datestamp = datetime.now().strftime('%Y-%m-%dt%H-%M-%S')

    # Register the snapshot repository via the OpenSearch/Elasticsearch API endpoint
    path = '_snapshot/' + s3RepoName
    url = host + path

    snapshotName = 'snapshot-' + datestamp

    # Settings for us-east-1; adjust the endpoint if you use another region
    payload = {
        "type": "s3",
        "settings": {
            "bucket": s3Bucket,
            "base_path": s3_base_path,
            "endpoint": "s3.amazonaws.com",
            "role_arn": roleArn
        }
    }

    headers = {"Content-Type": "application/json"}

    r = requests.put(url, auth=awsauth, json=payload, headers=headers)

    print(r.status_code)
    print(r.text)

    # Take the snapshot. This looks similar to the repository registration above,
    # but this call is what actually creates the snapshot.
    # The datestamp suffix keeps every snapshot name unique.
    path = '_snapshot/' + s3RepoName + '/' + snapshotName
    url = host + path

    # Write a small marker object to S3 recording the snapshot name
    # (s3_path was undefined in the original; derived here from the base path)
    s3_path = s3_base_path + '/' + snapshotName + '.txt'
    s3_resource = boto3.resource("s3")
    s3_resource.Bucket(s3Bucket).put_object(Key=s3_path, Body=snapshotName)
    print(f"Created {s3_path}")
    ### Text file copying ends here

    while True:
        response = requests.put(url, auth=awsauth)
        status_code = response.status_code
        print("status_code == " + str(status_code))
        if status_code >= 500:
            # Retry in case the cluster returns a transient 5xx
            print("5xx thrown. Sleeping for 200 seconds.. zzzz...")
            time.sleep(200)
        else:
            print(f"Snapshot {snapshotName} successfully taken")
            break

    print(response.text)
  1. Snapshot API Call: The code uses the OpenSearch API to trigger snapshot creation. You can customize how frequently snapshots are taken.
  2. Error Handling: For scenarios where snapshots take long, retries and error handling are implemented to manage API call failures.
  3. Permissions Setup: Grant your Lambda function the necessary permissions to access OpenSearch and the S3 bucket. This includes setting up roles and policies in AWS Identity and Access Management (IAM).
  4. Invocation Permissions: The Lambda function needs a role that allows access to the OpenSearch domain. The role should also allow Lambda to upload snapshots to the S3 bucket:

{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::<BUCKET-NAME>/*"
  ]
}

Creating an AWS Lambda Layer for the Requests Library​

Follow these steps to create a custom AWS Lambda layer that packages the requests library, so it can be reused across multiple Lambda functions.


Step 1: Prepare the Requests Dependency​


Since Lambda layers require dependencies to be packaged separately, we need to install the requests library in a specific structure.


Set Up a Local Directory​

Create a folder structure for installing the dependency.


mkdir requests-layer
cd requests-layer
mkdir python

Install the Requests Library​

Use pip to install requests into the python folder:


pip install requests -t python/

Verify Installation​

Check that the python directory contains the installed requests package:


ls python

You should see a folder named requests, confirming that the package was installed successfully.


Step 2: Create a Zip Archive of the Layer​

After installing the dependencies, zip the python directory:


zip -r requests-layer.zip python

This creates a requests-layer.zip file that you will upload as a Lambda layer.
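If you prefer to script the upload rather than use the console steps in the next section, a minimal boto3 sketch could look like this; the layer name and runtime list are illustrative choices.

import boto3

lambda_client = boto3.client("lambda")

# Publish the zipped python/ directory as a reusable layer
with open("requests-layer.zip", "rb") as f:
    response = lambda_client.publish_layer_version(
        LayerName="requests-layer",
        Description="requests library for Lambda functions",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.8", "python3.9", "python3.10"],
    )
print(response["LayerVersionArn"])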


Step 3: Upload the Layer to AWS Lambda​


  1. Open the AWS Lambda Console.

 New Layer


  2. Select Layers from the left-hand navigation.

 layer


  3. Click Create layer.
  4. Configure the Layer:
    • Name: Provide a name like requests-layer.
    • Description: Optionally, describe the purpose of the layer.
    • Upload the .zip file: Choose the requests-layer.zip file you created.
    • Compatible runtimes: Choose the runtime(s) that match your Lambda function, such as Python 3.8, Python 3.9, or Python 3.10.
  5. Click Create to upload the layer.

Step 4: Add the Layer to Your Lambda Function​


  1. Open Your Lambda Function: In the Lambda Console, open the Lambda function where you want to use requests.
  2. Add the Layer: In the Layers section, click Add a layer.
  3. Select Custom layers and choose the requests-layer.
  4. Select the specific version (if there are multiple versions).
  5. Click Add.


OpenSearch Dashboard Configuration​


 OpenSearch


The OpenSearch Dashboard (formerly Kibana) is your go-to for managing and monitoring OpenSearch. Here is how to set up your snapshot role in the dashboard:


  1. Access the Dashboard: Navigate to the OpenSearch Dashboard using the provided domain link.
  2. Role Setup: Go to the security settings and create a new role for managing snapshots. Grant this role permissions to access the necessary indices and S3 bucket. Following is the role that needs to be created:
Trust Policy:​
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "opensearch.amazonaws.com",
          "es.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Role Policy:​
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*Object"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME/*"
      ]
    },
    {
      "Sid": "ESaccess",
      "Effect": "Allow",
      "Action": [
        "es:*"
      ],
      "Resource": [
        "arn:aws:es:eu-west-2:<ACCOUNT-NUMBER>:domain/*"
      ]
    }
  ]
}

  3. Mapping the Role: Map the new role to your Lambda function's IAM role to ensure seamless access.

 Mapping the Role


Setting Up Snapshot Policies in the Dashboard​


The OpenSearch Dashboard allows you to create policies for managing snapshots, making it easy to define backup schedules and retention periods. Here is how:


  1. Policy Configuration: Define your backup frequency (daily, weekly, etc.) and the retention period for each snapshot.
  2. Retention Period: Set the maximum number of snapshots to keep, ensuring that old snapshots are automatically deleted to save space.

Channel


  3. Notification Channel: You can set up notifications (e.g., via Amazon SNS) to alert you if a snapshot operation fails.

Testing and Troubleshooting Your Snapshot Setup​


Once your setup is complete, it is time to test it:

  1. Run a Test Snapshot: Trigger your Lambda function manually and check your S3 bucket for the snapshot data (a small sketch follows this list).
  2. Verify Permissions: If you encounter errors, check your IAM roles and permissions. Snapshot failures often occur due to insufficient permissions, so make sure both the OpenSearch and S3 roles are configured correctly.
  3. Monitor Logs: Use CloudWatch Logs to review the execution of your Lambda function, which will help in troubleshooting any issues that arise.
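For a quick manual test, something like the following works; the function and bucket names are placeholders for your own resources:

import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Fire the snapshot function asynchronously (function name is a placeholder)
lambda_client.invoke(FunctionName="opensearch-snapshot-function",
                     InvocationType="Event")

# After a short wait, confirm snapshot objects landed under the base path
objects = s3.list_objects_v2(Bucket="my-opensearch-snapshots", Prefix="daily/")
for obj in objects.get("Contents", []):
    print(obj["Key"], obj["LastModified"])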

Disaster Recovery and Restoring Snapshots​


In the unfortunate event of data loss or a disaster, restoring your data from snapshots is straightforward. Here is a simple guide:

  1. New Cluster Setup: If your original cluster is lost, create a new OpenSearch domain.
  2. Restore Snapshot: Use the OpenSearch API to restore the snapshot from your S3 bucket (sketched below).
  3. Cluster Health Check: Once restored, check the health of your cluster and validate that your data is fully recovered.
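A hedged sketch of the restore call, reusing the signed-request pattern from the snapshot Lambda; the host, repository, and snapshot names are placeholders:

import boto3
import requests
from requests_aws4auth import AWS4Auth

host = "https://search-my-new-domain.us-east-1.es.amazonaws.com/"  # placeholder
region = "us-east-1"

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                   region, "es", session_token=credentials.token)

# Restore all indices except system indices, which already exist on the new domain
url = host + "_snapshot/my-s3-repo/snapshot-2024-01-01/_restore"  # placeholder names
r = requests.post(url, auth=awsauth,
                  json={"indices": "-.kibana*,-.opendistro_security"})
print(r.status_code, r.text)

# Then verify cluster health
print(requests.get(host + "_cluster/health", auth=awsauth).json())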

Conclusion​


Using AWS Lambda and S3 for snapshot management in OpenSearch provides a scalable and cost-effective solution for data backup and recovery. By setting up automated snapshots, you can ensure that your data is consistently backed up without manual intervention. With the additional security and monitoring tools provided by AWS, maintaining the integrity and availability of your OpenSearch data becomes a manageable task.


Explore the various options within AWS and OpenSearch to find the configuration that best fits your environment. And as always, remember to test your setup thoroughly to prevent unexpected issues down the line.


For more tips on OpenSearch, AWS, and other cloud solutions, subscribe to our newsletter and stay up-to-date with the latest in cloud technology! Ready to take your cloud infrastructure to the next level? Please reach out to us.

Simplifying AWS Notifications: A Guide to User Notifications

· 4 min read

Introduction​

In cloud operations, timely notifications are crucial. Whether dealing with a security incident from AWS GuardDuty, a backup failure, or any other significant event, having a streamlined process to receive and act upon alerts is essential. Traditionally, AWS users set up notifications through complex patterns involving AWS CloudTrail, EventBridge, and Lambda. However, AWS has recently introduced a new service designed to simplify this process significantly: AWS User Notifications.

In this blog, we'll walk through the benefits of this new service and how it streamlines the notification setup process compared to the traditional methods.

The Traditional Notification Setup​

Historically, setting up notifications involved several AWS services:

  1. CloudTrail : Captures account activity and API calls as events.
  2. EventBridge : Rules to capture and route these events.
  3. Lambda : Functions to parse events and send formatted notifications.
  4. SNS : Sends out emails or SMS notifications.

For instance, if AWS GuardDuty detected a potential security incident, you'd need to:

  • Create a rule in EventBridge to catch GuardDuty findings.
  • Write Lambda functions to process these events.
  • Use SNS to send notifications, often requiring custom formatting in Lambda.

While effective, this setup can be complex and involves considerable manual configuration and coding.
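To show the kind of glue code this traditional pattern requires, here is a minimal sketch of a Lambda handler that formats a GuardDuty finding delivered by EventBridge and publishes it to SNS; the topic ARN environment variable is an assumption.

import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["TOPIC_ARN"]  # assumed environment variable

def lambda_handler(event, context):
    # EventBridge delivers the GuardDuty finding under event["detail"]
    finding = event["detail"]
    subject = f"GuardDuty ({finding.get('severity')}): {finding.get('type')}"
    body = json.dumps(finding, indent=2)
    # SNS subject lines are limited to 100 characters
    sns.publish(TopicArn=TOPIC_ARN, Subject=subject[:100], Message=body)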

The New AWS User Notifications Service​

AWS has introduced a more straightforward approach with the AWS User Notifications service. This new service allows you to set up notifications with minimal configuration, bypassing the need for complex EventBridge rules and Lambda functions.

Setting Up Notifications with AWS User Notifications​

Here's a step-by-step guide on how to set up notifications using the new service:

  1. Access AWS User Notifications

    • Go to the AWS Management Console and search for "User Notifications."
    • Open the User Notifications configuration page.

Search

  2. Create a New Notification Configuration

    • Click β€œCreate Notification Configuration.”
    • Provide a name for the notification, such as "GuardDuty Notification."
    • Optionally, add a description.

New Notification Configuration

  3. Choose the Notification Source

    • Select the source of your notification. For example, choose "CloudWatch" for monitoring AWS CloudWatch events.
    • Specify the type of events you want to receive notifications for, such as "GuardDuty findings."
  4. Configure Notification Details

    • Choose the AWS region you want to monitor, such as "Virginia."
    • Set up advanced filters if needed. This helps narrow down the events you want to capture, like focusing only on critical findings.
    • Decide on the aggregation period (e.g., 5 minutes, 12 hours) if you want to aggregate notifications.
  5. Specify Notification Recipients

    • Enter the email addresses or other notification channels where alerts should be sent. You can use AWS's built-in options or integrate with chat channels.
  6. Review and Create

    • Review your configuration.
    • Click "Create Notification Configuration" to finalize.

Comparing AWS User Notifications with Traditional Methods​

Simplicity : User Notifications significantly reduce complexity by eliminating the need for multiple services like EventBridge and Lambda for basic notification setups. You configure everything in a single interface with minimal coding.

Customization : While traditional setups offer extensive customization through Lambda functions, User Notifications provide a more user-friendly approach with options for advanced filters and predefined notification formats.

Speed : The new service allows for quicker setup and deployment of notifications, making it easier to address urgent issues promptly without extensive configuration.

Use Cases​

  1. GuardDuty Alerts : Set up notifications for any security findings immediately, ensuring you can respond to potential threats without delay.

  2. AWS Config : Receive alerts for configuration changes, focusing on non-compliant changes to avoid information overload.

  3. Backup Failures : Get notifications for failed backup jobs to ensure data protection measures are always active.

  4. Health Checks : Monitor AWS service health events to stay informed about the operational status of your AWS environment.

Conclusion​

AWS User Notifications is a game-changer for simplifying the notification setup process. It reduces the complexity involved in configuring notifications and allows you to focus on addressing issues rather than managing notification infrastructure. By leveraging this new service, you can ensure that critical alerts are delivered promptly and efficiently.

For detailed guides and additional information, check out the AWS documentation and stay updated with the latest AWS features.

Feel free to reach out with any questions or comments, and don't forget to subscribe for more updates!

Comprehensive Guide to Centralized Backups in AWS Organizations

· 4 min read

Centralized Management of AWS Services Using AWS Organizations​

AWS Organizations provides a unified way to manage and govern your AWS environment as it grows. This blog post details how you can use AWS Organizations to centrally manage your services, thereby simplifying administration, improving security, and reducing operational costs.


Why Use AWS Organizations?​

AWS Organizations enables centralized management of billing, access control, compliance, security, and resource sharing across AWS accounts. Instead of managing services individually in each account, AWS Organizations lets you administer them from a single location.


Advantages of Centralized Management:​

a. Efficiency: Manage multiple AWS accounts from a single control point.
b. Cost Savings: Reduce operational costs through centralized management.
c. Enhanced Security: Apply consistent policies and compliance standards across all accounts.
d. Simplified Operations: Streamline monitoring, backup, and administrative tasks.


Step-by-Step Guide to Centralized Backup Management​


Backup


Managing backups across multiple AWS accounts can be complex. AWS Backup allows you to centralize and automate data protection across AWS services. Here’s how you can set up centralized backup management using AWS Organizations:


1. Setting Up AWS Organizations:​

a. Create an AWS Organization:
  i) Navigate to the AWS Organizations console.
  ii) Click on "Create organization" and follow the prompts.

b. Add Accounts to Your Organization:
  i) Add existing accounts or create new ones.
  ii) Ensure all accounts you want to manage are part of the organization.


2. Enabling Centralized Backup:​


Enabling


a. Navigate to AWS Backup:
  i) Open the AWS Backup console from the management account.
  ii) This is where you'll configure backup plans and policies.

b. Create a Backup Plan:


Create


  i) Click on "Create backup plan."
  ii) Define your backup rules (e.g., frequency, retention period).
  iii) Specify the resources to back up (e.g., EC2 instances, RDS databases).

c. Assign the Backup Plan:
  i) Use tags to assign resources to the backup plan.
  ii) For instance, tag all EC2 instances you want to back up with Backup:Production (see the sketch below).
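A hedged boto3 sketch of such a tag-based assignment; the backup plan ID and IAM role ARN are placeholders:

import boto3

backup = boto3.client("backup")

# Assign every resource tagged Backup=Production to an existing backup plan
backup.create_backup_selection(
    BackupPlanId="11111111-2222-3333-4444-555555555555",  # placeholder plan ID
    BackupSelection={
        "SelectionName": "production-by-tag",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "Backup",
            "ConditionValue": "Production",
        }],
    },
)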


3. Delegating Administration:​


Delegating


a. Create a Delegated Administrator Account:
  i) Designate one account as the delegated administrator.
  ii) This account will handle backup management for all other accounts.

b. Set Up Cross-Account Roles:
  i) Create IAM roles in each member account.
  ii) Assign these roles the necessary permissions for backup operations.
  iii) Ensure the roles allow cross-account access to the delegated administrator account.


4. Configuring Backup Policies:​

a. Enable Backup Policies:
  i) From the AWS Backup console, enable backup policies.
  ii) Define and apply these policies to all accounts within the organization.

b. Monitor Backups:
  i) Use AWS Backup's centralized dashboard to monitor the status of your backups.
  ii) Set up notifications for backup failures or successes.


5. Using Additional AWS Services:​

AWS Organizations supports various other services that can be centrally managed. Some examples include:

  • AWS GuardDuty: Centralized threat detection.
  • AWS Config: Compliance auditing and monitoring.
  • AWS CloudTrail: Logging and monitoring account activity.
  • AWS Identity and Access Management (IAM): Centralized access control and user management.

Ready to take your cloud infrastructure to the next level? Please reach out to us: Contact Us.


Conclusion​

Leveraging AWS Organizations can streamline the management of your AWS environment, ensuring consistent backup policies, enhancing security, and reducing operational overhead. Centralized management not only simplifies your administrative tasks but also provides a unified view of your organization's compliance and security posture.


AWS services that support Containers: Containers != Kubernetes

· 4 min read

When it comes to choosing the right container service for your application, AWS offers a myriad of options, each tailored to specific needs and use cases. This guide aims to provide a comprehensive overview of when and how to use various AWS container services, based on our extensive research and industry experience.

Please refer to The Ultimate AWS ECS and EKS Tutorial.


Understanding Containers and Their Use Cases​

Containers have revolutionized the way applications are developed and deployed. They offer portability, consistency, and efficiency, making them ideal for various scenarios, from microservices architectures to machine learning orchestration.


Service Orchestration​

Service orchestration involves managing and coordinating multiple services or microservices to work together seamlessly. Containers play a crucial role in this by ensuring that each service runs in its isolated environment, thereby reducing conflicts and improving scalability.

  1. Amazon Elastic Kubernetes Service (EKS)

    • Pros: Fully managed, scalable, extensive community support.
    • Cons: Complex setup, significant operational overhead.
  2. Red Hat OpenShift on AWS (ROSA)

    • Overview: Red Hat's Kubernetes-based OpenShift platform, offered as a managed service on AWS.
    • Pros: Robust management platform, popular among enterprise clients.
    • Cons: Similar complexity to Kubernetes.
  3. AWS Elastic Container Service (ECS)

    • Overview: AWS's native container orchestration service.
    • Pros: Seamless integration with AWS services, flexible deployment options (EC2, Fargate).
    • Cons: Limited to AWS ecosystem.

Machine Learning Orchestration​

Deploying machine learning models in containers allows for a consistent and portable environment across different stages of the ML pipeline, from training to inference.

  1. AWS Batch
    • Overview: A native service designed for batch computing jobs, including ML training and inference.
    • Pros: Simplifies job scheduling and execution, integrates well with other AWS ML services.
    • Cons: Best suited for batch jobs, may not be ideal for real-time inference.

Web Applications

Please check out our web services: refer to our website solutions.

Containers can also streamline the deployment and management of web applications, providing a consistent environment across development, testing, and production.

  1. AWS Elastic Beanstalk

    • Overview: A legacy service that simplifies application deployment and management.
    • Pros: Easy to use, good for traditional web applications.
    • Cons: Considered outdated, fewer modern features compared to newer services.
  2. AWS App Runner

    • Overview: A newer service that simplifies running containerized web applications and APIs.
    • Pros: Supports container deployments, integrates with AWS ECR.
    • Cons: Limited to ECR for container images, still relatively new.

Serverless Options​

For applications that don't require a full-fledged orchestration setup, serverless options like AWS Lambda can be a good fit.

  1. AWS Lambda

    • Pros: Scalable, supports multiple languages, cost-effective for short-running functions.
    • Cons: Limited to 15-minute execution time, may require step functions for longer processes.
  2. Amazon EC2 vs. Amazon Lightsail

    • Amazon EC2: Provides full control over virtual machines, suitable for custom setups.
    • Amazon Lightsail: Simplifies VM deployment with pre-packaged software, ideal for quick deployments like WordPress.

Decision Tree for Choosing AWS Container Services​

To help you choose the right service, consider the following decision tree based on your specific needs:

  1. Service Orchestration Needed?

    • Yes: Consider Kubernetes, ROSA, or ECS.
    • No: Move to the next question.
  2. Serverless Invocation?

    • Yes: If processing time < 15 minutes, use AWS Lambda. If > 15 minutes, consider App Runner.
    • No: Proceed to provisioned infrastructure options.
  3. Provisioned Infrastructure?

    • Yes: Choose between Amazon EC2 for full control or Amazon Lightsail for simplified setup.
  4. Machine Learning Orchestration?

    • Yes: Use AWS Batch for batch jobs.
    • No: Skip to web application options.
  5. Web Application Deployment?

    • Yes: Use Elastic Beanstalk for legacy applications or App Runner for modern containerized applications.

Conclusion​

AWS offers a robust set of services for container orchestration, machine learning, web applications, and serverless computing. Understanding the strengths and limitations of each service can help you make informed decisions and optimize your application architecture. Ready to take your cloud infrastructure to the next level? Please reach out to us: Contact Us.

A Detailed Overview Of AWS SES and Monitoring - Part 2

· 6 min read

In our interconnected digital world, managing email efficiently and securely is a critical aspect of business operations. This post delves into a sophisticated setup using Amazon Web Services (AWS) that ensures your organization's email communication remains robust and responsive. Specifically, we will explore using AWS Simple Email Service (SES) in conjunction with Simple Notification Service (SNS) and AWS Lambda to handle email bounces and complaints effectively.

Understanding the Components​

Before diving into the setup, let's understand the components involved:

  • AWS SES: An email service that enables you to send and receive emails securely.
  • AWS SNS: A flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients.
  • AWS Lambda: A serverless compute service that runs your code in response to events and automatically manages the underlying compute resources.

Read about SES - Part 1

The Need for Handling Bounces and Complaints​

Managing bounces and complaints efficiently is crucial for maintaining your organization’s email sender reputation. High rates of bounces or complaints can affect your ability to deliver emails and could lead to being blacklisted by email providers.

Step-by-Step Setup​

Step 1: Configuring SES​

SES

First, configure your AWS SES to handle outgoing emails. This involves:

  • Setting up verified email identities (email addresses or domains from which you'll send emails).
  • Creating configuration sets in SES to specify how emails should be handled and tracked (a scripted sketch follows).
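As a sketch of that configuration in boto3 (the configuration set name and topic ARN are placeholders; the SNS wiring itself is covered in Step 2):

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Create a configuration set, then route bounce and complaint events to SNS
ses.create_configuration_set(ConfigurationSet={"Name": "ses-tracking"})
ses.create_configuration_set_event_destination(
    ConfigurationSetName="ses-tracking",
    EventDestination={
        "Name": "bounce-complaint-to-sns",
        "Enabled": True,
        "MatchingEventTypes": ["bounce", "complaint"],
        "SNSDestination": {
            "TopicARN": "arn:aws:sns:us-east-1:123456789012:SES-tracking"  # placeholder
        },
    },
)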

Step 2: Integrating SNS for Notifications​

The next step is to set up AWS SNS to receive notifications from SES. This is crucial for real-time alerts on email bounces or complaints:

  • Create an SNS topic that SES will publish to when specified events (like bounces or complaints) occur.
  • Configure your SES configuration set to send notifications to the created SNS topic.
The SNS topic needs an access policy that allows SES to publish to it, for example:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:us-east-1:<account number>:SES-tracking",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "<account number>"
        },
        "StringLike": {
          "AWS:SourceArn": "arn:aws:ses:*"
        }
      }
    }
  ]
}

Step 3: Using AWS Lambda for Automated Responses​

With SNS in place, integrate AWS Lambda to automate responses based on the notifications:

  • Create a Lambda function that will be triggered by notifications from the SNS topic.
  • Program the Lambda function to execute actions like logging the incident, updating databases, or even triggering remedial workflows.
import boto3, os, json
from botocore.exceptions import ClientError

# Set the global variables
fromEmail = str(os.getenv('from_email', 'from email address'))
ccEmail = str(os.getenv('cc_email', 'cc email address'))
toEmail = str(os.getenv('to_email', 'to email address'))  # was 'cc_email' in the original, a bug

awsRegion = str(os.getenv('aws_region', 'us-east-1'))
# The character encoding for the email.
CHARSET = "UTF-8"

# Create a new SES client and specify a region.
sesClient = boto3.client('ses', region_name=awsRegion)

def sendSESAlertEmail(eventData):
    message = eventData['Records'][0]['Sns']['Message']
    print("message = " + message)

    bounceComplaintMsg = json.loads(message)
    print("bounceComplaintMsg == " + str(bounceComplaintMsg))

    json_formatted_str_text = pp_json(message)
    if "bounce" in bounceComplaintMsg:
        print("Email is bounce")

        # The email body for recipients with non-HTML email clients.
        BODY_TEXT = "SES: Bounce email notification" + "\r\n" + json_formatted_str_text

        bounceEmailAddress = bounceComplaintMsg['bounce']['bouncedRecipients'][0]['emailAddress']
        bounceReason = bounceComplaintMsg['bounce']['bouncedRecipients'][0]['diagnosticCode']
        print("bounceEmailAddress == " + bounceEmailAddress)
        print("bounceReason == " + bounceReason)

        subject = "SES Alert: Email to " + bounceEmailAddress + " has bounced"

        # The HTML body of the email.
        BODY_HTML = """<html>
<head></head>
<body>
<p>Email to %(bounceEmailAddressStr)s has bounced</p>
<p>Reason: %(bounceReasonStr)s</p>
<p>Complete details:%(jsonFormattedStr)s</p>
</body>
</html>""" % {"bounceEmailAddressStr": bounceEmailAddress,
              "bounceReasonStr": bounceReason,
              "jsonFormattedStr": json_formatted_str_text}
        sendSESEmail(subject, BODY_TEXT, BODY_HTML)
    else:
        print("Email is Complaint")

        # The email body for recipients with non-HTML email clients.
        BODY_TEXT = "SES: Complaint email notification" + "\r\n" + json_formatted_str_text

        complaintEmailAddress = bounceComplaintMsg['complaint']['complainedRecipients'][0]['emailAddress']
        complaintReason = bounceComplaintMsg['complaint']['complaintFeedbackType']
        print("complaintEmailAddress == " + complaintEmailAddress)
        print("complaintReason == " + complaintReason)

        subject = "SES Alert: Email " + complaintEmailAddress + " has raised a Complaint"

        # The HTML body of the email.
        BODY_HTML = """<html>
<head></head>
<body>
<p>Email %(complaintEmailAddressStr)s has raised a Complaint</p>
<p>Reason: %(complaintReasonStr)s</p>
<p>Complete details:%(jsonFormattedStr)s</p>
</body>
</html>""" % {"complaintEmailAddressStr": complaintEmailAddress,
              "complaintReasonStr": complaintReason,
              "jsonFormattedStr": json_formatted_str_text}
        sendSESEmail(subject, BODY_TEXT, BODY_HTML)


def sendSESEmail(SUBJECT, BODY_TEXT, BODY_HTML):
    # Send the email.
    try:
        # Provide the contents of the email.
        response = sesClient.send_email(
            Destination={
                'ToAddresses': [
                    toEmail,
                ],
                'CcAddresses': [
                    ccEmail,
                ]
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': CHARSET,
                        'Data': BODY_HTML,
                    },
                    'Text': {
                        'Charset': CHARSET,
                        'Data': BODY_TEXT,
                    },
                },
                'Subject': {
                    'Charset': CHARSET,
                    'Data': SUBJECT,
                },
            },
            Source=fromEmail,
        )
    # Display an error if something goes wrong.
    except ClientError as e:
        print("SES Email failed to send: " + e.response['Error']['Message'])
    else:
        print("SES Email sent! Message ID: " + response['MessageId'])

def pp_json(json_thing, sort=True, indents=4):
    if type(json_thing) is str:
        print("json is a str")
        return (json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents)
                .replace(' ', '&nbsp;').replace('\n', '<br>'))
    else:
        return (json.dumps(json_thing, sort_keys=sort, indent=indents)
                .replace(' ', '&nbsp;').replace('\n', '<br>'))

def lambda_handler(event, context):
    print(event)
    sendSESAlertEmail(event)

Step 4: Testing and Validation​

Send test emails

Once configured, it's important to test the setup:

  • Send test emails that will trigger bounce or complaint notifications (a sketch using the SES mailbox simulator follows).
  • Verify that these notifications are received by SNS and correctly trigger the Lambda function.
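The SES mailbox simulator is handy here: mail sent to bounce@simulator.amazonses.com (or complaint@simulator.amazonses.com) generates a real bounce or complaint event without harming your sender reputation. A minimal sketch, assuming a verified sender address:

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# The source address must be a verified SES identity (placeholder below)
ses.send_email(
    Source="alerts@example.com",
    Destination={"ToAddresses": ["bounce@simulator.amazonses.com"]},
    Message={
        "Subject": {"Data": "Bounce test"},
        "Body": {"Text": {"Data": "This message should bounce."}},
    },
)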

Step 5: Monitoring and Adjustments​

AWS CloudWatch

Regularly monitor the setup through AWS CloudWatch and adjust configurations as necessary to handle any new types of email issues or to refine the process.

Advanced Considerations​

Consider exploring more advanced configurations such as:

  • Setting up dedicated Lambda functions for different types of notifications.
  • Using AWS KMS (Key Management Service) for encrypting the messages that flow between your services for added security.

Please refer to our Newsletter, where we provide solutions for creating customer marketing newsletters.

Conclusion​

This setup not only ensures that your organization responds swiftly to critical email events but also helps in maintaining a healthy email environment conducive to effective communication. Automating the handling of email bounces and complaints with AWS SES, SNS, and Lambda represents a proactive approach to infrastructure management, crucial for businesses scaling their operations.