
7 posts tagged with "S3"


Watch Your Fleet Move LIVE - Asset Tracking with Amazon Location & IoT Core

· 6 min read
Arina Technologies · Cloud & AI Engineering

Asset tracking is essential in modern logistics and supply chain operations. Knowing where assets such as trucks or delivery vehicles are located can significantly enhance operational efficiency, reduce costs, and prevent losses. In this detailed walkthrough, we'll explore Amazon Location Service, its use cases, and how to set up a fully functional asset tracking application integrated with AWS IoT Core.


🎯 What is Amazon Location?​


Amazon Location is a managed service from AWS that allows developers to add location data and functionality such as maps, geocoding, routing, geofencing, and tracking into their applications. It sources data from trusted global providers like Esri, HERE, and OpenStreetMap.


Key Features:​


  1. Maps & Geospatial Visualization
  2. Real-Time Tracking
  3. Geofence Monitoring
  4. Cost-effective location solutions

Use cases include:


  1. Fleet tracking
  2. Delivery route optimization
  3. Asset protection
  4. Consumer app geolocation

📌 Use Cases


Geofencing and Proximity-Based Alerts​


  1. Use Case: Setting up virtual boundaries (geofences) around specific areas and triggering actions or notifications when devices or users enter or exit these zones.
  2. Benefit: Security alerts (e.g., unauthorized entry into a restricted area), location-based marketing (e.g., promotional offers to customers), and workflow automation (e.g., clocking in/out field employees). A retail store could notify users when they enter a geofence around the store.

Real-time Asset Tracking and Management​


  1. Use Case: Businesses with fleets of vehicles, equipment, or personnel can track their real-time locations on a map.
  2. Benefit: Improved operational efficiency, optimized routing, enhanced security, and better resource allocation. For example, dispatching the nearest available driver for a delivery.

Route Planning and Optimization​


  1. Use Case: Calculating optimal routes for navigation considering traffic, road closures, and preferred transport modes.
  2. Benefit: Reduced travel time, lower fuel costs, improved delivery efficiency, and better user guidance.


🧱 Architecture Overview​


To better understand the technical setup and flow, let's break down the detailed architecture used in this asset tracking solution. It supports not only real-time tracking but also historical location data, scalable device ingestion, and geofence event handling.


Core Components:​


  1. Amazon Location Service: Provides maps, geofences, and trackers.
  2. AWS IoT Core: Acts as the entry point for location data using MQTT.
  3. Amazon Kinesis Data Streams: Streams live device location data for processing.
  4. AWS Lambda: Used for transforming data and invoking downstream services like Amazon Location or notifications.
  5. Amazon SNS: Sends real-time alerts or notifications to subscribed users (e.g., when a geofence is breached).
  6. Amazon Cognito: Authenticates users for frontend access and API interactions.
  7. Amazon CloudFront + S3: Hosts the web-based frontend application securely and globally.

Data Flow:​


  1. A GPS-enabled device or simulation sends a location update to AWS IoT Core using MQTT.
  2. The update is routed to Kinesis Data Streams for real-time processing.
  3. An AWS Lambda function processes the Kinesis records and forwards the location to the Amazon Location Tracker.
  4. If the location triggers a geofence event, another Lambda function can be used to publish a message to Amazon SNS.
  5. SNS sends out a notification to subscribers, such as mobile users, application dashboards, or administrators.
  6. The frontend web application, hosted on S3 + CloudFront, visualizes live and historical positions by querying Amazon Location services directly using the credentials from Amazon Cognito.

In short, the architecture consists of Amazon Location for geospatial services, AWS Lambda for processing events, and Amazon SNS for sending notifications to end users. A minimal sketch of the tracker-update Lambda from step 3 follows.
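For illustration, here is a minimal sketch of the Lambda in step 3 that reads Kinesis records and forwards positions to the Amazon Location tracker. It is not the sample project's exact code: the tracker name is taken from the configuration shown later, and the payload field names are assumptions you would align with your device messages.

import base64
import json
from datetime import datetime, timezone

import boto3

# Tracker name assumed to match the sample stack's configuration
TRACKER_NAME = "SampleTracker"

location = boto3.client("location")

def lambda_handler(event, context):
    updates = []
    for record in event["Records"]:
        # Kinesis record payloads are base64-encoded; the field names below are assumptions
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        updates.append({
            "DeviceId": payload["deviceId"],
            # In practice, parse the device-reported timestamp instead of using "now"
            "SampleTime": datetime.now(timezone.utc),
            "Position": [payload["longitude"], payload["latitude"]],
        })

    # BatchUpdateDevicePosition accepts at most 10 updates per call
    for i in range(0, len(updates), 10):
        location.batch_update_device_position(TrackerName=TRACKER_NAME, Updates=updates[i:i + 10])

    return {"processed": len(updates)}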


Sample Architecture Diagram


🛠 Setting Up the Project


To demonstrate Amazon Location's capabilities, we'll build a web application that displays current and historical locations of assets. We'll simulate an IoT device and stream location updates to AWS using MQTT.


1. Clone the Sample Project​


git clone https://github.com/aws-solutions-library-samples/guidance-for-tracking-assets-and-locating-devices-using-aws-iot.git --recurse-submodules
cd guidance-for-tracking-assets-and-locating-devices-using-aws-iot

2. Install Frontend Dependencies​


cd amazon-location-samples-react/tracking-data-streaming
npm install

3. Deploy Location Infrastructure​


chmod +x deploy_cloudformation.sh && export AWS_REGION=<your region> && ./deploy_cloudformation.sh

4. Deploy IoT Core Resources​


cd ../../cf
aws cloudformation create-stack --stack-name TrackingAndGeofencingIoTResources \
--template-body file://iotResources.yml \
--capabilities CAPABILITY_IAM
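With the IoT resources deployed, you can sanity-check the ingestion path by publishing a simulated position update over MQTT. The sketch below uses boto3's iot-data client; the topic name and payload shape are assumptions and must match the IoT rule created by the stack.

import json

import boto3

# You may need endpoint_url=<your IoT data endpoint> (see `aws iot describe-endpoint`)
iot = boto3.client("iot-data")

# Topic and payload fields are placeholders; align them with the deployed IoT rule
topic = "assets/tracker/position"
payload = {
    "deviceId": "truck-001",
    "latitude": 47.6062,
    "longitude": -122.3321,
    "timestamp": "2024-01-01T12:00:00Z",
}

# QoS 1 gives at-least-once delivery to AWS IoT Core
iot.publish(topic=topic, qos=1, payload=json.dumps(payload))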

🖼 Configuring the Frontend


Get the CloudFormation stack outputs:


aws cloudformation describe-stacks \
--stack-name TrackingAndGeofencingSample \
--query "Stacks[0].Outputs[*].[OutputKey, OutputValue]"

Set values in configuration.js accordingly:


export const READ_ONLY_IDENTITY_POOL_ID = "us-east-1:xxxx...";
export const WRITE_ONLY_IDENTITY_POOL_ID = "us-east-1:xxxx...";
export const REGION = "us-east-1";
export const MAP = {
  NAME: "TrackingAndGeofencingSampleMapHere",
  STYLE: "VectorHereExplore"
};
export const GEOFENCE = "TrackingAndGeofencingSampleCollection";
export const TRACKER = "SampleTracker";
export const DEVICE_POSITION_HISTORY_OFFSET = 3600;
export const KINESIS_DATA_STREAM_NAME = "TrackingAndGeofencingSampleKinesisDataStream";

Start the frontend locally:

npm start

Navigate to http://localhost:8080 to see your live map.


🌐 Hosting on CloudFront​


1. Create S3 Bucket​


  1. Go to S3 Console > Create Bucket
  2. Use a unique bucket name

2. Build Frontend​


npm run build

3. Upload to S3​


aws s3 cp ./build s3://<your-bucket-name>/ --recursive

4. Create CloudFront Distribution​


  1. Origin: S3 Bucket
  2. Create a new OAC (Origin Access Control)
  3. Enable WAF protections

5. Update S3 Bucket Policy​


Paste in the policy suggested by CloudFront for the OAC.


Access your site at:


https://<your-distribution>.cloudfront.net/index.html

🔄 Extend with Real Devices


This tutorial used MQTT message simulation. For real-world scenarios:

  1. Use GPS-enabled IoT devices
  2. Integrate with certified hardware listed in the AWS Partner Device Catalog

✅ Summary


In this blog, we:

  1. Introduced Amazon Location Service
  2. Simulated IoT data with AWS IoT Core
  3. Visualized tracking in a React app
  4. Hosted it with Amazon S3 + CloudFront

This powerful combination enables real-time tracking for logistics, delivery, field ops, and more.


🙌 Final Thoughts


Whether you are building internal logistics tools or customer-facing tracking apps, Amazon Location and AWS IoT Core offer a scalable, cost-effective foundation. Try building this project and tailor it to your business use case!


🔚 Call to Action


Choosing the right platform depends on your organization's needs. For more insights, subscribe to our newsletter for tips on cloud computing and the latest trends in technology, or follow our video series on cloud comparisons.


Need help launching your app on AWS? Visit arinatechnologies.com for expert help in cloud architecture.


Interested in having your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.


💬 Comment below:
How do you plan to use Amazon Location?

Automate Your OpenSearch/Elasticsearch Backups with S3 and Lambda: A Complete Guide

· 10 min read

In the world of data management and cloud computing, ensuring data security through regular backups is crucial. OpenSearch and Elasticsearch provide robust mechanisms to back up data using snapshots, offering several approaches to cater to different operational needs. This blog post will walk you through setting up and managing snapshots using AWS, with detailed steps for both beginners and advanced users.



Introduction to Snapshots in OpenSearch and Elasticsearch​


Snapshots are point-in-time backups of your OpenSearch or Elasticsearch data. By taking snapshots at regular intervals, you can ensure your data is always backed up, which is especially important in production environments. Snapshots can be scheduled to run automatically, whether hourly, daily, or at another preferred frequency, making it easy to maintain a stable backup routine.


Setting Up an OpenSearch Cluster on AWS​


Before diving into snapshot creation, it's essential to set up an OpenSearch cluster. Here is how:

  1. AWS Console Access: Begin by logging into your AWS Console and navigating to OpenSearch.
  2. Cluster Creation: Create a new OpenSearch domain (essentially your cluster) using the "Easy Create" option. This option simplifies the setup process, especially for demonstration or learning purposes.
  3. Instance Selection: For this setup, select a smaller instance size if you are only exploring OpenSearch features and don't require high memory or compute power. For this demo, an m5.large instance with minimal nodes is sufficient.

Configuring the Cluster​


When configuring the cluster, adjust the settings according to your requirements:


Memory and Storage


  1. Memory and Storage: Set minimal storage (e.g., 10 GB) to avoid unnecessary costs.
  2. Node Count: Choose a single-node setup if you are only testing the system.
  3. Access Control: For simplicity, keep public access open, though in production, you should configure a VPC and control access strictly.

Snapshot Architecture: AWS Lambda and S3 Buckets​


 Snapshot Architecture


AWS provides a serverless approach to managing snapshots via Lambda and S3 buckets. Here is the basic setup:

  1. Create an S3 Bucket: This bucket will store your OpenSearch snapshots.

S3 Bucket


  2. Lambda Function for Snapshot Automation: Use AWS Lambda to automate the snapshot process. Configure the Lambda function to run daily or at a frequency of your choice, ensuring backups are consistent and reliable.

Lambda Function


Writing the Lambda Code​


For the Lambda function, Python is a convenient choice, but you can choose other languages as well. The Lambda function will connect to OpenSearch, initiate a snapshot, and store it in the S3 bucket. Here is a simple breakdown of the code structure:

import boto3, os, time
import requests
from requests_aws4auth import AWS4Auth
from datetime import datetime
import logging

from requests.adapters import HTTPAdapter, Retry

# Set the global variables
# host must include https:// and a trailing /
host = str(os.getenv('host'))
region = str(os.getenv('region', 'eu'))
s3Bucket = str(os.getenv('s3Bucket'))
s3_base_path = str(os.getenv('s3_base_path', 'daily'))
s3RepoName = str(os.getenv('s3RepoName'))
roleArn = str(os.getenv('roleArn'))

service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)


def lambda_handler(event, context):
    datestamp = datetime.now().strftime('%Y-%m-%dt%H:%M:%S')
    snapshotName = 'snapshot-' + datestamp

    # Register the snapshot repository via the _snapshot API endpoint
    path = '_snapshot/' + s3RepoName
    url = host + path

    # Settings for us-east-1; change the endpoint if you use another region
    payload = {
        "type": "s3",
        "settings": {
            "bucket": s3Bucket,
            "base_path": s3_base_path,
            "endpoint": "s3.amazonaws.com",
            "role_arn": roleArn
        }
    }

    headers = {"Content-Type": "application/json"}

    r = requests.put(url, auth=awsauth, json=payload, headers=headers)
    print(r.status_code)
    print(r.text)

    # Take the snapshot. Although this looks similar to the call above, this request
    # (with the datestamped snapshot name appended) is what actually creates the snapshot.
    path = '_snapshot/' + s3RepoName + '/' + snapshotName
    url = host + path

    # Write a small marker object to S3 recording the snapshot name
    s3_path = s3_base_path + '/' + snapshotName + '.txt'
    s3 = boto3.resource("s3")
    s3.Bucket(s3Bucket).put_object(Key=s3_path, Body=snapshotName)
    print(f"Created {s3_path}")
    # Marker object upload ends here

    while True:
        response = requests.put(url, auth=awsauth)
        status_code = response.status_code
        print("status_code == " + str(status_code))
        if status_code >= 500:
            # The cluster may be busy; wait and retry
            print("5xx thrown. Sleeping for 200 seconds.. zzzz...")
            time.sleep(200)
        else:
            print(f"Snapshot {snapshotName} successfully taken")
            break

    print(response.text)
  1. Snapshot API Call: The code uses the OpenSearch API to trigger snapshot creation. You can customize the frequency to take snapshots.
  2. Error Handling: In scenarios where snapshots take long, retries and error handling are implemented to manage API call failures.
  3. Permissions Setup: Grant your Lambda function the necessary permissions to access OpenSearch and the S3 bucket. This includes setting up roles and policies in AWS Identity and Access Management (IAM).
  4. Invocation Permissions: The Lambda function needs a role that allows access to the OpenSearch domain, and the role should also allow Lambda to upload snapshots to the S3 bucket:

{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::<BUCKET-NAME>/*"
  ]
}

Creating an AWS Lambda Layer for the Requests Library​

To create a custom AWS Lambda layer specifically for the requests library, follow these steps. This guide will help you set up the requests package as a Lambda layer so it can be reused across multiple Lambda functions.


Step 1: Prepare the Requests Dependency​


Since Lambda layers require dependencies to be packaged separately, we need to install the requests library in a specific structure.


Set Up a Local Directory​

Create a folder structure for installing the dependency.


mkdir requests-layer
cd requests-layer
mkdir python

Install the Requests Library​

Use pip to install requests into the python folder:


pip install requests -t python/

Verify Installation​

Check that the python directory contains the installed requests package:


ls python

You should see a folder named requests, confirming that the package was installed successfully.


Step 2: Create a Zip Archive of the Layer​

After installing the dependencies, zip the python directory:


zip -r requests-layer.zip python

This creates a requests-layer.zip file that you will upload as a Lambda layer.


Step 3: Upload the Layer to AWS Lambda​


  1. Open the AWS Lambda Console.

 New Layer


  2. Select Layers from the left-hand navigation.

layer


  3. Click Create layer.
  4. Configure the layer:
    1. Name: Provide a name like requests-layer.
    2. Description: Optionally, describe the purpose of the layer.
    3. Upload the .zip file: Choose the requests-layer.zip file you created.
    4. Compatible runtimes: Choose the runtime(s) that match your Lambda function, such as Python 3.8, Python 3.9, or Python 3.10.
  5. Create the layer: Click Create to upload the layer.

Step 4: Add the Layer to Your Lambda Function​


  1. Open Your Lambda Function: In the Lambda Console, open the Lambda function where you want to use requests.
  2. Add the Layer: In the Layers section, click Add a layer.
  3. Select Custom layers and choose the requests-layer.
  4. Select the specific version (if there are multiple versions).
  5. Click Add.


OpenSearch Dashboard Configuration​


 OpenSearch


The OpenSearch Dashboard (formerly Kibana) is your go-to for managing and monitoring OpenSearch. Here is how to set up your snapshot role in the dashboard:


  1. Access the Dashboard: Navigate to the OpenSearch Dashboard using the provided domain link.
  2. Role Setup: Go to the security settings and create a new role for managing snapshots. Grant this role permissions to access the necessary indices and S3 bucket. Following is the role that needs to be created:
Trust Policy:​
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "opensearch.amazonaws.com",
          "es.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Role Policy:​
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*Object"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME/*"
      ]
    },
    {
      "Sid": "ESaccess",
      "Effect": "Allow",
      "Action": [
        "es:*"
      ],
      "Resource": [
        "arn:aws:es:eu-west-2:<ACCOUNT-NUMBER>:domain/*"
      ]
    }
  ]
}

  1. Mapping the Role: Map the new role to your Lambda function's IAM role to ensure seamless access.

 Mapping the Role


Setting Up Snapshot Policies in the Dashboard​


The OpenSearch Dashboard allows you to create policies for managing snapshots, making it easy to define backup schedules and retention periods. Here is how:


  1. Policy Configuration: Define your backup frequency (daily, weekly, etc.) and the retention period for each snapshot.
  2. Retention Period: Set the maximum number of snapshots to keep, ensuring that old snapshots are automatically deleted to save space.

Channel


  1. Notification Channel: You can set up notifications (e.g., via Amazon SNS) to alert you if a snapshot operation fails.

Testing and Troubleshooting Your Snapshot Setup​


Once your setup is complete, it is time to test it:

  1. Run a Test Snapshot: Trigger your Lambda function manually and check your S3 bucket for the snapshot data.
  2. Verify Permissions: If you encounter errors, check your IAM roles and permissions. Snapshot failures often occur due to insufficient permissions, so make sure both the OpenSearch and S3 roles are configured correctly.
  3. Monitor Logs: Use CloudWatch logs to review the execution of your Lambda function, which will help in troubleshooting any issues that arise.

Disaster Recovery and Restoring Snapshots​


In the unfortunate event of data loss or a disaster, restoring your data from snapshots is straightforward. Here is a simple guide:

  1. New Cluster Setup: If your original cluster is lost, create a new OpenSearch domain.
  2. Restore Snapshot: Use the OpenSearch API to restore the snapshot from your S3 bucket (a minimal example follows this list).
  3. Cluster Health Check: Once restored, check the health of your cluster and validate that your data is fully recovered.
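The restore call in step 2 can be issued with the same signed-request approach used in the snapshot Lambda above. This is only a sketch; the host, repository, and snapshot names are placeholders.

import boto3
import requests
from requests_aws4auth import AWS4Auth

host = "https://<your-opensearch-domain-endpoint>/"  # include https:// and a trailing /
region = "us-east-1"
repo = "<your-repo-name>"
snapshot = "<snapshot-name>"

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, "es",
                   session_token=credentials.token)

# Restore all indices from the snapshot; pass a JSON body to restore selected indices only
url = host + "_snapshot/" + repo + "/" + snapshot + "/_restore"
r = requests.post(url, auth=awsauth)
print(r.status_code, r.text)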

Conclusion​


Using AWS Lambda and S3 for snapshot management in OpenSearch provides a scalable and cost-effective solution for data backup and recovery. By setting up automated snapshots, you can ensure that your data is consistently backed up without manual intervention. With the additional security and monitoring tools provided by AWS, maintaining the integrity and availability of your OpenSearch data becomes a manageable task.


Explore the various options within AWS and OpenSearch to find the configuration that best fits your environment. And as always, remember to test your setup thoroughly to prevent unexpected issues down the line.


For more tips on OpenSearch, AWS, and other cloud solutions, subscribe to our newsletter and stay up to date with the latest in cloud technology! Ready to take your cloud infrastructure to the next level? Please reach out to us.

Step-by-Step Guide to AWS S3 Cross-Account Replication for Enhanced Business Continuity

· 6 min read

Amazon S3 Cross-Region Replication (CRR) is essential for businesses seeking redundancy, disaster recovery, and compliance across geographical boundaries. It enables automatic, asynchronous replication of objects from one bucket to another in a different AWS region. Whether you're managing a small project or working on an enterprise-level setup, understanding the intricacies of setting up S3 replication between accounts can save time and avoid potential debugging nightmares.


Here's a step-by-step guide to help you through the process.



Step 1: Setting Up the IAM Role for Cross-Region Replication​


To start, you need to create an IAM role that will have permissions in both source and destination accounts for handling replication. Follow these guidelines:


IAM Role Creation:​


  1. Navigate to the IAM section in your AWS console and create a new role.

  2. Establish the following trust relationship so that Amazon S3 and Batch Operations can assume this role:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": [
              "batchoperations.s3.amazonaws.com",
              "s3.amazonaws.com"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }

  3. Add a policy to this role that permits actions related to replication, such as reading objects from the source bucket and writing them to the destination. Here is a sample policy:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "SourceBucketPermissions",
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*",
            "s3:ReplicateObject",
            "s3:ObjectOwnerOverrideToBucketOwner",
            "s3:Replicate*"
          ],
          "Resource": [
            "arn:aws:s3:::<Source-Bucket>/*",
            "arn:aws:s3:::<Source-Bucket>",
            "arn:aws:s3:::<Destination-Bucket>",
            "arn:aws:s3:::<Destination-Bucket>/*"
          ]
        }
      ]
    }

Step 2: Source and Destination Bucket Configuration​


After setting up the IAM role, the next step is configuring your S3 buckets.


Source Bucket:​


  1. Enable Bucket Versioning - Replication requires versioning to be enabled. You can activate this in the Properties tab of the bucket.
  2. ACL Configuration - Ensure that ACLs are disabled for smoother replication operations.
  3. Bucket Policy - Update the bucket policy to grant the IAM role access to the source bucket for replication purposes.

Destination Bucket:​


  1. Similar to the source bucket, enable versioning and disable ACLs.
  2. Encryption - For simplicity, it is recommended to use SSE-S3 encryption rather than a customer managed key (CMK). Customer managed keys can lead to issues when replicating encrypted objects between accounts.
  3. Permissions - Add the IAM role to the bucket policy to allow object replication and ownership transfer.

Step 3: Creating the Replication Rule​


Once the IAM role and bucket configurations are set, you can create the replication rule in your source bucket as shown below (a programmatic sketch follows the console steps):


Cross-Region Replication


  1. Go to the Management tab in the source bucket and click on Create Replication Rule.
  2. Naming - Provide a unique name for the replication rule (e.g., SourceToDestinationReplication).
  3. Scope - Define the scope of replication where you can choose to replicate all objects or only a subset based on prefix or tags.
  4. Destination Setup - Specify the destination bucket in another AWS account, and input the account ID and bucket name.
  5. Role Assignment - Link the IAM role created in Step 1 to this replication rule.
  6. Encryption - Disable the option to replicate objects encrypted with AWS KMS to avoid encryption-related issues.
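The same rule can also be created programmatically. Below is a hedged boto3 sketch, assuming versioning is already enabled on both buckets; the bucket names, account ID, and role ARN are placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="<Source-Bucket>",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::<SOURCE_ACCOUNT_ID>:role/<replication-role>",
        "Rules": [
            {
                "ID": "SourceToDestinationReplication",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # an empty filter replicates all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::<Destination-Bucket>",
                    "Account": "<ACCOUNT_B_NO>",
                    # Hand object ownership to the destination account
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)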

Step 4: Testing the Setup​


Now that you have created the replication rule, it is time to test it by uploading an object to the source bucket and checking if it replicates to the destination bucket.


  1. Upload an Object - Add a file to the source bucket.
  2. Wait for a few minutes (replication can take up to 15 minutes) and check the destination bucket to verify that the object is successfully replicated.
  3. Monitor the replication status in the AWS console for errors.

Step 5: Monitoring and Troubleshooting Replication​


To ensure your replication runs smoothly, it is important to monitor its performance and resolve any issues as they arise.


Alarms


Monitoring​


  1. Use CloudWatch metrics to set up custom alarms that notify you if replication fails.
  2. Failed Replication Events - Set alarms to trigger if the number of failed replications exceeds a threshold. You can configure SNS notifications to receive alerts for failed replications.
  3. OK Status Alarms - As a best practice, configure OK status alarms to confirm that replication has resumed successfully after any issues.
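As a sketch of the failed-replication alarm described above (this assumes S3 Replication metrics are enabled on the rule; the bucket names, rule ID, threshold, and SNS topic are illustrative):

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="S3ReplicationFailures",
    Namespace="AWS/S3",
    MetricName="OperationsFailedReplication",
    Dimensions=[
        {"Name": "SourceBucket", "Value": "<Source-Bucket>"},
        {"Name": "DestinationBucket", "Value": "<Destination-Bucket>"},
        {"Name": "RuleId", "Value": "SourceToDestinationReplication"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    # SNS topic ARN is a placeholder
    AlarmActions=["arn:aws:sns:us-east-1:<ACCOUNT_ID>:replication-alerts"],
)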

Common Troubleshooting Tips​


  • Ensure that encryption settings are aligned across both buckets (SSE-S3 is recommended).
  • Double-check IAM role policies and permissions for any missing actions.
  • Use CloudWatch metrics to identify patterns of failure or latency in replication operations.

Additional Considerations for Enterprise Setups​


For larger, enterprise-level deployments, there are additional considerations:


  • Batch Operations - If replicating large volumes of objects, consider setting up batch operations to manage replication tasks efficiently.
  • Cost Management - Keep an eye on data transfer and storage costs, especially when replicating across regions.
  • Compliance and Governance - Ensure your replication setup adheres to your organization's compliance and data governance policies.

Please reach out to us for your enterprise cloud requirements


Conclusion​


Setting up cross-region replication is a powerful tool for ensuring your data is distributed across multiple regions, enhancing durability and compliance. By following this detailed guide, you can avoid the common pitfalls and ensure a seamless replication process between S3 buckets across AWS accounts. Regular monitoring and fine-tuning your setup will keep your data transfer efficient and error-free.


Ready to take your cloud infrastructure to the next level? Please reach out to us via our Contact Us page.


Want to Learn More? Check out our other AWS tutorials and don't forget to subscribe to our newsletter for the latest cloud management tips and best practices.



Call to Action​


Choosing the right platform depends on your organization's needs. For more insights, subscribe to our newsletter for tips on cloud computing and the latest trends in technology, or follow our video series on cloud comparisons.


Interested in having your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.

How to Set Up AWS GuardDuty Malware/Virus Protection for S3

· 11 min read

In today's digital landscape, protecting your data from malware and other malicious threats is essential to maintaining the integrity of your organization's infrastructure and reputation. AWS GuardDuty has introduced a new feature specifically designed to detect and protect against malware in Amazon S3. In this blog, we will walk you through how to set up and use this feature to safeguard your S3 objects.


Why Use GuardDuty Malware Protection for S3?​


Traditionally, malware protection for AWS services was managed using third-party tools or custom applications. While tools like SonarQube and Cloud Storage Security were effective, there was a need for a more integrated solution directly within AWS. GuardDuty's new malware protection feature for S3 fills this gap by providing comprehensive protection that integrates seamlessly into your AWS environment.


Benefits of AWS GuardDuty Malware Protection for S3​


  • Integrated Threat Detection: Directly built into AWS, it eliminates the need for third-party malware protection tools.
  • Automated Threat Response: Automatically scans new objects uploaded to S3 and flags any suspicious files.
  • Centralized Management: Allows for organization-wide deployment and control, reducing the risk of human error.
  • Cost-Effective: Currently offers a 12-month free tier for scanning new files, encouraging users to adopt the service.

Getting Started with GuardDuty Malware Protection​



Step 1: Enable GuardDuty in Your AWS Account​


Enable GuardDuty


The first step is to log into your AWS account and navigate to the GuardDuty service. Since GuardDuty is region-specific, you will need to enable it for each region where you want protection. Follow these steps to enable the service:

  1. Go to the GuardDuty dashboard in your AWS console.

Enable GuardDuty


  2. Click on Enable GuardDuty.
  3. Choose the default settings or customize the permissions if needed.
  4. You will be offered a 30-day free trial to explore the service.

Step 2: Setting Up an Organization-Wide Administrator​


To manage GuardDuty across multiple accounts, you can set up a delegated administrator. This setup allows you to manage malware protection centrally, ensuring that any new S3 buckets created across your organization are automatically protected.

  1. Navigate to GuardDuty Settings.

Delegated Administrator


  2. Assign your AWS account as the Delegated Administrator.
  3. Ensure that all GuardDuty settings apply across the organization for a centralized approach.

Step 3: Configure EventBridge for Alerts (Optional)


When a threat is detected, you may not always have someone actively monitoring the AWS console. To ensure you receive notifications, configure AWS EventBridge to send alerts to email, SMS, Slack, or other communication tools.

  1. Open the EventBridge dashboard in your AWS console.
  2. Set up a rule to trigger alerts based on GuardDuty findings.
  3. Link this rule to your preferred notification system, such as email or a messaging app.
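A minimal sketch of such a rule, wired to an existing SNS topic, is shown below; the rule name and topic ARN are placeholders, and the topic's access policy must allow events.amazonaws.com to publish to it.

import json

import boto3

events = boto3.client("events")

# Match all GuardDuty findings; narrow the pattern with severity or type filters as needed
events.put_rule(
    Name="guardduty-findings-to-sns",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-findings-to-sns",
    Targets=[{"Id": "sns-alerts", "Arn": "arn:aws:sns:us-east-1:<ACCOUNT_ID>:guardduty-alerts"}],
)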

Here are the detailed steps for Step 4 and additional methods for ensuring malware protection when objects are uploaded to Amazon S3.


Step 4: Enable S3 Malware Protection Using AWS GuardDuty​


Enabling malware protection in AWS S3 using GuardDuty involves configuring settings that automatically scan for and identify malicious files. Follow these steps to set up S3 malware protection effectively:


Enable S3 Malware


  1. Log in to AWS Console: Open the AWS Management Console and sign in with your administrator account.

  2. Navigate to GuardDuty: In the AWS Management Console, go to the Services menu and select GuardDuty under the Security, Identity, & Compliance section.

  3. Enable GuardDuty (if not already enabled):

    1. If GuardDuty is not already enabled, click on the Enable GuardDuty button.
    2. You will see a 30-day free trial offered by AWS. You can start with the trial or proceed with your existing plan.

    Note: S3 Malware Protection is region-specific, so the service has to be enabled in each region, and S3 malware scanning can only scan buckets in the same region, not in another region.

  4. Access the GuardDuty Settings:

    1. Once GuardDuty is enabled, click on Settings in the GuardDuty dashboard.
    2. Look for the section that mentions S3 Protection or Malware Protection for S3.
  5. Enable Malware Protection for S3 Buckets:

    1. Click on Enable S3 Malware Protection.
    2. You may need to specify the S3 buckets you want to protect. Select the bucket(s) where you want to enable malware protection.
    3. Ensure the S3 bucket you are protecting is in the same AWS region as the GuardDuty service.
  6. Create S3 Malware scanning role

    1. Create a role with a policy similar to the following:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowManagedRuleToSendS3EventsToGuardDuty",
          "Effect": "Allow",
          "Action": [
            "events:PutRule",
            "events:DeleteRule",
            "events:PutTargets",
            "events:RemoveTargets"
          ],
          "Resource": [
            "arn:aws:events:us-east-1:<account-number>:rule/DO-NOT-DELETE-AmazonGuardDutyMalwareProtectionS3*"
          ],
          "Condition": {
            "StringLike": {
              "events:ManagedBy": "malware-protection-plan.guardduty.amazonaws.com"
            }
          }
        },
        {
          "Sid": "AllowGuardDutyToMonitorEventBridgeManagedRule",
          "Effect": "Allow",
          "Action": [
            "events:DescribeRule",
            "events:ListTargetsByRule"
          ],
          "Resource": [
            "arn:aws:events:us-east-1:<account-number>:rule/DO-NOT-DELETE-AmazonGuardDutyMalwareProtectionS3*"
          ]
        },
        {
          "Sid": "AllowPostScanTag",
          "Effect": "Allow",
          "Action": [
            "s3:PutObjectTagging",
            "s3:GetObjectTagging",
            "s3:PutObjectVersionTagging",
            "s3:GetObjectVersionTagging"
          ],
          "Resource": [
            "arn:aws:s3:::<bucket-name>/*"
          ]
        },
        {
          "Sid": "AllowEnableS3EventBridgeEvents",
          "Effect": "Allow",
          "Action": [
            "s3:PutBucketNotification",
            "s3:GetBucketNotification"
          ],
          "Resource": [
            "arn:aws:s3:::<bucket-name>"
          ]
        },
        {
          "Sid": "AllowPutValidationObject",
          "Effect": "Allow",
          "Action": [
            "s3:PutObject"
          ],
          "Resource": [
            "arn:aws:s3:::<bucket-name>/malware-protection-resource-validation-object"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::<bucket-name>"
          ]
        },
        {
          "Sid": "AllowMalwareScan",
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
          ],
          "Resource": [
            "arn:aws:s3:::<bucket-name>/*"
          ]
        },
        {
          "Sid": "AllowDecryptForMalwareScan",
          "Effect": "Allow",
          "Action": [
            "kms:GenerateDataKey",
            "kms:Decrypt"
          ],
          "Resource": "arn:aws:kms:us-east-1:<account-number>:key/*",
          "Condition": {
            "StringLike": {
              "kms:ViaService": "s3.*.amazonaws.com"
            }
          }
        }
      ]
    }
    2. For each new bucket that needs to be scanned, add the bucket name to the policy following the pattern above.
    3. The role trust policy should be as follows:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "malware-protection-plan.guardduty.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  7. Set Up Tag-Based Access Control (Optional): To enable more detailed control over your S3 objects, configure tag-based access controls that will help you categorize and manage the scanning process.

  8. Review and Confirm the Settings:

    1. Confirm your settings by reviewing all the configurations.
    2. Click Save Changes to apply the settings.
  9. Testing the Setup:

    1. Upload a test file to your S3 bucket to see if the GuardDuty malware protection detects it.
    2. Verify that the scan results are displayed in the GuardDuty Findings dashboard, which will confirm the configuration is active.
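If you prefer to script steps 5 and 6, recent boto3 versions expose the Malware Protection for S3 APIs. The sketch below is an assumption-laden outline, not the console's exact behavior: the role ARN and bucket name are placeholders, and the parameter shapes should be checked against your boto3 version.

import boto3

guardduty = boto3.client("guardduty")

# Role is the S3 malware scanning role from step 6; bucket name is a placeholder
response = guardduty.create_malware_protection_plan(
    Role="arn:aws:iam::<account-number>:role/<s3-malware-scanning-role>",
    ProtectedResource={
        "S3Bucket": {
            "BucketName": "<bucket-name>",
            # Optionally limit scanning to specific prefixes, e.g. ["uploads/"]
        }
    },
    # Tag scanned objects with the scan result
    Actions={"Tagging": {"Status": "ENABLED"}},
)
print(response["MalwareProtectionPlanId"])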

Step 5: Test the Setup with a Sample File​


Testing your setup is crucial to ensure that GuardDuty is actively scanning and detecting malware. You can use a harmless test file designed to simulate malware to see how GuardDuty responds.


EICAR


  1. Upload a benign test file from the EICAR organization, specifically designed for antivirus testing.
  2. GuardDuty should detect this file and classify it as a threat.
  3. Check the GuardDuty findings to confirm that the detection process is working as expected.
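A quick way to run this test programmatically is to upload the standard EICAR test string to the protected bucket and then inspect the tags GuardDuty applies after scanning; the bucket name below is a placeholder.

import boto3

# Standard, harmless EICAR antivirus test string (publicly documented by EICAR)
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

s3 = boto3.client("s3")
s3.put_object(Bucket="<protected-bucket-name>", Key="eicar-test.txt", Body=EICAR.encode())

# After the scan completes, GuardDuty tags the object with the scan result
tags = s3.get_object_tagging(Bucket="<protected-bucket-name>", Key="eicar-test.txt")
print(tags["TagSet"])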

Step 6: Review GuardDuty Findings​


GuardDuty Findings


The GuardDuty dashboard provides a clear view of all security findings, including details about detected threats. This is where you can monitor the state of your S3 objects and identify any security risks.

  1. Navigate to the Findings section in GuardDuty.
  2. Review each finding to understand the severity and nature of the threat.
  3. Use the information to make informed decisions about your security posture.

Step 7: Continuous Monitoring and Alerting​


To ensure that you always stay on top of potential threats, configure continuous monitoring and alerts:

  1. Set up rules in EventBridge to send notifications whenever a new threat is detected.
  2. Export findings to an S3 bucket or a centralized monitoring system if needed.
  3. Regularly review your GuardDuty setup to incorporate any new AWS security features.

Best Practices for S3 Malware Protection​


  • Enable GuardDuty across all regions: Malware protection needs to be enabled in every region where you store S3 data to avoid vulnerabilities.
  • Use tag-based access controls: This allows you to apply security policies more precisely to different S3 objects.
  • Centralize management: Use a delegated administrator account to manage all GuardDuty settings for better efficiency and control.
  • Test regularly: Periodically upload test files to ensure that your malware detection setup is functioning correctly.

Additional Methods for Ensuring Malware Protection on S3


Apart from using AWS GuardDuty, there are other methods to ensure that objects uploaded to S3 are scanned for malware and viruses to protect your infrastructure.


Method 1: Use AWS Lambda with Antivirus Scanning​


  1. Set Up AWS Lambda Function:

    • Create an AWS Lambda function that triggers automatically whenever a new object is uploaded to the S3 bucket.
    • Configure the Lambda function to perform antivirus scanning using an open-source antivirus tool like ClamAV.
  2. Create an S3 Trigger:

    • Set up an S3 event trigger to call the Lambda function whenever a file is uploaded to the S3 bucket.
  3. Configure Antivirus Scanning Logic:

    • The Lambda function should download the object, run the ClamAV scan, and determine if the file is infected (a minimal sketch follows this list).
    • If a threat is detected, the Lambda function can delete the file or quarantine it for further analysis.
  4. Notify the Administrator:

    • Use AWS Simple Notification Service (SNS) to send an alert to the system administrator whenever malware is detected.
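Here is a hedged sketch of the scanning Lambda described in Method 1, assuming the ClamAV binary and virus definitions are packaged in a Lambda layer or container image so that clamscan is on the PATH; the SNS topic ARN is a placeholder.

import os
import subprocess
import urllib.parse

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Placeholder SNS topic for administrator alerts
ALERT_TOPIC_ARN = os.environ.get("ALERT_TOPIC_ARN", "arn:aws:sns:us-east-1:<ACCOUNT_ID>:malware-alerts")

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Download the new object to the Lambda scratch space
        local_path = "/tmp/" + os.path.basename(key)
        s3.download_file(bucket, key, local_path)

        # clamscan exits with 0 when the file is clean and 1 when a virus is found
        result = subprocess.run(["clamscan", "--no-summary", local_path])
        if result.returncode == 1:
            # Quarantine by deleting (or copy to a quarantine bucket instead)
            s3.delete_object(Bucket=bucket, Key=key)
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject="Malware detected in S3 upload",
                Message=f"Infected object removed: s3://{bucket}/{key}",
            )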

Method 2: Integrate with Third-Party Security Tools​


  1. Choose a Third-Party Security Tool:

    • Use third-party services like Cloud Storage Security or Trend Micro Cloud One that specialize in malware detection and data protection.
  2. Set Up Integration with S3:

    • Configure the third-party service to automatically scan new objects uploaded to your S3 bucket.
    • Follow the provider's specific guidelines to integrate the service with your AWS account.
  3. Monitor and Manage Alerts:

    • Set up alerts for any suspicious activity or identified threats using the third-party tool's notification features.
    • Maintain a security dashboard to track malware detection events.

Method 3: Implement an Intrusion Detection System (IDS)​


  1. Deploy an IDS Tool:

    • Use intrusion detection systems like AWS Network Firewall or Snort to monitor traffic and identify malicious activities targeting your cloud environment.
  2. Monitor S3 Traffic:

    • Configure the IDS to inspect traffic to and from your S3 buckets for signs of malware or unauthorized data transfer.
  3. Automate Responses:

    • Automate responses to potential threats detected by the IDS, such as blocking malicious IP addresses or disabling compromised user accounts.

Summary of Methods

| Method | Description | Tools Needed |
| --- | --- | --- |
| AWS GuardDuty | Built-in malware detection for S3 using GuardDuty. | AWS GuardDuty, S3, IAM |
| AWS Lambda with ClamAV | Lambda triggers antivirus scans on new S3 uploads. | AWS Lambda, S3, ClamAV, SNS |
| Third-Party Security Tools | Uses external tools for malware protection. | Cloud Storage Security, Trend Micro, AWS S3 |
| Intrusion Detection System | Monitors traffic and detects threats in real-time. | AWS Network Firewall, Snort, AWS CloudTrail |

These methods provide a multi-layered approach to protect your S3 buckets from malware threats, ensuring the safety of your data and maintaining your organization's security posture.


Conclusion


AWS GuardDuty's malware protection for S3 is a powerful tool to enhance your cloud security. Its seamless integration with AWS services, combined with automated threat detection and centralized management, makes it an essential part of any organization's security strategy. Set up GuardDuty today and ensure that your S3 buckets are protected from potential malware threats.


🔚 Call to Action


Choosing the right platform depends on your organization's needs. For more insights, subscribe to our newsletter for tips on cloud computing and the latest trends in technology, or follow our video series on cloud comparisons.


Interested in having your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.


💬 Comment below:
Which tool is your favorite? What do you want us to review next?

One Bucket, One Key: Simplify Your Cloud Storage!

· 4 min read

In today's cloud-centric environment, data security is more crucial than ever. One of the common challenges faced by organizations is ensuring that sensitive data stored in AWS S3 buckets is accessible only under strict conditions. This blog post delves into a hands-on session where we set up an AWS Key Management Service (KMS) policy to restrict access to a single S3 bucket using a customer's own encryption key.

Introduction to AWS S3 and KMS​

Amazon Web Services (AWS) offers robust solutions for storage and security. S3 (Simple Storage Service) provides scalable object storage, and KMS offers managed creation and control of encryption keys.

Scenario Overview​

The need: A customer wants to use their own encryption key and restrict its usage to a single S3 bucket. This ensures that no other buckets can access the key.

Setting Up the KMS Key​

Step 1: Creating the Key​

Creating the Key

  • Navigate to the Key Management Service: Start by opening the AWS Management Console and selecting KMS.
  • Create a new key: Choose the appropriate options for your key. For simplicity, skip tagging and advanced options during this tutorial.

Step 2: Configuring Key Policies​

Configuring Key Policies

  • Permission settings: Initially, you might be tempted to apply broad permissions. However, to enhance security, restrict the key’s usage to a specific IAM user and apply a policy that denies all other requests.

Crafting a Bucket Policy​

Step 1: Creating the Bucket​

Creating the Bucket

  • Unique bucket name: Remember, S3 bucket names need to be globally unique. Create the bucket intended for the exclusive use of the KMS key.
  • Disable bucket versioning: If not required, keep this setting disabled to manage storage costs.

Step 2: Policy Configuration​

Policy Configuration

  • Deny other buckets: The crucial part of this setup involves crafting a bucket policy that uses a "Deny" statement. This statement should specify that if the bucket name doesn’t match your specific bucket, access should be denied.
  • Set conditions: Use conditions to enforce that the KMS key can only encrypt/decrypt objects when the correct S3 bucket is specified.
{
  "Version": "2012-10-17",
  "Id": "key-consolepolicy-3",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-number>:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Deny access to key if the request is not for a yt-test-bucket",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::yt-s3-bucket"
        }
      }
    }
  ]
}

Testing the Configuration​

  • Validate with another bucket: Create an additional S3 bucket and try to use the KMS key. The attempt should fail, confirming that your policy works.
  • Verify with the correct bucket: Finally, test the key with the correct bucket to ensure that operations like uploading and downloading are seamless.
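Both checks can be scripted with boto3: the upload to the allowed bucket should succeed, while the same call against any other bucket should fail with an access-denied error from KMS. The key ID and the second bucket name are placeholders; yt-s3-bucket is the bucket referenced in the key policy above.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
KEY_ID = "<your-kms-key-id-or-arn>"

def upload(bucket):
    # Force SSE-KMS with the restricted customer managed key
    s3.put_object(
        Bucket=bucket,
        Key="test.txt",
        Body=b"hello",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=KEY_ID,
    )

# Allowed bucket from the key policy condition: should succeed
upload("yt-s3-bucket")

# Any other bucket: the Deny statement in the key policy should block the request
try:
    upload("<some-other-bucket>")
except ClientError as err:
    print("Denied as expected:", err.response["Error"]["Code"])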

Conclusion​

This setup not only strengthens your security posture but also adheres to best practices of least privilege by limiting how and where the encryption key can be used. Implementing such precise controls is critical for managing sensitive data in the cloud.


🔚 Call to Action

Choosing the right platform depends on your organization's needs. For more insights, subscribe to our newsletter for tips on cloud computing and the latest trends in technology, or follow our video series on cloud comparisons.

Interested in having your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.

💬 Comment below:
How is your experience with S3 and KMS encryption? What do you want us to review next?

Mastering Data Transfer Times for Cloud Migration

· 7 min read

First, let's understand what cloud data transfer is and its significance. In today's digital age, many applications are transitioning to the cloud, often resulting in hybrid models where components may reside on-premises or in cloud environments. This shift necessitates robust data transfer capabilities to ensure seamless communication between on-premises and cloud components.

Businesses are moving towards cloud services not because they enjoy managing data centers, but because they aim to run their operations more efficiently. Cloud providers specialize in managing data center operations, allowing businesses to focus on their core activities. This fundamental shift underlines the need for ongoing data transfer from on-premises infrastructure to cloud environments.

To give you a clearer picture, we present an indicative reference architecture focusing on Azure (though similar principles apply to AWS and Google Cloud). This architecture includes various components such as virtual networks, subnets, load balancers, applications, databases, and peripheral services like Azure Monitor and API Management. This setup exemplifies a typical scenario for a hybrid application requiring data transfer between cloud and on-premises environments.

Indicative Reference Architecture

Calculating Data Transfer Times

A key aspect of cloud migration is understanding how to efficiently transfer application data. We highlight useful tools and calculators that have aided numerous cloud migrations. For example, the decision between using AWS Snowball, Azure Data Box, or internet transfer is a common dilemma. These tools help estimate the time required to transfer data volumes across different bandwidths, offering insights into the most cost-effective and efficient strategies. The following calculators can be used to estimate data transfer times and costs:

Ref: https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#time

Ref: https://learn.microsoft.com/en-us/azure/storage/common/storage-choose-data-transfer-solution

The following chart from the Google documentation relates transfer time to data size and network bandwidth:

Calculating Data Transfer Times
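As a rough back-of-the-envelope version of the same relationship the chart shows, transfer time is just data size divided by effective bandwidth. The 80% utilization factor below is an assumption; real transfers also contend with protocol overhead and other traffic.

# Rough transfer-time estimate: size / effective bandwidth
def transfer_days(data_tb, bandwidth_mbps, utilization=0.8):
    data_bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = data_bits / (bandwidth_mbps * 1e6 * utilization)
    return seconds / 86400                              # seconds -> days

# Example: 100 TB over a 1 Gbps link at 80% utilization is roughly 11.6 days
print(f"{transfer_days(100, 1000):.1f} days")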

Cost-Effective Data Transfer Strategies

Simplification is the name of the game when it comes to data transfer. Utilizing simple commands and tools like Azure's azcopy, AWS S3 sync, and Google's equivalent services can significantly streamline the process. Moreover, working closely with the networking team to schedule transfers during off-peak hours and chunking data to manage bandwidth utilization are strategies that can minimize disruption and maximize efficiency.

  • Leverage SDKs and APIs where applicable
  • Work with the organization's network team
  • Try to split data transfers and leverage resumable transfers
  • Compress and optimize the data
  • Use Content Delivery Networks (CDNs), caching, and regions closer to the data
  • Leverage cloud provider products to their strengths and do your own analysis

Deep Dive Comparison

We compare data transfer services across AWS, Azure, and Google Cloud, covering direct connectivity options, transfer acceleration mechanisms, physical data transfer appliances, and services tailored for large data movements. Each cloud provider offers unique solutions, from AWS's Direct Connect and Snowball to Azure's ExpressRoute and Data Box, and Google Cloud's Interconnect and Transfer Appliance.

| AWS | Azure | GCP |
| --- | --- | --- |
| AWS Direct Connect | Azure ExpressRoute | Cloud Interconnect |
| Provides a dedicated network connection from on-premises to AWS. | Offers private connections between Azure data centers and infrastructure. | Provides direct physical connections to Google Cloud. |
| Amazon S3 Transfer Acceleration | Azure Blob Storage Transfer | Google Transfer Appliance |
| Speeds up the transfer of files to S3 using optimized network protocols. | Accelerates data transfer to Blob storage using Azure's global network. | A rackable high-capacity storage server for large data transfers. |
| AWS Snowball/Snowmobile | Azure Data Box | Google Transfer Appliance |
| Physical devices for transporting large volumes of data into and out of AWS. | Devices to transfer large amounts of data into Azure Storage. | A high-capacity storage device that can transfer and securely ship data to a Google upload facility. The service is available in two configurations: 100TB or 480TB of raw storage capacity, or up to 200TB or 1PB compressed. |
| AWS Storage Gateway | Azure Import/Export | Google Cloud Storage Transfer Service |
| Connects on-premises software applications with cloud-based storage. | Service for importing/exporting large amounts of data using hard drives and SSDs. | Provides similar, though not identical, services such as DataPrep. |
| AWS DataSync | Azure File Sync | Google Cloud Storage Transfer Service |
| Automates data transfer between on-premises storage and AWS services. | Synchronizes files across Azure File shares and on-premises servers. | Automates data synchronization to and from GCP Storage and external sources. |
| CloudEndure | Azure Site Recovery | Migrate for Compute Engine |
| AWS CloudEndure works with both Linux and Windows VMs hosted on hypervisors, including VMware, Hyper-V, and KVM. CloudEndure also supports workloads running on physical servers as well as cloud-based workloads running in AWS, Azure, Google Cloud Platform, and other environments. | Helps your business keep doing business, even during major IT outages. Azure Site Recovery offers ease of deployment, cost effectiveness, and dependability. | Lift and shift on-premises apps to GCP. |

Conclusion

As we wrap up our exploration of data transfer speeds and the corresponding services provided by AWS, Azure, and GCP, it should be clear which options to consider for a given data size, and that each platform offers a wealth of options designed to meet the diverse needs of businesses moving and managing big data. Whether you require direct network connectivity, physical data transport devices, or services that synchronize your files across cloud environments, there is a solution tailored to your specific requirements.

Choosing the right service hinges on various factors such as data volume, transfer frequency, security needs, and the level of integration required with your existing infrastructure. AWS shines with its comprehensive services like Direct Connect and Snowball for massive data migration tasks. Azure's strength lies in its enterprise-focused offerings like ExpressRoute and Data Box, which ensure seamless integration with existing systems. Meanwhile, GCP stands out with its Interconnect and Transfer Appliance services, catering to those deeply invested in analytics and cloud-native applications.

Each cloud provider has clearly put significant thought into how to alleviate the complexities of big data transfers. By understanding the subtleties of each service, organizations can make informed decisions that align with their strategic goals, ensuring a smooth and efficient transition to the cloud.

As the cloud ecosystem continues to evolve, the tools and services for data transfer are bound to expand and innovate further. Businesses should stay informed of these developments to continue leveraging the best that cloud technology has to offer. In conclusion, the journey of selecting the right data transfer service is as critical as the data itself, paving the way for a future where cloud-driven solutions are the cornerstones of business operations.

Call to Action​

Choosing the right platform depends on your organization's needs. For more insights, subscribe to our newsletter for tips on cloud computing and the latest trends in technology, or follow our video series on cloud comparisons.

Interested in having your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.

How do I deploy ECS Task in a different account using CodePipeline that uses CodeDeploy

· 10 min read

(Account 1) Create a customer-managed AWS KMS key that grants usage permissions to account 1's CodePipeline service role and account 2​


  1. In account 1, open the AWS KMS console.
  2. In the navigation pane, choose Customer managed keys.
  3. Choose Create key. Then, choose Symmetric.
    Note: In the Advanced options section, leave the origin as KMS.
  4. For Alias, enter a name for your key.
  5. (Optional) Add tags based on your use case. Then, choose Next.
  6. On the Define key administrative permissions page, for Key administrators, choose your AWS Identity and Access Management (IAM) user. Also, add any other users or groups that you want to serve as administrators for the key. Then, choose Next.
  7. On the Define key usage permissions page, for This account, add the IAM identities that you want to have access to the key. For example: The CodePipeline service role.
  8. In the Other AWS accounts section, choose Add another AWS account. Then, enter the Amazon Resource Name (ARN) of the IAM role in account 2.
  9. Choose Next. Then, choose Finish.
  10. In the Customer managed keys section, choose the key that you just created. Then, copy the key's ARN.

Important: You must have the AWS KMS key's ARN when you update your pipeline and configure your IAM policies.


(Account 1) Create an Amazon S3 bucket with a bucket policy that grants account 2 access to the bucket​


  1. In account 1, open the Amazon S3 console.
  2. Choose an existing Amazon S3 bucket or create a new S3 bucket to use as the ArtifactStore for CodePipeline.
  3. On the Amazon S3 details page for your bucket, choose Permissions.
  4. Choose Bucket Policy.
  5. In the bucket policy editor, enter the following policy:

Important: Replace current-account-pipeline-bucket with the SourceArtifact bucket name for your pipeline, and replace <<Account 1>> and <<Account2>> with the account 1 and account 2 account numbers.



{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.us-east-1.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::current-account-pipeline-bucket",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "<<Account 1>>"
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:logs:us-east-1:<<Account 1>>:*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.us-east-1.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::current-account-pipeline-bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "<<Account 1>>",
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<<Account2>>:root"
      },
      "Action": [
        "s3:Get*",
        "s3:Put*"
      ],
      "Resource": "arn:aws:s3:::current-account-pipeline-bucket/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<<Account2>>:root"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::current-account-pipeline-bucket"
    }
  ]
}

  6. Choose Save.

(Account 2) Create a cross-account IAM role​


Create an IAM policy that allows the following:
a. The pipeline in account 1 to assume the cross-account IAM role in account 2.
b. CodePipeline and CodeDeploy API actions.
c. Amazon S3 API actions related to the SourceArtifact.
1. In account 2, open the IAM console.
2. In the navigation pane, choose Policies. Then, choose Create policy.
3. Choose the JSON tab. Then, enter the following policy into the JSON editor:


Important: Replace current-account-pipeline-bucket with your pipeline's artifact store bucket name.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:DeleteObjectVersion",
        "s3:*Object",
        "s3:CreateJob",
        "s3:Put*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::current-account-pipeline-bucket/*"
      ]
    },
    {
      "Sid": "KMSAccess",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:Decrypt",
        "kms:CreateGrant",
        "kms:ReEncrypt*",
        "kms:Encrypt"
      ],
      "Resource": [
        "arn:aws:kms:us-east-1:<<Account 1>>:key/f031942c-5c7b-4e9f-9215-56be4cddab51"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::current-account-pipeline-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}

4. Choose Review policy.
5. For Name, enter a name for the policy.
6. Choose Create policy.


Create a second IAM policy that allows AWS KMS API actions​


1. In account 2, open the IAM console.
2. In the navigation pane, choose Policies. Then, choose Create policy.
3. Choose the JSON tab. Then, enter the following policy into the JSON editor:
Important: Replace arn:aws:kms:REGION:ACCOUNT_A_NO:key/key-id with your AWS KMS key's ARN that you copied earlier.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "KMSAccess",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:Decrypt",
        "kms:CreateGrant",
        "kms:ReEncrypt*",
        "kms:Encrypt"
      ],
      "Resource": [
        "arn:aws:kms:us-east-1:<<Account 1>>:key/f031942c-5c7b-4e9f-9215-56be4cddab51"
      ]
    }
  ]
}

4. Choose Review policy.
5. For Name, enter a name for the policy.
6. Choose Create policy.


Create the cross-account IAM role using the policies that you created​


1. In account 2, open the IAM console.
2. In the navigation pane, choose Roles.
3. Choose Create role.
4. Choose Another AWS account.
5. For Account ID, enter the account 1 account ID.
6. Choose Next: Permissions. Then, complete the steps to create the IAM role.
7. Attach the cross-account role policy and KMS key policy to the role that you created. For instructions, see Adding and removing IAM identity permissions.
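
As a CLI alternative to the console steps above, here is a minimal sketch. It assumes the two policies were created as CrossAccountPipelinePolicy and CrossAccountKmsPolicy and that the role is named CrossAccount_Pipeline_Role; all three names are examples:

# Run in account 2. trust-policy.json must allow account 1 to assume the role, for example:
# { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow",
#   "Principal": { "AWS": "arn:aws:iam::<<Account 1>>:root" },
#   "Action": "sts:AssumeRole" } ] }
aws iam create-role \
  --role-name CrossAccount_Pipeline_Role \
  --assume-role-policy-document file://trust-policy.json

# Attach the two customer managed policies created earlier (example names)
aws iam attach-role-policy \
  --role-name CrossAccount_Pipeline_Role \
  --policy-arn arn:aws:iam::ACCOUNT_B_NO:policy/CrossAccountPipelinePolicy
aws iam attach-role-policy \
  --role-name CrossAccount_Pipeline_Role \
  --policy-arn arn:aws:iam::ACCOUNT_B_NO:policy/CrossAccountKmsPolicy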


(Account 1) Add the AssumeRole permission to the account 1 CodePipeline service role to allow it to assume the cross-account role in account 2​


1. In account 1, open the IAM console.
2. In the navigation pane, choose Roles.
3. Choose the IAM service role that you're using for CodePipeline.
4. Choose Add inline policy.
5. Choose the JSON tab. Then, enter the following policy into the JSON editor:


Important: Replace ACCOUNT_B_NO with the account 2 account number.​

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::ACCOUNT_B_NO:role/*"
      ]
    }
  ]
}

6. Choose Review policy, and then create the policy.
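
The same inline policy can be attached from the AWS CLI. This sketch assumes example role and policy names (CodePipeline-Service-Role and AssumeCrossAccountRole) and that the JSON above is saved as assume-cross-account.json:

# Run in account 1; add the AssumeRole statement as an inline policy on the CodePipeline service role
aws iam put-role-policy \
  --role-name CodePipeline-Service-Role \
  --policy-name AssumeCrossAccountRole \
  --policy-document file://assume-cross-account.json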


(Account 2) Create a service role for CodeDeploy (and, if you deploy to EC2, an EC2/Auto Scaling service role) that includes the required permissions for the services deployed by the stack​



1. In account 2, open the IAM console.

2. In the navigation pane, choose Roles.

3. Create a role for AWS CloudFormation to use when launching services on your behalf.

4. Apply permissions to your role based on your use case.


Important: Make sure that your trust policy allows resources in account 1 to access the services that are deployed by the stack.
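
A minimal sketch of creating the CodeDeploy service role from the CLI; the role name is an example, and for ECS deployments you would attach AWSCodeDeployRoleForECS instead of AWSCodeDeployRole:

# Run in account 2. codedeploy-trust.json allows CodeDeploy to assume the role, for example:
# { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow",
#   "Principal": { "Service": "codedeploy.amazonaws.com" },
#   "Action": "sts:AssumeRole" } ] }
aws iam create-role \
  --role-name CodeDeployServiceRole \
  --assume-role-policy-document file://codedeploy-trust.json

# Attach the AWS managed policy for CodeDeploy (use AWSCodeDeployRoleForECS for ECS)
aws iam attach-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AWSCodeDeployRole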


(Account 1) Update the CodePipeline configuration to include the resources associated with account 2​


Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, confirm that you're running a recent version of the AWS CLI.


You can't use the CodePipeline console to create or edit a pipeline that uses resources associated with another account. However, you can use the console to create the general structure of the pipeline. Then, you can use the AWS CLI to edit the pipeline and add the resources associated with the other account. Or, you can update a current pipeline with the resources for the new pipeline. For more information, see Create a pipeline in CodePipeline.


1. Get the pipeline JSON structure by running the following AWS CLI command: aws codepipeline get-pipeline --name MyFirstPipeline > pipeline.json

2. In your local pipeline.json file, confirm that the encryptionKey ID under artifactStore contains the ID with the AWS KMS key's ARN. Note: For more information on pipeline structure, see create-pipeline in the AWS CLI Command Reference.

3. The roleArn inside the Deploy action configuration for your pipeline is the cross-account CodePipeline role in account 2. This is important because CodePipeline in account 1 assumes this role to reach the Amazon ECS service and tasks in account 2.

4. Verify that the role is updated for both of the following:

a. The roleArn inside the action configuration JSON structure for your pipeline.

b. The roleArn outside the action configuration JSON structure for your pipeline.

Note: In the following code example, the pipeline-level roleArn is the CodePipeline service role in account 1, and the action-level roleArn is the cross-account role in account 2 that CodePipeline assumes to run the Deploy action.


{
  "pipeline": {
    "name": "svc-pipeline",
    "roleArn": "arn:aws:iam::<<Account 1>>:role/codepipeline-role",
    "artifactStores": {
      "eu-west-2": {
        "type": "S3",
        "location": "codepipeline-eu-west-2-419402304744",
        "encryptionKey": {
          "id": "arn:aws:kms:us-east-1:<<Account 1>>:key/f031942c-5c7b-4e9f-9215-56be4cddab51",
          "type": "KMS"
        }
      },
      "us-east-1": {
        "type": "S3",
        "location": "codepipeline-us-east-1-<<Account 1>>"
      }
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "CodeCommit",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "BranchName": "develop",
              "OutputArtifactFormat": "CODE_ZIP",
              "PollForSourceChanges": "false",
              "RepositoryName": "my-ecs-service"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "inputArtifacts": [],
            "region": "us-east-1"
          }
        ]
      },
      {
        "name": "TST_Develop",
        "actions": [
          {
            "name": "Build-TST",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "ProjectName": "codebuild-project"
            },
            "outputArtifacts": [
              {
                "name": "BuildArtifactTST"
              }
            ],
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "region": "us-east-1"
          },
          {
            "name": "Build-Docker",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "runOrder": 2,
            "configuration": {
              "PrimarySource": "SourceArtifact",
              "ProjectName": "codebuild_docker_prj"
            },
            "outputArtifacts": [
              {
                "name": "ImagedefnArtifactTST"
              }
            ],
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              },
              {
                "name": "BuildArtifactTST"
              }
            ],
            "region": "us-east-1"
          },
          {
            "name": "Deploy",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "ECS",
              "version": "1"
            },
            "runOrder": 3,
            "configuration": {
              "ClusterName": "<<Account2>>-ecs",
              "ServiceName": "<<Account2>>-service"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
              {
                "name": "ImagedefnArtifactTST"
              }
            ],
            "roleArn": "arn:aws:iam::<<Account2>>:role/codepipeline-role",
            "region": "eu-west-2"
          }
        ]
      }
    ],
    "version": 3,
    "pipelineType": "V1"
  },
  "metadata": {
    "pipelineArn": "arn:aws:codepipeline:us-east-1:<<Account 1>>:pipeline",
    "created": "2024-01-25T16:53:19.957000-06:00",
    "updated": "2024-01-25T18:57:07.565000-06:00"
  }
}

5. Remove the metadata configuration from the pipeline.json file. For example, remove the following section:


"metadata": {
  "pipelineArn": "arn:aws:codepipeline:us-east-1:<<Account 1>>:Account1-pipeline",
  "created": "2024-01-25T16:53:19.957000-06:00",
  "updated": "2024-01-25T18:57:07.565000-06:00"
}

Important: To align with proper JSON formatting, remove the comma before the metadata section.


6. To update your current pipeline with the new configuration file, run the following command: aws codepipeline update-pipeline --cli-input-json file://pipeline.json
7. (Optional) To create a new pipeline from the JSON structure instead, run the following command: aws codepipeline create-pipeline --cli-input-json file://pipeline.json


Important: If you create a new pipeline, make sure that you change the pipeline name in your pipeline.json file.​
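
Once the pipeline is updated, you can verify the configuration and trigger a run from the AWS CLI. This is a sketch; the pipeline name matches the example configuration above:

# Confirm that the cross-account roleArn and encryptionKey are present in the pipeline definition
aws codepipeline get-pipeline --name svc-pipeline

# Start an execution and check that the Deploy stage reaches the ECS service in account 2
aws codepipeline start-pipeline-execution --name svc-pipeline
aws codepipeline get-pipeline-state --name svc-pipeline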


Deploy ECS tasks across accounts seamlessly with CodePipeline and CodeDeploy for efficient multi-account management.


Call to Action​


Choosing the right platform depends on your organization's needs. For more insights on cloud computing, practical tips, and the latest trends in technology, subscribe to our newsletter or follow our video series on cloud comparisons.


Interested in having your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.