
Azure vs AWS vs Oracle Cloud Infrastructure (OCI): Accounts, Tagging and Organization - Part 1

· 7 min read
Cloud & AI Engineering
Arina Technologies

As businesses increasingly rely on cloud platforms, understanding how to manage accounts, tags, and resources efficiently is critical for operational success. This blog explores how three major cloud providers (Azure, AWS, and OCI) handle account management, tagging, and resource organization.


Introduction

Choosing a cloud platform often requires a detailed understanding of its account structure, tagging capabilities, and resource organization. This guide will:



  1. Compare account management across platforms.
  2. Dive into resource grouping and tagging.
  3. Highlight key differences and use cases.

| Services | Amazon Web Services | Azure | Oracle Cloud Infrastructure | Comments |
| --- | --- | --- | --- | --- |
| Object Storage | Amazon Simple Storage Service (S3) | Blob Storage | Object Storage | Object storage manages data as discrete units (objects) with associated metadata and unique identifiers, offering scalable and durable storage for unstructured data like documents, images, and backups. |
| Archival Storage | Amazon S3 Glacier | Blob Storage (archive access tier) | Archive Storage | Archival storage is a cost-effective solution for storing infrequently accessed or long-term data, optimized for durability and retrieval over extended periods. |
| Block Storage | Amazon Elastic Block Store (EBS) | Managed disks | Block Volumes | Block storage provides raw storage volumes divided into fixed-size blocks, offering high-performance and flexible storage, typically used for databases and virtual machines. |
| Shared File System | Amazon Elastic File System | Azure Files | File Storage | A shared file system allows multiple users or systems to access and manage the same file storage simultaneously, enabling collaborative work and data consistency across environments. |
| Bulk Data Transfer | AWS Snowball | Import/Export; Azure Data Box | Data Transfer Appliance | Bulk data transfer moves large volumes of data between storage systems or locations in a single operation, often using specialized tools or services to ensure efficiency and reliability. |
| Hybrid Data Migration | AWS Storage Gateway | StorSimple | OCIFS (Linux) | Hybrid data migration transfers data between on-premises systems and cloud environments, leveraging both local and cloud-based resources for a seamless, integrated transition. |

Account Management

Cloud platforms organize user access and control through accounts or subscriptions. Here's how the concept varies across the three providers:


AWS:
  1. Accounts serve as isolated environments that provide credentials and settings.
  2. Managed through AWS Organizations, allowing centralized billing and policy control.

Azure:
  1. Uses Subscriptions for resource management, analogous to AWS accounts.
  2. Supports Management Groups for hierarchical organization, enabling policy application at both parent and child levels.

OCI:
  1. Employs Tenancies, acting as the root container for resources.
  2. Supports Compartments, offering logical grouping of resources within a tenancy.

Resource Organization

Efficient resource organization ensures streamlined operations and better control over costs and security.


AWS:
  1. Resources are grouped into Resource Groups.
  2. Tags can be applied to EC2 instances, RDS databases, and more, allowing logical groupings based on attributes like environment or application type.

Azure:
  1. Resource Groups organize assets by project or application.
  2. Tags provide additional metadata for billing and tracking.

OCI:
  1. Introduced the Compartment concept, similar to resource groups in AWS/Azure.
  2. Compartments are logical containers that allow tagging for organization and access control.

Tagging Resources

Tags enable adding metadata to cloud resources for better tracking and reporting.


AWS:
  1. Tags are applied directly to resources like VMs, databases, and S3 buckets.
  2. Example: Grouping EC2 instances by environment using tags such as "Environment: Production."

Azure:
  1. Tags can be added during or after resource creation.
  2. Commonly used for cost management and reporting, e.g., tagging VMs with "Department: Finance."

OCI:
  1. Tags are part of resource creation in compartments.
  2. Include attributes like region, security, and virtual cloud network (VCN) settings.
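Across all three providers, the value of tags depends on consistency. As a sketch of what a policy check can look like in practice, the snippet below validates a proposed tag set against a simple in-house policy; the required keys, allowed values, and the `validate_tags` helper are illustrative, not part of any provider SDK:

```python
# Illustrative tag-policy check; the policy values are examples only.
REQUIRED_KEYS = {"Environment", "Department"}
ALLOWED_ENVIRONMENTS = {"Dev", "Prod"}

def validate_tags(tags: dict) -> list:
    """Return a list of policy violations for a proposed tag set."""
    problems = []
    # Flag any required keys that are absent.
    for key in sorted(REQUIRED_KEYS - tags.keys()):
        problems.append(f"missing required tag: {key}")
    # Enforce a controlled vocabulary for the Environment tag.
    env = tags.get("Environment")
    if env is not None and env not in ALLOWED_ENVIRONMENTS:
        problems.append(f"invalid Environment value: {env}")
    return problems

# A compliant tag set produces no violations.
print(validate_tags({"Environment": "Prod", "Department": "Finance"}))
```

The same check could run in a CI pipeline before any template is deployed, regardless of which cloud the resource lands in.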

Multi-Account/Subscription Management

Handling multiple accounts is a challenge for large organizations.


AWS:
  1. AWS Organizations allow managing multiple accounts under a single parent account.
  2. Supports policy application through Service Control Policies (SCPs).
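SCPs are plain JSON policy documents. For illustration only, a minimal guardrail that prevents member accounts from leaving the organization might look like the following (an example, not a recommendation for your environment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeavingOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```

Attached at the root or at an organizational unit, this denies the action for every account beneath that point.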

Azure:
  1. Management Groups facilitate organizing multiple subscriptions.
  2. Policies can be applied at root or group levels.

OCI:
  1. Offers central management of tenancies and compartments.
  2. Policies and billing can be aligned across compartments.

Best Practices

  1. Use Tags Effectively:
    1. Tags are essential for billing and operational tracking.
    2. Create a consistent tagging policy (e.g., Environment: Dev/Prod).

  2. Centralized Account Management:
    1. Use AWS Organizations, Azure Management Groups, or OCI compartments for streamlined oversight.

  3. Leverage Resource Groups:
    1. Group related resources to simplify access control and cost tracking.

  4. Apply Security Best Practices:
    1. Regularly review IAM permissions and service control policies.

Conclusion

While AWS, Azure, and OCI share similar foundational concepts for account management, resource grouping, and tagging, each platform offers unique features tailored to specific use cases.


  1. AWS is ideal for scalability and detailed control.
  2. Azure simplifies management with unified billing and hierarchical structures.
  3. OCI, with its focus on Oracle database integration, suits enterprise-grade organizations.

Call to Action

Choosing the right platform depends on your organization's needs. Subscribe to our newsletter for insights on cloud computing, tips, and the latest trends in technology, or follow our video series on cloud comparisons.


Interested in having your organization set up on the cloud? If so, please contact us and we'll be more than glad to help you embark on your cloud journey.

AWS CloudFormation Best Practices: Create Infrastructure with VPC, KMS, IAM

· 7 min read
Cloud & AI Engineering
Arina Technologies

In today's fast-paced tech world, automating infrastructure setup is key to maximizing efficiency and reducing human error. One of the most reliable tools for this is AWS CloudFormation, which allows users to define their cloud resources and manage them as code. While AWS provides a Console for managing CloudFormation, the AWS Command Line Interface (CLI) is a powerful alternative that offers speed, control, and flexibility. In this blog, we'll walk you through setting up CloudFormation using AWS CLI, covering essential components like VPCs, KMS keys, and IAM roles.


1. Introduction to AWS CloudFormation


Before diving into technical details, it's important to understand what AWS CloudFormation is and why it's so beneficial.


What is AWS CloudFormation?


AWS CloudFormation is an Infrastructure-as-Code (IaC) service provided by AWS that allows you to model, provision, and manage AWS and third-party resources. You define your resources using template files (JSON or YAML) and deploy them via AWS CloudFormation, which takes care of the provisioning and configuration.


CloudFormation manages the entire lifecycle of your resources, from creation to deletion, allowing for automation and consistent environments.



Benefits of Using CloudFormation


  1. Automation: CloudFormation automates the entire infrastructure setup, from VPC creation to IAM role configuration, reducing manual work and errors.

  2. Version Control: Treat your infrastructure like code. With CloudFormation, you can manage your infrastructure in repositories like Git, making it easy to version, track, and rollback changes.

  3. Consistency: CloudFormation ensures that the same template can be used to create identical environments, such as development, staging, and production.

  4. Cost Efficiency: With CloudFormation, resources can be automatically deleted when no longer needed, preventing unnecessary costs from unused resources.


2. Why Use AWS CLI Over the Console?


AWS CLI vs Console: Which One is Better for You?


The AWS Management Console offers an intuitive, visual interface for managing AWS resources, but it's not always the most efficient way to manage infrastructure, especially when it grows complex. Here's how AWS CLI compares:

| Feature | AWS Console | AWS CLI |
| --- | --- | --- |
| Ease of use | Easy, intuitive UI | Requires knowledge of CLI commands |
| Speed | Slower, due to manual clicks | Faster for repetitive tasks |
| Automation | Limited | Full automation via scripting |
| Error handling | Manual rollback | Automated error handling |
| Scalability | Hard to manage large infra | Ideal for large, complex setups |

Advantages of Using AWS CLI


  1. Automation: CLI commands can be scripted for automation, allowing you to run tasks without manually navigating through the Console.
  2. Faster Setup: CLI allows you to automate stack creation, updates, and deletion, significantly speeding up the setup process.
  3. Better Error Handling: You can incrementally update stacks and fix errors on the go with AWS CLI, making it easier to debug and manage resources.

3. Prerequisites


Before we start building with CloudFormation, let’s go over the prerequisites.


Setting Up AWS CLI


AWS CLI is a powerful tool that allows you to manage AWS services from the command line. To get started:


  1. Install AWS CLI: Download and install the AWS CLI for your operating system by following the official AWS installation documentation.

  2. Verify Installation: After installation, verify that the AWS CLI is installed by typing the following command in your terminal:

    aws --version

    If successfully installed, the version information will be displayed.


Configuring AWS Profiles


Before using AWS CLI to interact with your AWS account, you'll need to configure a profile:


aws configure

You'll be prompted to provide:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region name (e.g., us-west-2)
  • Default output format (choose JSON)

This configuration will allow the CLI to authenticate and interact with your AWS account.


4. Step-by-Step Guide to AWS CloudFormation with AWS CLI


Now that your CLI is set up, let's look at how to deploy AWS CloudFormation stacks with it.


Setting Up Your First CloudFormation Stack


We will start with a simple example of how to create a CloudFormation stack. Suppose you want to create a Virtual Private Cloud (VPC).


  1. Create a YAML Template: Save the following template in a file named vpc.yaml:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      Tags:
        - Key: Name
          Value: MyVPC

  2. Deploy the Stack: To create the VPC, run the following command:

aws cloudformation create-stack --stack-name my-vpc-stack --template-body file://vpc.yaml --capabilities CAPABILITY_NAMED_IAM

This command will instruct CloudFormation to spin up a VPC using the specified template.


  3. Check the Stack Status: To verify the status of your stack creation, use:

aws cloudformation describe-stacks --stack-name my-vpc-stack
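The describe-stacks call returns JSON, which scripts typically reduce to just the status field. Below is a minimal sketch of that extraction, using an abridged, made-up payload in place of a live API call:

```python
import json

# Abridged, hypothetical describe-stacks payload;
# a real response carries many more fields per stack.
response = json.loads("""
{
  "Stacks": [
    {"StackName": "my-vpc-stack", "StackStatus": "CREATE_COMPLETE"}
  ]
}
""")

# Pull out the status of the first (and here, only) stack.
status = response["Stacks"][0]["StackStatus"]
print(status)
```

With the CLI alone, the equivalent extraction is `aws cloudformation describe-stacks --stack-name my-vpc-stack --query 'Stacks[0].StackStatus' --output text`.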

Deploying a Virtual Private Cloud (VPC)


A VPC is essential for defining your network infrastructure in AWS. Here’s how you can add more resources to your VPC, such as an Internet Gateway:


Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      Tags:
        - Key: Name
          Value: MyVPC
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC
      InternetGatewayId: !Ref InternetGateway

Deploy this using the same create-stack command.


Setting Up Security with KMS (Key Management Service)


Next, we will add encryption keys for securing data:


  1. KMS Template:

Resources:
  MyKMSKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Key for encrypting data
      Enabled: true

  2. Deploy KMS:

aws cloudformation create-stack --stack-name my-kms-stack --template-body file://kms.yaml --capabilities CAPABILITY_NAMED_IAM

Managing Access with IAM Roles


IAM Roles allow secure communication between AWS services. Here’s an example of how to create an IAM role:


Resources:
  MyIAMRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Path: /

Use the same create-stack command to deploy this.
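Since this role trusts the EC2 service, note that instances attach roles through an instance profile. A minimal addition under the same Resources section might look like this (the resource name is our own):

```yaml
  MyInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref MyIAMRole
```

The instance profile, not the role itself, is what you reference from an EC2 instance definition.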


5. Best Practices for AWS CloudFormation


Use Nested Stacks


Avoid large, monolithic stacks. Break them down into smaller, nested stacks for better manageability.

Resources:
  ParentStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/path/to/nested-stack.yaml

Parameterization


Use parameters to make your stacks reusable across different environments.


Parameters:
  InstanceType:
    Type: String
    Default: t2.micro
    Description: EC2 Instance Type
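A parameter declared this way can be referenced with !Ref inside resources and overridden at deploy time. For example (the AMI ID below is a placeholder, not a real image):

```yaml
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-0123456789abcdef0  # placeholder AMI ID
```

At stack creation, pass `--parameters ParameterKey=InstanceType,ParameterValue=t3.small` to override the default for a given environment.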

Exporting and Referencing Outputs


Export important resource values for use in other stacks:


Outputs:
  VPCId:
    Value: !Ref MyVPC
    Export:
      Name: VPCId
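A consuming stack can then pull the exported value in with Fn::ImportValue; for example (the subnet resource shown is hypothetical):

```yaml
Resources:
  MySubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !ImportValue VPCId
      CidrBlock: 10.0.1.0/24
```

The importing stack must live in the same account and region as the exporting one, and an export cannot be deleted while another stack imports it.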

Incremental Stack Updates


Always update your stacks incrementally to avoid failures.

aws cloudformation update-stack --stack-name my-stack --template-body file://updated-template.yaml

6. Advanced CloudFormation Features


Handling Dependencies and Stack Failures


Use the DependsOn attribute to specify dependencies between resources to avoid issues with resource creation order.
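For instance, building on the VPC template earlier, an Elastic IP can be made to wait until the Internet Gateway is attached, a common ordering requirement:

```yaml
Resources:
  MyEIP:
    Type: AWS::EC2::EIP
    DependsOn: VPCGatewayAttachment
    Properties:
      Domain: vpc
```

Without the DependsOn, CloudFormation may try to allocate the address before the gateway attachment completes and fail the stack.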


Custom Resource Creation


For advanced use cases, you can create custom resources by using Lambda functions or CLI.


7. Conclusion and Next Steps


By using AWS CloudFormation with AWS CLI, you can automate your infrastructure, reduce errors, and scale your environment effortlessly. Continue learning by experimenting with more complex templates, incorporating advanced features like stack sets, and automating further with scripts.

Code shown in the video can be accessed from https://github.com/arinatechnologies/cloudformation

Querying Compressed S3 Logs Using AWS Athena: A Step-by-Step Guide

· 5 min read
Cloud & AI Engineering
Arina Technologies

Managing logs in cloud environments can often come with challenges, particularly when using solutions like Elasticsearch, which require managing clusters, shards, and other components that increase both cost and complexity. If you're looking for an alternative that provides the same querying capabilities without the added overhead, AWS Athena is a powerful tool to consider. In this blog post, we'll walk you through how to leverage Athena with Amazon S3 to efficiently query logs from CloudWatch without breaking the bank. Refer to How to Export AWS CloudWatch Logs to OpenSearch (Elasticsearch) for details on exporting CloudWatch logs to OpenSearch/Elasticsearch.



Challenges with OpenSearch/Elasticsearch

When managing logs through OpenSearch/Elasticsearch, several issues can arise:

  • Cluster Management: You need to spin up a cluster to query logs, which can get complicated based on the number of log groups.
  • Cost: You incur costs for compute resources (such as EC2 instances), storage (which can scale to gigabytes), and managing shards and replication.
  • Policies: There is a need to create cost-optimization policies, manage shards, and regularly review the cluster.

These factors make OpenSearch/Elasticsearch a more costly and complex solution, especially for organizations with large-scale logging needs. So, what's the alternative?


AWS Athena to the Rescue

Athena is a serverless query service that allows you to analyze structured and semi-structured data directly stored in Amazon S3 using standard SQL. Here's why Athena is an excellent choice:

  • Serverless: There's no need to manage servers or clusters.
  • Cost-Effective: You pay only for the queries you run, rather than for maintaining a persistent cluster.
  • No Data Transfer Needed: You can query data directly from files stored in S3 without having to move the data into a database.
  • Supports Multiple Formats: Athena supports compressed data formats, such as Gzip, which both reduce storage costs and improve query performance.

Step-by-Step Guide to Querying Logs with Athena


Here is a quick walkthrough of how to query CloudWatch logs using Athena:


  1. Export Logs from CloudWatch to S3:

  • Create an S3 bucket for exporting the CloudWatch logs. You can create a new bucket or use an existing one, and configure CloudWatch to store logs there.
  • Ensure that CloudWatch has appropriate permissions to write to the S3 bucket.
  • On the log group, select Actions → Export data to Amazon S3.
  2. Create a Database in Athena:
    • Navigate to the Athena console and create a new database. This database will store your logs in a structured format, making it easy to query them using SQL.

  • For this setup, specify the compression format (e.g., Gzip) and the S3 location where your logs are stored.

  3. Set Up Queries:
    • Once the database is ready, you can create tables that reflect the structure of your logs. This will enable you to query log data stored in various formats (structured, semi-structured, or unstructured) without any transformations.
    • Write your SQL queries to extract the relevant data, and run them directly in the Athena query editor.

Sample query:
CREATE EXTERNAL TABLE IF NOT EXISTS log_database.application_logs (
  timestamp_iso STRING,
  log_date STRING,
  log_time STRING,
  message STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "^([0-9\\-T:.Z]+)\\s([0-9\\-]+)\\s([0-9:.]+)\\s.*\\s-\\s(.*)$"
)
STORED AS TEXTFILE
LOCATION 's3://yt-athena-logs/cloudwatch/7f344873-86a8-4d17-b242-7302242a12d2/'
TBLPROPERTIES ('skip.header.line.count'='0', 'compressionType'='GZIP')
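Before creating the table, you can sanity-check the SerDe pattern locally; for this pattern, Python's re syntax is close enough to the Java regex RegexSerDe uses. The sample log line below is hypothetical, so adjust it to match your actual export format:

```python
import re

# Same pattern as in "input.regex" above, written as a Python raw string
# (the SQL string doubles the backslashes; a raw string does not need to).
pattern = r"^([0-9\-T:.Z]+)\s([0-9\-]+)\s([0-9:.]+)\s.*\s-\s(.*)$"

# Hypothetical sample line from a CloudWatch export.
line = "2024-05-01T12:00:00.000Z 2024-05-01 12:00:00.000 INFO app - Started service"

match = re.match(pattern, line)
if match:
    # The four capture groups map to the four table columns.
    timestamp_iso, log_date, log_time, message = match.groups()
    print(message)
```

If the pattern fails to match your real lines, fixing it here is much faster than iterating on CREATE EXTERNAL TABLE statements in the console.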

  4. Analyze the Data:
    • After running your SQL queries, you’ll be able to see the output in an organized format, similar to what you’d expect from a traditional database system.
    • The querying process may take a few seconds depending on the volume of data, but it's far more efficient than manually decrypting or moving files into a traditional database.


Why Choose Athena Over OpenSearch/Elasticsearch?

  • Lower Costs: With Athena, there is no need for persistent EC2 instances or complex cluster management. You pay only for the queries you run, making it ideal for sporadic log analysis.
  • Scalability: Since Athena queries logs directly from S3, it leverages the scalability of Amazon S3 to handle vast amounts of data.
  • Ease of Use: Writing SQL queries for log analysis in Athena is much simpler and more straightforward than managing shards and replicas in OpenSearch/Elasticsearch.

Refer to How to Export AWS CloudWatch Logs to OpenSearch (Elasticsearch): Step-by-Step Tutorial to learn how to export CloudWatch logs to OpenSearch.

Conclusion


Using AWS Athena to query logs stored in S3 provides a simpler, more cost-effective solution compared to OpenSearch/Elasticsearch. Whether you are dealing with structured or semi-structured data, Athena's powerful querying capabilities make it an excellent choice for those looking to reduce operational overhead and costs.



Ready to take your cloud infrastructure to the next level? Please reach out to us via our Contact Us page.


Understanding AWS Account Migration: A Step-by-Step Guide

· 3 min read
Cloud & AI Engineering
Arina Technologies

Hello everyone! In today's blog, we'll explore how to invite an AWS management account that is already part of another organization into a new organization. This process can be a bit tricky, but we'll walk you through it step by step. Let's get started!


Why Migration Can Be Complicated

Inviting a management account that is part of an existing AWS organization into a new organization isn't straightforward. This is mainly because the management account is deeply integrated within its current organization. The process involves several steps to ensure the transition is smooth and does not disrupt existing resources.



Step-by-Step Process


1. Understanding the Current Setup




You have an AWS account (Account A) that you wish to invite into a new organization. However, this account is a management account and is already part of another organization.


2. Sending the Invitation




Initially, you might think of sending an invitation to this account directly. However, if the account is already a management account within another organization, it will not receive the invitation due to existing restrictions.


3. Removing the Management Account from Its Current Organization




To proceed, you need to remove the management account from its current organization. Here's how you can do it:

  • Access the Management Account: Log in to the management account that you want to migrate.

  • Delete the Organization: Navigate to the settings section and opt to delete the organization. This action will not impact existing resources associated with the account. For instance, EC2 instances, security groups, and elastic IPs will remain intact.

    Ensure that all critical resources are noted and checked to confirm they will remain unaffected post-deletion.


4. Deleting the Organization




Type the organization ID when prompted and proceed to delete the organization. This step will disband the organization but will not affect the account's resources. This deletion is necessary to migrate the management account to another organization.


5. Accepting the Invitation




Once the organization is deleted:

  • Check Invitations: Go back to the account and check for the pending invitations.
  • Accept the Invitation: You should now see the invitation from the new organization. Accept this invitation to complete the migration.

Important Considerations

  • Resource Continuity: Deleting the organization will not affect existing resources. It is crucial to verify this by checking resources like EC2 instances, security groups, etc., before and after the deletion.
  • Management Account Restrictions: Management accounts have specific restrictions that require these steps to migrate them properly.

Ready to take your cloud infrastructure to the next level? Please reach out to us via our Contact Us page.

Conclusion

Migrating an AWS management account to a new organization involves a detailed process of deleting the existing organization and accepting a new invitation. While this may seem complex, following these steps ensures a smooth transition without impacting your AWS resources.

We hope this guide was helpful. Don't forget to like, subscribe, and share our channel for more insightful content on AWS management and other cloud solutions.

Desirable Techniques: Understanding Modern Messaging with ActiveMQ and ActiveMQ Artemis

· 5 min read
Cloud & AI Engineering
Arina Technologies

ActiveMQ vs Artemis

What is Apache ActiveMQ?

Apache ActiveMQ is one of the oldest and most trusted open-source message brokers. Written in Java, it supports multiple protocols and client APIs and offers message persistence, delivery guarantees, and advanced routing. It’s common in enterprise environments where robustness and scalability are critical.

Introduction to Artemis

ActiveMQ Artemis is the modern successor to “ActiveMQ Classic,” originating from the HornetQ codebase. It focuses on performance and scalability with simplified clustering, advanced replication, and a streamlined configuration model.

Key Features & Comparison

For consulting or automation help, read more about our services:
Read about Arina Consulting

Both ActiveMQ and Artemis are robust, but they diverge in design and capabilities:

  1. Performance & Storage:
    ActiveMQ uses KahaDB (journal + index). Artemis uses an append-only journal and no separate index, which improves performance.

  2. Protocol Support:
    ActiveMQ supports MQTT, STOMP, OpenWire, etc. Artemis broadens protocol coverage and simplifies configuration; WebSockets are supported out of the box.

  3. Clustering & HA:
    ActiveMQ provides basic clustering. Artemis offers easier setup, automatic failover, and live-backup modes for stronger HA.

  4. Management & Configuration:
    Artemis modernizes configuration (e.g., broker.xml) and ships with better defaults and management ergonomics.
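To illustrate Artemis's simplified, multi-protocol configuration, an etc/broker.xml can expose several protocols through a single acceptor on one port; the port and protocol list below are an example, not a prescribed configuration:

```xml
<acceptors>
   <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=CORE,AMQP,STOMP,MQTT,OPENWIRE</acceptor>
</acceptors>
```

In ActiveMQ Classic, by contrast, each protocol typically gets its own transport connector and port.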

| Criteria | ActiveMQ Classic | Artemis |
| --- | --- | --- |
| IO connectivity layer | TCP (sync) and NIO (non-blocking) | Netty-based NIO; one acceptor can serve multiple protocols; WebSockets out of the box |
| Message store | KahaDB (journal + index) | Append-only journal (no index) |
| Paging under memory pressure | Cursors cache messages; falls back to store; requires journal index | Journal resides in memory; producer-side paging to sequential files; no index needed |
| Message addressing & routing | Non-OpenWire protocols translated internally to OpenWire | Anycast for point-to-point across protocols |
| Broker instance model | Optional separation of install/config | Explicit broker instance creation |
| Main config files | conf/activemq.xml, JAAS in plugins | etc/broker.xml, etc/artemis.profile |
| Logging | etc/logging.properties | etc/logging.properties |
| Foreground start | bin/activemq console | bin/artemis run |
| Service start | bin/activemq start | bin/artemis-service start |
| JMS support | JMS 1.1 | JMS 2.0 |
| Durable subscribers | Per-destination durable subs may duplicate across nodes | Modeled as queues, which avoids duplication |
| Authentication | Groups via JAAS (plugins in conf/activemq.xml) | Roles |
| Authorization | conf/activemq.xml | etc/broker.xml |
| Policies | Destination policies (e.g., write) | Fine-grained queue policies (e.g., send) |
| Project status | Mature, widely adopted | Active, successor to Classic |
| Performance | Good, with some limits | High performance |
| Persistence | KahaDB / JDBC | Fast journal |
| Architecture | Traditional broker; can bottleneck under very high throughput | Asynchronous design aimed at high throughput |
| High availability | Master–slave | Live–backup, shared-nothing failover |
| Clustering | Network of brokers | Advanced clustering with automatic client failover |
| Management | JMX; more manual | Improved JMX, web console, protocol-level management |
| Filtering | Basic selectors | Advanced filtering; plugin support |
| Security | AuthN / AuthZ supported | Enhanced features incl. SSL/TLS, JAAS |
| Federation | Custom config | Native support for geo-distributed clusters |

Practical Applications

  • Choose ActiveMQ Classic if you need a proven, conservative broker compatible with legacy systems.
  • Choose Artemis for modern workloads needing higher throughput, simpler HA, cleaner config, and broader protocol handling.

Ready to take your architecture to the next level?
Contact us

Conclusion

Both brokers deliver reliable messaging. Your decision should align with current requirements and future scale. If you’re starting fresh or planning to scale aggressively, Artemis is usually the better bet.


🔚 Call to Action

Choosing the right platform depends on your organization’s needs.
Subscribe to our newsletter for cloud tips and trends, or follow our video series on cloud comparisons.
Interested in a guided setup? Contact us—we’re happy to help.