The AWSControlTowerExecution role is required in the target account so that the Control Tower management account can perform various activities. In the case above, the item that failed was a service-linked role creation. I have reviewed AWSControlTowerExecution in my test environment for the default logging account: it has the AdministratorAccess managed policy attached and the following trust relationship:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::< Control Tower management account ID >:root"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }

In one of the accounts that enrolled correctly, please review the AWSControlTowerExecution role and compare it with the role in the account that failed to enroll, to confirm they are aligned and have the same permissions.

- More Information -

From the Control Tower documentation - please see resources [1] and [2]:

Before you can enroll an existing AWS account into AWS Control Tower, you must give AWS Control Tower permission to manage, or govern, the account. AWS Control Tower requires permission to establish trusted access between AWS CloudFormation and AWS Organizations on your behalf. With this trusted access, the AWSControlTowerExecution role conducts the activities required to manage each account.

These prerequisites are required before you can enroll an existing account in AWS Control Tower:
1. The AWSControlTowerExecution role must be present in the account you're enrolling.
2. We recommend that the account should not have an AWS Config configuration recorder or delivery channel. These may be deleted or modified through the AWS CLI before you can enroll the account. If you do have a Config recorder with data which cannot be deleted, you can …
3. The account that you wish to enroll must exist in the same AWS Organizations organization as the AWS Control Tower management account. The account can be enrolled only into the same organization as the AWS Control Tower management account, in an OU that is already registered with AWS Control Tower.
4. Before you can enroll an existing account in AWS Control Tower, the account must have the following role, permissions, and trust relationship in place; otherwise, enrollment will fail.
- Role Name: AWSControlTowerExecution
- Role Permission: AdministratorAccess (AWS managed policy)
- Role Trust Relationship: as above
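To compare the roles, you can fetch each account's trust policy with `aws iam get-role --role-name AWSControlTowerExecution` and then check it programmatically. Below is a minimal sketch of such a check on an already-fetched policy document; the management account ID is a made-up placeholder, and the function name is mine, not from any AWS SDK:

```python
# Hypothetical management account ID for illustration.
MGMT_ACCOUNT_ID = "111122223333"

def trust_policy_allows_management_account(policy: dict, mgmt_account_id: str) -> bool:
    """Return True if any statement allows sts:AssumeRole from the management account root."""
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal", {}).get("AWS", "")
        actions = stmt.get("Action")
        if isinstance(actions, str):
            actions = [actions]
        if (stmt.get("Effect") == "Allow"
                and principal == f"arn:aws:iam::{mgmt_account_id}:root"
                and "sts:AssumeRole" in (actions or [])):
            return True
    return False

# The trust policy shape expected by Control Tower, as quoted above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{MGMT_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(trust_policy_allows_management_account(policy, MGMT_ACCOUNT_ID))  # True
```

Running the same check against the policy from the failed account quickly shows whether the trust relationship is the mismatch.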
Principal Architect (Data & AI). Over 22 years of experience in IT. Global delivery models.
Sunday, December 4, 2022
Snowflake - Architecture
Snowflake
It is an analytic data warehouse provided as Software-as-a-Service (SaaS). There is no hardware (virtual or physical) to select, install, or configure, and no software to install; all ongoing maintenance and tuning is handled by Snowflake.
Database Storage
When data is loaded into Snowflake, it is reorganized into micro-partitions in an internal optimized, compressed, columnar format and stored in cloud storage. To the user, the storage layer behaves as a shared-disk model, which simplifies data management: users do not have to worry about data distribution across multiple nodes as they would in a shared-nothing model. Snowflake manages all aspects of how this data is stored; the organization, file size, structure, compression, metadata, statistics, and other aspects of data storage are handled by Snowflake.
Query Processing
Query execution is performed in the processing (compute) layer. Snowflake processes queries using "virtual warehouses" and separates the query processing layer from disk storage. Each virtual warehouse is a massively parallel processing (MPP) compute cluster composed of multiple compute nodes allocated by Snowflake from a cloud provider. Each virtual warehouse is an independent compute cluster that does not share compute resources with other virtual warehouses; as a result, each virtual warehouse has no impact on the performance of the others.
Cloud Services
The cloud services layer is a collection of services that coordinate activities across Snowflake. These services tie together all of the different components of Snowflake in order to process user requests, from login to query dispatch. The cloud services layer also runs on compute instances provisioned by Snowflake from the cloud provider.
Among the services in this layer:
Authentication
Infrastructure management
Metadata management
Query parsing and optimization
Access control
Row-based vs Columnar-based storage organization
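The row-based vs columnar contrast can be illustrated with a toy example (the table and all values are invented for the illustration): row storage keeps each record together, while columnar storage keeps each column's values together, which is what lets an analytic engine such as Snowflake scan and compress one column at a time.

```python
# The same three-row table in both layouts.
rows = [  # row-based: one tuple per record
    ("alice", 30, "NY"),
    ("bob", 25, "CA"),
    ("carol", 35, "TX"),
]

columns = {  # columnar: one list per column
    "name": ["alice", "bob", "carol"],
    "age": [30, 25, 35],
    "city": ["NY", "CA", "TX"],
}

# An analytic query like AVG(age) touches only one column in the columnar layout:
avg_age = sum(columns["age"]) / len(columns["age"])
print(avg_age)  # 30.0

# In the row layout, the same query must visit every record:
avg_age_rows = sum(r[1] for r in rows) / len(rows)
print(avg_age_rows)  # 30.0
```

Both layouts hold identical data; the difference is purely which values sit next to each other on disk, and therefore which queries are cheap.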
Snowflake Architecture and Components
Types of Data Warehouse Architecture
• Single-Tier Architecture: this type aims to minimize the amount of data stored by removing redundancy.
• Two-Tier Architecture: this type of architecture separates the actual data sources from the warehouse database, which enables the data warehouse to expand and support multiple end users.
• Three-Tier Architecture: this type of architecture has three layers: the bottom tier contains the data warehouse server databases; the middle tier is an Online Analytical Processing (OLAP) server that provides an abstracted view of the data; and the top tier is the front-end client layer with the tools and APIs used to extract and analyze data.
The 4 components of a Data Warehouse:
1. Data Warehouse Database
The central database forms an integral part of the data warehouse; it stores and provides access to corporate data. Cloud-based database services such as Amazon Redshift and Azure SQL fall into this category.
2. Extraction, Transformation, and Load (ETL) Tools
All activities associated with the extraction, transformation, and loading (ETL) of data into the warehouse fall under this category.
3. Metadata
Metadata provides the framework and definitions of the data, allowing for the creation, storage, management, and use of the data.
4. Data Warehouse Access Tools
These tools include data reporting tools, data query tools, application development tools, data mining tools, and OLAP tools.
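As a concrete, if toy, illustration of the ETL step described above (the CSV content, table name, and use of SQLite as a stand-in warehouse are all invented for the example):

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (here an in-memory string).
raw = io.StringIO("name,amount\nalice,10\nbob,oops\ncarol,30\n")
records = list(csv.DictReader(raw))

# Transform: cast amounts to integers, dropping rows that fail validation.
clean = []
for r in records:
    try:
        clean.append((r["name"], int(r["amount"])))
    except ValueError:
        continue  # skip malformed rows such as "oops"

# Load: write the cleaned rows into a warehouse table (SQLite stands in here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (name TEXT, amount INTEGER)")
db.executemany("INSERT INTO sales VALUES (?, ?)", clean)
total = db.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 40
```

Real ETL tools add scheduling, incremental loads, and error reporting on top of this same extract-transform-load shape.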
Snowflake Data Warehouse Features
1. Data Security and Protection: the Snowflake data warehouse provides advanced authentication options, including Multi-Factor Authentication (MFA), federated authentication and Single Sign-On (SSO), and OAuth. Communication between client and server is secured by TLS.
2. Standard and Extended SQL Support: Snowflake supports the common DDL and DML commands, as well as advanced DML, transactions, lateral views, stored procedures, etc.
3. Connectivity: Snowflake supports a comprehensive set of client connectors and drivers, such as the Python connector, Spark connector, Node.js driver, .NET driver, etc.
4. Data Sharing: you can securely share data with other Snowflake accounts.
Hevo Data Features
1. Simple: Hevo has a simple and intuitive user interface.
2. Fault-Tolerant: Hevo offers a fault-tolerant architecture. It can automatically detect anomalies and alert you immediately.
3. Real-Time: Hevo has a real-time streaming architecture, which ensures your data is always ready for analysis.
4. Schema Mapping: Hevo automatically detects the schema of your incoming data and maps it to your destination schema.
5. Data Transformation: provides a simple visual interface to complete, edit, and enrich the data you want to transfer.
6. Live Support: the Hevo team is available around the clock to provide specialized support via chat, email, and phone.
Friday, August 19, 2022
AWS Control Tower
Your landing zone is now available.
AWS Control Tower has set up the following:
- 2 organizational units, one for your shared accounts and one for accounts that will be provisioned by your users.
- 3 shared accounts, which are the management account and isolated accounts for log archive and security audit.
- A native cloud directory with preconfigured groups and single sign-on access.
- 20 preventive guardrails to enforce policies and 3 detective guardrails to detect configuration violations.
Enroll existing accounts in AWS Control Tower
You can enroll existing accounts from your AWS Organizations organization in AWS Control Tower and manage them in the same way that you manage accounts created with Account Factory. Some additional work is required for enrollment.
Wednesday, May 11, 2022
Terraform execution
1) Create ROOT account - done (one time)
2) Create 4 sub accounts and set up passwords (one time)
   1 for management
   1 for staging
   1 for production
3) Log in as ROOT for each sub account (one time)
   set up an Administrator user with AWSAdminRole
   set up an access key ID and secret
4) Set up MFA for each AWS sub account - in progress (one time)
5) Set up AWS config and credentials for each sub account and verify each profile with a sample AWS CLI command like `aws s3 ls`
   Set up profiles for each sub account, like verumex-staging, verumex-staging-management, verumex-staging-production
   Run STS to generate a token for each sub account and update your credentials for each profile
   Overall you will see the config has 3 profiles (one time):
   [default]
   [verumex-management]
   [verumex-staging]
credentials file
---------------------
[default]

# from the AWS sub account (one time)
[verumex-management]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyy

# from the AWS sub account (one time)
[verumex-staging]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyy

# the three keys below are generated by STS AssumeRole [every time the token expires]
[verumex-management-role]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyy
aws_session_token = xxxxxxxxxxxxxxxxxxxx  # long alphanumeric string

# the three keys below are generated by STS AssumeRole - the token is valid for 8 or 12 hours (after that we need to re-generate it)
[verumex-staging-role]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyy
aws_session_token = xxxxxxxxxxxxxxxxxxxx  # long alphanumeric string
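Instead of pasting fresh STS credentials into the credentials file every time they expire, the AWS CLI can assume the role for you: the `~/.aws/config` file supports `role_arn` and `source_profile`, and the CLI fetches and caches temporary credentials automatically. A sketch, where the account ID and role name are placeholders, not taken from the setup above:

```ini
# ~/.aws/config -- hypothetical account ID and role name
[profile verumex-staging-role]
role_arn = arn:aws:iam::123456789012:role/Administrator
source_profile = verumex-staging
region = us-east-1
```

With this in place, `aws s3 ls --profile verumex-staging-role` works without any manual token refresh.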
6) Create one S3 bucket in each sub account [one time]
7) Update the S3 bucket, profile, and region based on where you created the S3 bucket, in each folder where there is terraform code:
   data.tf
   -- bucket name
   -- profile
   -- region (we can use the same region that we have in the original code)
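The bucket/profile/region values above are the same ones a Terraform S3 backend block takes; a sketch with hypothetical bucket and region names (substitute your own per sub account):

```hcl
terraform {
  backend "s3" {
    bucket  = "verumex-staging-tfstate"        # hypothetical bucket name
    key     = "networking/terraform.tfstate"   # one key per component folder
    region  = "us-east-1"                      # hypothetical region
    profile = "verumex-staging"
  }
}
```

Each component folder (bootstrap, networking, iam) points at its own `key` so the state files stay separate within the one bucket.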
8) Order of execution in each env / sub account:
   1) bootstrap
   2) networking
   3) iam -- now lists the dependencies on some resources and policies, like S3, EKS, etc.
9) Once you run bootstrap, it generates the terraform state JSON in the S3 bucket under each sub account.
10) Copy the terraform state into your local folders like networking/iam etc. as terraform.tf.json
11) Deprecations: map --> tomap is the latest API
    permissions used to be a list of strings --> permission is now a single string
    permissions = "[READ, WRITE]"  ==>  permission = 'READ'
    terraform warns about what is deprecated and what we should use instead
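The map --> tomap change above looks like this in HCL (the attribute and values are illustrative, not from the original code):

```hcl
# Deprecated Terraform 0.12-era function call:
# tags = map("Team", "platform", "Env", "staging")

# Current form using tomap() with an object literal:
tags = tomap({
  Team = "platform"
  Env  = "staging"
})
```

Terraform's deprecation warnings print the suggested replacement, so these are usually mechanical edits.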
Saturday, March 19, 2022
Azure - How To
Azure CLI Setup
1) Download
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli
Commands
Resource groups       az group
Virtual machines      az vm
Storage accounts      az storage account
Key vault             az keyvault
Web applications      az webapp
SQL databases         az sql server
Cosmos DB             az cosmosdb