Tuesday, September 24, 2019

What should we know about AWS Cost Management?

This is a summary with some bullet points and parts of text extracted from AWS whitepapers and from some online courses like A Cloud Guru, Linux Academy, and Udemy, put together to study concepts about Cost Management.
Cloud computing helps businesses in the following ways:
  • Reduces costs and complexity
  • Adjusts capacity on demand
  • Reduces time to market
  • Increases opportunities for innovation
  • Enhances security

When you decouple from the data center, you’ll be able to:
  • Decrease your TCO: Eliminate many of the costs related to building and maintaining a data center or colocation deployment. Pay for only the resources you consume.
  • Reduce complexity: Reduce the need to manage infrastructure, investigate licensing issues, or divert resources.
  • Adjust capacity on the fly: Add or reduce resources, depending on seasonal business needs, using infrastructure that is secure, reliable, and broadly accessible.
  • Reduce time to market: Design and develop new IT projects faster.
  • Deploy quickly, even worldwide: Deploy applications across multiple geographic areas.
  • Increase efficiencies: Use automation to reduce or eliminate IT management activities that waste time and resources.
  • Innovate more: Spin up a new server and try out an idea. Each project moves through the funnel more quickly because the cloud makes it faster (and cheaper) to deploy, test, and launch new products and services.
  • Spend your resources strategically: Switch to a DevOps model to free your IT staff from operations and maintenance that can be handled by the cloud services provider.
  • Enhance security: Spend less time conducting security reviews on infrastructure. Mature cloud providers have teams of people who focus on security, offering best practices to ensure you’re compliant, no matter what your industry.

AWS Economics
The AWS infrastructure serves more than one million active customers in over 190 countries and offers the following benefits to its users:
  1. Global operations
  2. High availability
  3. Low costs due to high volume
  4. Only pay for what you use
  5. Economies of scale
  6. Financial flexibility

Capital Expenses (CapEx)
Money spent on long-term assets like property, buildings, and equipment.

Operational Expenses (OpEx)
Money spent on the ongoing costs of running the business.
Usually considered variable expenses.

Total Cost of Ownership (TCO)
A comprehensive look at the entire cost model of a given decision or option, often including both hard costs and soft costs.

Return on Investment (ROI)
The amount an entity can expect to receive back within a certain amount of time given an investment.

TCO vs ROI

  • Many times, organizations don’t have a good handle on their full on-premises data center costs.
  • Soft costs are rarely tracked or even understood as a tangible expense.
  • The learning curve will be very different from person to person.
  • Business plans usually include many assumptions, which in turn require supporting organizations to create derivative assumptions, sometimes layers deep.

Cost Optimization Strategy

  • Appropriate Provisioning
    • Provision the resources you need and nothing more.
    • Consolidate where possible for greater density and lower complexity.
    • CloudWatch can help by monitoring utilization.
  • Right Sizing
    • Use the lowest-cost resource that still meets the technical specifications.
    • Architecting for the most consistent use of resources is best, versus spikes and valleys.
    • Loosely coupled architectures using SNS, SQS, Lambda and DynamoDB can smooth demand and create more predictability and consistency.
  • Purchase Options
    • For permanent applications or needs, Reserved Instances provide the best cost advantage.
    • Spot Instances are best for temporary horizontal scaling.
    • EC2 Fleet lets you define a target mix of On-Demand, Reserved, and Spot Instances.

  • Geographic Selection
    • AWS pricing can vary from region to region.
    • Consider potential savings by locating resources in a remote region if local access is not required.
    • Route 53 and CloudFront can be used to reduce the potential latency of a remote region.
  • Managed Services
    • Leverage managed services such as MySQL on RDS over self-managed options such as MySQL on EC2.
    • Cost savings are gained through lower complexity and less manual intervention.
    • RDS, Redshift, Fargate, and EMR are great examples of fully managed services that replace traditionally complex and difficult installations with push-button ease.
  • Optimized Data Transfer
    • Data going out of and between AWS regions can become a significant cost component.
    • Direct Connect can be a more cost-effective option given data volume and speed.

    AWS Tagging 
    Most resources can have up to 50 tags.
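
    Tags can also be applied programmatically. A minimal sketch using boto3; the instance ID and tag keys/values below are hypothetical examples, not taken from the text:

import boto3

# A minimal sketch: apply cost-allocation tags to an EC2 instance.
# The instance ID and tag values are hypothetical examples.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Environment", "Value": "DEV"},
        {"Key": "CostCenter", "Value": "marketing"},
        {"Key": "Project", "Value": "website-redesign"},
    ],
)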

    AWS Resource Groups
    • Resource Groups are groupings of AWS assets defined by tags.
    • Create custom consoles to consolidate metrics, alarms, and config details around given tags.
    • Common Resource Groupings:
      • Environments (DEV, QA, PRD)
      • Project resources
      • Collections of resources supporting a key business process
      • Resources allocated to various departments or cost centres.


    Spot and Reserved Instances

    Reserved Instances:
    • Purchase usage of EC2 instances in advance for a significant discount over On-Demand pricing.
    • Provides a capacity reservation when used in a specific AZ.
    • AWS Billing automatically applies discounted rates when you launch an instance that matches your purchased RI.
    • EC2 has three RI types: Standard, Convertible, and Scheduled.
    • Can be shared across multiple accounts within Consolidated Billing.
    • If you find you don’t need your RIs, you can try to sell them on the Reserved Instance Marketplace.



    Standard vs Convertible RIs:
    • Terms: Standard 1 year, 3 years / Convertible 1 year, 3 years
    • Average discount off On-Demand: Standard 40%-60% / Convertible 31%-54%
    • Change AZ, instance size, networking type: Standard yes, via the Modify Reserved Instances API or console / Convertible yes, via the Exchange Reserved Instances API or console
    • Change instance family, OS, tenancy, payment option: Standard no / Convertible yes
    • Benefits from price reductions: Standard no / Convertible yes
    • Sellable on the Reserved Instance Marketplace: Standard yes (sale proceeds must be deposited in a US bank account) / Convertible not yet

    RI Attributes

    • Instance Type - designates CPU, memory, and networking capability.
    • Platform - Linux, SUSE Linux, RHEL, Windows, SQL Server.
    • Tenancy - default tenancy or dedicated tenancy.
    • Availability Zone - if an AZ is selected, capacity is reserved and the discount applies to that AZ (Zonal RI). If no AZ is specified, no capacity reservation is created but the discount is applied to any instance in the family in any AZ in the region (Regional RI).

    Spot Instances:
    • Excess EC2 capacity that AWS tries to sell on a market exchange basis.
    • Customers create a Spot Request and specify the AMI, desired instance types, and other key information.
    • The customer defines the highest price they are willing to pay for the instance. If capacity is constrained and others are willing to pay more, your instance might get terminated or stopped.
    • Spot Requests can be fill-and-kill (one-time), maintain, or duration-based (see the sketch below).
    • For a one-time request, the instance is terminated and ephemeral data is lost.
    • For request-and-maintain, the instance can be configured to terminate, stop, or hibernate until the price point can be met again.
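
    A minimal sketch of a one-time ("fill and kill") Spot request with boto3; the AMI ID, instance type, and maximum price are hypothetical examples:

import boto3

# A minimal sketch of a one-time Spot request.
# The AMI ID, instance type, and max price are hypothetical.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",              # highest price you are willing to pay (USD/hour)
    InstanceCount=2,
    Type="one-time",               # could also be "persistent" (request and maintain)
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m5.large",
    },
)
for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])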

    Dedicated Instances and Hosts:

    Dedicated Instances
    • Virtualized instances on hardware just for you.
    • May share hardware with other non-dedicated instances in the same account.
    • Available as On-Demand, Reserved Instances, and Spot Instances.
    • Cost an additional $2 per hour per region.

    Dedicated Hosts
    • Physical servers dedicated to just your use.
    • You then have control over which instances are deployed on that host.
    • Available as On-Demand or with a Dedicated Host Reservation.
    • Useful if you have server-bound software licenses that use metrics like per-core, per-socket, or per-VM.
    • Each Dedicated Host can only run one EC2 instance size and type.

    AWS Budgets
    • Allows you to set pre-defined limits and notifications when nearing or exceeding the budget (see the sketch below).
    • Can be based on Cost, Usage, Reserved Instance Utilization, or Reserved Instance Coverage.
    • Useful as a method to distribute cost and usage awareness and responsibility to platform users.
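
    A minimal sketch, assuming a hypothetical account ID and e-mail address, of creating a monthly cost budget with an alert at 80% of the limit via the Budgets API:

import boto3

# A minimal sketch: a monthly cost budget of 100 USD with an email alert at 80%
# of the budgeted amount. The account ID and e-mail address are hypothetical.
budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "team@example.com"}
            ],
        }
    ],
)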

    Trusted Advisor
    • Runs a series of checks on your resources and proposes suggested improvements.
    • Can help recommend cost optimization adjustments like Reserved Instances or scaling adjustments.
    • Core checks are available to all customers.
    • Full Trusted Advisor benefits require a Business or Enterprise support plan.

    Cost Optimization

    Design Principles

    Keep these design principles in mind as we discuss best practices for cost optimization:

    • Adopt a consumption model: Pay only for the computing resources you consume, and increase or decrease usage depending on business requirements—not with elaborate forecasting.
    • Measure overall efficiency: Measure the business output of the system and the costs associated with delivering it.
    • Stop spending money on data center operations: AWS does the heavy lifting of racking, stacking, and powering servers, so you can focus on your customers and business projects rather than on IT infrastructure.
    • Analyze and attribute expenditure: The cloud makes it easier to accurately identify the usage and cost of systems, which then allows transparent attribution of IT costs to revenue streams and individual business owners.
    • Use managed services to reduce cost of ownership: In the cloud, managed services remove the operational burden of maintaining servers for tasks like sending email or managing databases.

    Cost optimization in the cloud is composed of four areas:
    • Cost-effective resources
    • Matching supply with demand
    • Expenditure awareness
    • Optimizing over time

    Cost-Effective Resources

    Using the appropriate services, resources, and configurations for your workloads is key to cost savings. In AWS there are a number of different approaches:
    • Appropriate provisioning
    • Right sizing
    • Purchasing options: On Demand Instances, Spot Instances, and Reserved Instances
    • Geographic selection
    • Managed services
    • Optimize data transfer

    Keep in mind three key considerations when you perform right-sizing exercises:
    • The monitoring must accurately reflect the end-user experience.
    • Select the correct granularity for the time period of analysis that is required to cover any system cycles.
    • Assess the cost of modification against the benefit of right sizing

    The following table contrasts the traditional funding model against the cloud funding model.


    Traditional Data Center
    • A few big purchase decisions are made by a few people every few years.
    • Typically overprovisioned as a result of planning up front for spikes in usage.
    Cloud

    • Decentralized spending power.
    • Small decisions made by a lot of people.
    • Resources are spun up and down as new services are designed and then decommissioned.
    • Cost ramifications felt by the organization as a whole are closely monitored and tracked.
    Start with an Understanding of Current Costs

    Evaluate the following when calculating your on-premises computing costs:

    • Labor. How much do you spend on maintaining your environment?
    • Network. How much bandwidth do you need? What is your bandwidth peak to average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack?
    • Capacity. How do you plan for capacity? What is the cost of over-provisioning for peak capacity? What if you need less capacity? Are you anticipating next year’s needs?
    • Availability/Power. Do you have a disaster recovery (DR) facility? What was your power utility bill for your data centers last year? Have you budgeted for both average and peak power requirements? Do you have separate costs for cooling/HVAC? Are you accounting for 2N (parallel redundancy) power? If not, what happens when you have a power issue at your rack?
    • Servers. What is your average server utilization? How much do you overprovision for peak load? What is the cost of over-provisioning?
    • Space. Will you run out of data center space? When is your lease up?


Sunday, August 25, 2019

AWS Architecting to Scale in a nutshell

I made and used this summary, with bullet points and parts of text extracted from AWS whitepapers and from some online courses like A Cloud Guru, Linux Academy, and Udemy, to study concepts about AWS Architecting to Scale.
I hope it can be useful to someone.
Scaling in the cloud is closely related to microservice architectures: an approach to software development used to speed up deployment cycles, foster innovation and ownership, improve the maintainability and scalability of software applications, and scale organizations delivering software and services by using an agile approach that helps teams work independently from each other.

Microservices architectures are not a completely new approach to software engineering, but rather a combination of various successful and proven concepts such as:
  • Agile software development
  • Service-oriented architectures
  • API-first design
  • Continuous Integration/Continuous Delivery (CI/CD)

In many cases, design patterns of the Twelve-Factor App are leveraged for microservices.

Distributed Data Management

Monolithic applications are typically backed by a large relational database, which defines a single data model common to all application components. In a microservices approach, such a central database would prevent the goal of building decentralized and independent components. Each microservice component should have its own data persistence layer.

Distributed data management, however, raises new challenges. As a consequence of the CAP Theorem, distributed microservices architectures inherently trade off consistency for performance and need to embrace eventual consistency.

In a distributed system, business transactions can span multiple microservices. Because they cannot leverage a single ACID transaction, you can end up with partial executions. In this case, we would need some control logic to redo the already processed transactions. For this purpose, the distributed Saga pattern is commonly used. In the case of a failed business transaction, Saga orchestrates a series of compensating transactions that undo the changes that were made by the preceding transactions.
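
A minimal, illustrative sketch of the Saga idea in plain Python (not tied to any AWS service); every function name below is a hypothetical placeholder for a real microservice call:

# Each local transaction registers a compensating action; on failure the
# coordinator runs the compensations in reverse order.

def place_order(order):      ...
def cancel_order(order):     ...
def charge_payment(order):   ...
def refund_payment(order):   ...
def reserve_stock(order):    ...
def release_stock(order):    ...

def run_saga(order):
    steps = [
        (place_order, cancel_order),
        (charge_payment, refund_payment),
        (reserve_stock, release_stock),
    ]
    completed = []
    try:
        for action, compensation in steps:
            action(order)
            completed.append(compensation)
    except Exception:
        # Undo the already processed transactions in reverse order.
        for compensation in reversed(completed):
            compensation(order)
        raise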


Saga execution coordinator:

Loosely Coupled Architecture 

Loosely coupled architectures have several benefits, but the main benefit in terms of scalability is atomic functional units. These discrete units of work can scale independently.

  • Layers of abstraction
  • Permits more flexibility
  • Interchangeable components
  • More atomic functional units
  • Can scale components independently

Horizontal vs Vertical Scaling

Horizontal Scaling:
  • Add more instances as demand increases.
  • No downtime required to scale up or scale down.
  • Automatic using Auto Scaling Groups.
  • (Theoretically) unlimited.

Vertical Scaling:
  • Add more CPU and/or more RAM to the existing instance as demand increases.
  • Requires a restart to scale up or down.
  • Would require scripting to automate.
  • Limited by available instance sizes.

Auto-Scaling Groups 

If your scaling is not picking up the load fast enough to maintain a good service level, reducing the cooldown can make scaling more dramatic and responsive

  • Automatically provides horizontal scaling for your landscape.
  • Triggered by an event or scaling actions to either launch or terminate instances.
  • Availability, Cost and System Metrics can all factor into scaling.

Four Scaling options:
  • Maintain - Keep a specific or minimum number of instances running.
  • Manual - Use a maximum, minimum, or specific number of instances.
  • Scheduled - Increase or decrease instances based on a schedule.
  • Dynamic - Scale based on real-time metrics of the system.

Launch Configuration:
  • Specify the VPC and subnets for scaled instances
  • Attach to an ELB
  • Define a Health Check Grace Period
  • Define the size of the group to stay at the initial size
  • Use a scaling policy, which can be based on metrics


Scaling Types:
  • Maintain - A hands-off way to maintain X number of instances. When: “I need 3 instances always.”
  • Manual - Manually change desired capacity via console or CLI. When: “My needs change so rarely that I can just manually add and remove.”
  • Scheduled - Adjust min/max instances based on specific times. When: “Every Monday morning, we get a rush on our website.”
  • Dynamic - Scale in response to the behaviour of elements in the environment. When: “When CPU utilization gets to 70% on current instances, scale up.”
Scaling Policies:
  • Target Tracking Policy - Scales based on a predefined or custom metric in relation to a target value (see the sketch below).
  • Simple Scaling Policy - Waits until the health check and cooldown period expire before evaluating new need.
  • Step Scaling Policy - Responds to scaling needs with more sophistication and logic.
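
A minimal sketch of a Target Tracking policy attached to a hypothetical Auto Scaling group, keeping average CPU around 70%:

import boto3

# A minimal sketch of a Target Tracking scaling policy; the group name is hypothetical.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)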

Scaling Cooldowns

  • A configurable duration that gives your scaling a chance to “come up to speed” and absorb load.
  • The default cooldown period is 300 seconds.
  • Automatically applies to dynamic scaling and optionally to manual scaling, but is not supported by scheduled scaling.
  • Can override the default cooldown via a scaling-specific cooldown.

Kinesis

Kinesis Data Streams can immediately accept data that has been pushed into a stream, as soon as the data is produced, which minimises the chance of data loss at the producer stage.
The data does not need to be batched first.
Consumers can also extract metrics, generate reports, and perform analytics on the data in real time.
Kinesis Data Streams is not a long-term storage solution, as the data can only be stored within the shards for a maximum of 7 days. It also can't load the streamed data directly into data stores such as S3.

Although data can be read (or consumed) from shards within Kinesis Streams using either the Kinesis Data Streams API or the Kinesis Client Library (KCL), AWS recommends using the KCL. The KPL (Kinesis Producer Library) only allows writing to Kinesis Streams, not reading from them. You cannot interact with Kinesis Data Streams via SSH.

  • A collection of services for processing streams of various data.
  • Data is processed in “shards” - with each shard able to ingest 1,000 records per second.
  • There is a default limit of 500 shards, but you can request an increase to unlimited shards.
  • A record consists of a partition key, a sequence number, and a data blob (up to 1 MB) - see the producer sketch below.
  • Transient data store - default retention of 24 hours, but can be configured for up to 7 days.
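
A minimal producer sketch using boto3; the stream name and payload are hypothetical:

import boto3, json

# A minimal sketch of a producer writing one record to a stream. The partition
# key decides which shard the record lands on.
kinesis = boto3.client("kinesis", region_name="us-east-1")

result = kinesis.put_record(
    StreamName="sensor-readings",
    Data=json.dumps({"sensor_id": "sensor-42", "temperature": 21.5}).encode(),
    PartitionKey="sensor-42",
)
print(result["ShardId"], result["SequenceNumber"])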

Kinesis Video Streams


Kinesis Data Streams



Kinesis Data Analytics 


Kinesis Data Streams Key Concepts



DynamoDB

Throughput:
  • Read Capacity Units
  • Write Capacity Units
Max item size is 400KB

Terminology:
  • Partition: A physical space where DynamoDB data is stored.
  • Partition key: A unique identifier for each record, sometimes called a Hash Key.
  • Sort Key: In combination with a partition key, optional second part of a composite key that defines storage order, sometimes called a Range Key.

To determine the number of partitions, we need to know the table size, the RCUs, and the WCUs. For example, with a 25 GB table we know we will have at least 3 partitions based on size alone (25 / 10 = 2.5, rounded up to 3).

Partition Calculation:
  • By Capacity: (Total RCUs / 3,000) + (Total WCUs / 1,000)
  • By Size: Total Size / 10 GB
  • Total Partitions: Round up the MAX(By Capacity, By Size) - see the sketch below
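
A minimal sketch of that calculation in Python, reusing the 25 GB example from above (the throughput numbers are hypothetical):

import math

def dynamodb_partitions(total_rcu, total_wcu, table_size_gb):
    by_capacity = (total_rcu / 3000) + (total_wcu / 1000)
    by_size = table_size_gb / 10
    return math.ceil(max(by_capacity, by_size))

# A 25 GB table with modest throughput needs at least 3 partitions
# (25 / 10 = 2.5, rounded up).
print(dynamodb_partitions(total_rcu=1000, total_wcu=500, table_size_gb=25))  # -> 3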





The wrong way: use the date as the partition key and the ID as the sort key.

When asking for all the sensor readings for 2018-01-01, the query will search in the same partition (a hot partition).

The right way: use the ID as the partition key and the date as the sort key.

When asking for all the sensor readings for 2018-01-01, the query will be spread across all partitions.

Auto Scaling for DynamoDB:

  • Uses the target tracking method to try to stay close to the target utilisation.
  • Currently does not scale down if the table’s consumption drops to zero.
  • Workaround 1: Send requests to the table until it auto scales down.
  • Workaround 2: Manually reduce the max capacity to be the same as the minimum capacity.
  • Also supports Global Secondary Indexes - think of them like a copy of the table.

CloudFront

Behaviors allow us to define different origins depending on the URL path. This is useful when we want to serve static content from S3 and dynamic content from an EC2 fleet, for example, for the same website.

  • Can deliver content to your users faster by caching static and dynamic content at edge locations.
  • Dynamic content delivery is achieved using HTTP cookies forwarded from your origin.
  • Supports Adobe Flash Media Server’s RTMP protocol, but you have to choose the RTMP delivery method.
  • Web distributions also support media streaming and live streaming, but use HTTP or HTTPS.
  • Origins can be S3, EC2, ELB, or another web server.
  • Multiple origins can be configured.
  • Use behaviors to configure serving origin content based on URL paths.

Invalidation Requests
  1. Simply delete the file from the origin and wait for the TTL to expire.
  2. Use the AWS Console to request invalidation for all content or a specific path such as /images/*.
  3. Use the CloudFront API to submit an invalidation request (see the sketch below).
  4. Use third-party tools to perform CloudFront invalidation (CloudBerry, Ylastic, CDN Planet, CloudFront Purge Tool).
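
A minimal sketch of option 3, submitting an invalidation through the CloudFront API with boto3; the distribution ID is hypothetical:

import boto3, time

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1ABCDEF2GHIJ3",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/*"]},
        "CallerReference": str(time.time()),  # unique token to avoid duplicate requests
    },
)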

Simple Notification Service (SNS)
  • Enables a publish/subscribe design pattern (see the sketch below).
  • Topic = a channel for publishing a notification.
  • Subscription = configuring an endpoint to receive messages published on the topic.
  • Endpoint protocols include HTTP(S), email, SMS, SQS, Amazon Device Messaging (push notifications), and Lambda.
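
A minimal publish/subscribe sketch with boto3; the topic name and e-mail address are hypothetical:

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create a topic, subscribe an endpoint, and publish a message.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",                 # could also be sqs, lambda, http(s), sms...
    Endpoint="ops-team@example.com",
)

sns.publish(
    TopicArn=topic_arn,
    Subject="Order created",
    Message="Order 1234 was created and is awaiting payment.",
)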
 



Simple Queue Service (SQS)

  • Reliable, highly scalable, hosted message queue service (see the sketch below).
  • Available integration with KMS for encrypted messaging.
  • Transient storage: default 4 days, max 14 days.
  • Optionally supports First-In First-Out (FIFO) queue ordering.
  • Maximum message size of 256 KB, but by using a special Java SQS SDK (the Extended Client Library) you can have messages as large as 2 GB.
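
A minimal store-and-forward sketch with boto3; the queue name and message body are hypothetical:

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Send a message, receive it, and delete it once processed.
queue_url = sqs.create_queue(QueueName="image-resize-jobs")["QueueUrl"]

sqs.send_message(QueueUrl=queue_url, MessageBody='{"image": "photo-001.jpg"}')

messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
).get("Messages", [])

for message in messages:
    print("processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])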

Amazon MQ

  • Managed implementation of Apache ActiveMQ.
  • Fully managed and highly available within a region.
  • Supports the ActiveMQ API and protocols like JMS, NMS, MQTT, and WebSocket.
  • Designed as a drop-in replacement for on-premises message brokers.
  • Use SQS if you are creating a new application from scratch.
  • Use MQ if you want an easy, low-hassle path to migrate from existing message brokers to AWS.

Lambda

  • Allows you to run code on demand without the need for infrastructure.
  • Supports Node.js, Python, Java, Go, and C#.
  • Extremely useful option for creating serverless architectures.
  • Code is stateless and executes on an event basis (SNS, SQS, S3, DynamoDB Streams, etc.) - see the sketch below.
  • No fundamental limits to scaling a function, since AWS dynamically allocates capacity in relation to events.
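
A minimal sketch of a Python Lambda handler reacting to S3 events; the handler signature is the standard one, while the processing itself is a hypothetical example:

def handler(event, context):
    # Log each object that triggered the function via an S3 event notification.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"processed": len(event.get("Records", []))}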

Simple Workflow Service (AWS SWF)

  • Create distributed asynchronous systems as workflows.
  • Supports both sequential and parallel processing.
  • Tracks the state of your workflow, which you interact with and update via the API.
  • Best suited for human-enabled workflows like order fulfilment or procedural requests.
  • AWS recommends that new applications look at Step Functions instead of SWF.
 
 
Example:
Step Functions

  • Managed workflow and orchestration platform.
  • Scalable and highly available.
  • Define your app as a state machine.
  • Create tasks, sequential steps, parallel steps, branching paths, or timers.
  • Uses the Amazon States Language, a declarative JSON format (see the sketch below).
  • Apps can interact with and update the state machine via the Step Functions API.
  • A visual interface describes the flow and real-time status.
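
A minimal sketch: registering a two-state machine written in the Amazon States Language via boto3; the IAM role ARN and Lambda ARNs are hypothetical:

import boto3, json

definition = {
    "Comment": "Simple order processing flow",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-payment",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions", region_name="us-east-1")
sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-role",
)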

AWS Batch

Management tools for creating, managing, and executing batch-oriented tasks using EC2 instances.
  1. Create a Compute Environment: managed or unmanaged, Spot or On-Demand, vCPUs.
  2. Create a Job Queue with a priority and assign it to a Compute Environment.
  3. Create a Job Definition: script or JSON, environment variables, mount points, IAM role, container images, etc.
  4. Schedule the Job.


When to use which service:
  • Step Functions - When: out-of-the-box coordination of AWS service components. Use case: an order processing flow.
  • Simple Workflow Service - When: you need to support external processes or specialized execution logic. Use case: a loan application process with manual review steps.
  • Simple Queue Service - When: messaging queue; store-and-forward patterns. Use case: an image resize process.
  • AWS Batch - When: scheduled or recurring tasks that do not require heavy logic. Use case: rotating logs daily on a firewall appliance.

Elastic Map Reduce (EMR)

The Zoo


 

  • Managed Hadoop framework for processing huge amounts of data.
  • Also supports Apache Spark, HBase, Presto, and Flink.
  • Most commonly used for log analysis, financial analysis, or extract, transform, and load (ETL) activities.
  • A Step is a programmatic task for performing some process on the data (see the sketch below).
  • A cluster is a collection of EC2 instances provisioned by EMR to run your Steps.
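
A minimal sketch of starting a small cluster with one Step using boto3; the release label, instance types, roles, and S3 script path are hypothetical examples:

import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Start a 3-node cluster and run a single Hive step, terminating when done.
emr.run_job_flow(
    Name="log-analysis",
    ReleaseLabel="emr-5.29.0",
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[
        {
            "Name": "process-logs",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["hive-script", "--run-hive-script",
                         "--args", "-f", "s3://my-bucket/scripts/process-logs.hql"],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)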

Components of AWS EMR


AWS EMR Process



An Overview of Traditional Web Hosting


The same kind of application on AWS 

 
Security groups in a web application

Memcached vs. Redis

Memcached—a widely adopted in-memory key store, and historically the gold standard of web caching. ElastiCache is protocol-compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service. Memcached is also multithreaded, meaning it makes good use of larger Amazon EC2 instance sizes with multiple cores.

Redis—an increasingly popular open-source key-value store that supports more advanced data structures such as sorted sets, hashes, and lists. Unlike Memcached, Redis has disk persistence built in, meaning that you can use it for long-lived data. Redis also supports replication, which can be used to achieve Multi-AZ redundancy, similar to Amazon RDS.

Architecture with ElastiCache for Memcached


Architecture with ElastiCache for Redis


Saturday, August 10, 2019

What should we know about AWS Migration?

I made and used this summary, with bullet points and parts of text extracted from AWS whitepapers and from some online courses like A Cloud Guru, Linux Academy, and Udemy, to study concepts about AWS Migration.
I hope it can be useful to someone.
Cloud Adoption Framework
Business:
Creation of a strong business case for cloud adoption.
Business goals are congruent with cloud objectives.
Ability to measure benefits (TCO, ROI).

People:
Evaluate organizational roles and structure, new skill and process needs, and identify gaps.
Incentives and career management aligned with evolving roles.
Training options appropriate for learning styles.

Platform:
Resource provisioning can happen with standardisation.
Architecture patterns adjusted to leverage cloud-native capabilities.
New application development skills and processes enable more agility.

Security:
Identity and Access Management models change.
Logging and audit capabilities will evolve.
The Shared Responsibility Model removes some facets and adds others.

Operations:
Service monitoring has the potential to be highly automated.
Performance management can scale as needed.
Business continuity and disaster recovery take on new methods in the cloud.


Cloud Adoption Phases





Hybrid Architecture

Hybrid architectures make use of cloud resources along with on-premises resources.
They are a very common first step as a pilot for cloud migrations.
Infrastructure can augment or simply be an extension of the on-premises platform - VMware, for example.
Ideally, integrations are loosely coupled, meaning each end can exist without extensive knowledge of the other side.

 

Storage Gateway creates a bridge between on-premises and AWS.
It is seamless to end users.
A common first step into the cloud due to low risk and appealing economics.

 


Middleware is often a great way to leverage cloud services.
Loosely coupled, based on a canonical data model.

 

The VMware vCenter plugin allows transparent migration of VMs to and from AWS.
VMware Cloud on AWS furthers this concept with more public cloud native features.

Migration tools 

AWS Server Migration Service

Automates the migration of on-premises VMware vSphere or Microsoft Hyper-V/SCVMM virtual machines to AWS.
Replicates VMs to AWS, syncing volumes and creating periodic AMIs.
Minimizes cutover downtime by syncing VMs incrementally.
Supports Windows and Linux VMs only.
The Server Migration Connector is downloaded as a virtual appliance into your on-premises vSphere or Hyper-V setup.

Database Migration Service

DMS, along with the Schema Conversion Tool, helps customers migrate databases to AWS RDS or EC2-based databases.
The Schema Conversion Tool (SCT) can copy database schemas for homogeneous migrations (same database engine) and convert schemas for heterogeneous migrations (different database engines).
DMS is used for smaller, simpler conversions and also supports MongoDB and DynamoDB.
The SCT is used for larger, more complex datasets like data warehouses.
DMS has a replication function for on-premises to AWS, or to Snowball or S3.

 

Application Discovery Service

Gathers information about on-premises data centers to help in cloud migration planning.
Often customers don’t know the full inventory or status of all their data center assets, so this tool helps with that inventory.
Collects config, usage, and behaviour data from your servers to help in estimating the TCO (Total Cost of Ownership) of running on AWS.
Can run agentless (VMware environments) or agent-based (non-VMware environments).
Only supports the OSes that AWS supports (Linux and Windows).

AWS Migration Hub

Migration Hub simplifies and accelerates discovery and migration from your data centers to the AWS Cloud.

CIDR Reservation

Ensure your IP addresses will not overlap between the VPC and on-premises.
VPCs support IPv4 netmasks ranging from /16 to /28.
  • /16 = 255.255.0.0 = 65,536 IP addresses
  • /28 = 255.255.255.240 = 16 IP addresses

5 IPs are reserved in every VPC subnet (example: 10.0.0.0/24), as shown below.
  • 10.0.0.0 - network address
  • 10.0.0.1 - reserved by AWS for the VPC router
  • 10.0.0.2 - reserved by AWS for DNS
  • 10.0.0.3 - reserved by AWS for future use
  • 10.0.0.255 - VPCs don’t support broadcast, so AWS reserves this address
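
A minimal sketch of the CIDR math with Python's ipaddress module, subtracting the 5 addresses AWS reserves per subnet:

import ipaddress

for cidr in ("10.0.0.0/16", "10.0.0.0/24", "10.0.0.0/28"):
    network = ipaddress.ip_network(cidr)
    print(cidr, "->", network.num_addresses, "addresses,",
          network.num_addresses - 5, "usable in a VPC subnet")
# 10.0.0.0/16 -> 65536 addresses, 65531 usable in a VPC subnet
# 10.0.0.0/24 -> 256 addresses, 251 usable in a VPC subnet
# 10.0.0.0/28 -> 16 addresses, 11 usable in a VPC subnet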

Network Migration

Most organisations start with a VPN connection to AWS.
As usage grows, they might choose Direct Connect but keep the VPN as a backup.
The transition from VPN to Direct Connect can be relatively seamless using BGP.
Once Direct Connect is set up, configure both VPN and Direct Connect within the same BGP prefix.
From the AWS side, the Direct Connect path is always preferred.

Amazon Snow Family

An evolution of the AWS Import/Export process.
Move massive amounts of data to and from AWS.
Data transfer is as fast or as slow as you’re willing to pay a common carrier.
Encrypted at rest.

AWS Import/Export: Ship an external hard drive to AWS. Someone at AWS plugs it in and copies your data to S3.
AWS Snowball: A ruggedized NAS in a box that AWS ships to you. You copy over up to 80 TB of your data and ship it back to AWS. They copy the data over to S3.
AWS Snowball Edge: Same as Snowball, but with onboard Lambda and clustering.
AWS Snowmobile: A literal shipping container full of storage (up to 100 PB) and a truck to transport it.


AWS CAF perspectives


Organizational change management to accelerate your cloud transformation


Business Drivers

The number one reason customers choose to move to the cloud is for the agility they gain. The AWS Cloud provides more than 90 services including everything from compute, storage, and databases, to continuous integration, data analytics, and artificial intelligence.

Common drivers that apply when migrating to the cloud are:

  • Operational Costs
  • Workforce Productivity
  • Cost Avoidance
  • Operational Resilience
  • Business Agility

Migration Strategy

The “6 R’s”: 6 Application Migration Strategies


Migration patterns, transformation impact, and complexity:

  • Refactoring (complexity: high) - Rearchitecting and recoding require investment in new capabilities, delivery of complex programs and projects, and potentially significant business disruption. Optimization for the cloud should be realized.
  • Replatforming (complexity: high) - Amortization of transformation costs is maximized over larger migrations. Opportunities to address significant infrastructure upgrades can be realized. This has a positive impact on compliance, regulatory, and obsolescence drivers. Opportunities to optimize in the cloud should be realized.
  • Repurchasing (complexity: medium) - A replacement through either procurement or upgrade. Disposal, commissioning, and decommissioning costs may be significant.
  • Rehosting (complexity: medium) - Typically referred to as lift and shift or forklifting. Automated and scripted migrations are highly effective.
  • Retiring (complexity: low) - Decommission and archive data as necessary.
  • Retaining (complexity: low) - This is the do-nothing option. Legacy costs remain and obsolescence costs typically increase over time.

  • 1. Re-host (Referred to as a “lift and shift.”)
    • Move applications without changes
  • 2. Re-platform (Referred to as “lift, tinker, and shift.”)
    • Make a few cloud optimizations to achieve a tangible benefit.
  • 3. Re-factor / Re-architect  
    • Re-imagine how the application is architected and developed using cloud-native features.
  • 4. Re-purchase  
    • Move from perpetual licenses to a software-as-a-service model.
  • 5. Retire  
    • Remove applications that are no longer needed
  • 6. Retain (Referred to as re-visit.)
    • Keep applications that are critical for the business but that require major refactoring before they can be migrated


Comparison of cloud migration strategies



Your migration strategy should address the following questions:
  • Is there a time sensitivity to the business case or business driver, for example, a data center shutdown or contract expiration?
  • Who will operate your AWS environment and your applications? Do you use an outsourced provider today? What operating model would you like to have long-term?
  • What standards are critical to impose on all applications that you migrate?
  • What automation requirements will you impose on applications as a starting point for cloud operations, flexibility, and speed? Will these requirements be imposed on all applications or a defined subset? How will you impose these standards?

Building a Business Case for Migration

A migration business case has four categories:
1) run cost analysis
2) cost of change
3) labor productivity
4) business value.

A business case for migration addresses the following questions:

  • What is the future expected IT cost on AWS versus the existing (base) cost?
  • What are the estimated migration investment costs?
  • What is the expected ROI, and when will the project be cash flow positive?
  • What are the business benefits beyond cost savings?
  • How will using AWS improve your ability to respond to business changes?

The data from each value category shown in the following table provides a compelling case for migration.

 

The following are key elements of the platform work stream:

AWS landing zone – provides an initial structure and pre-defined configurations for AWS accounts, networks, identity and billing frameworks, and customer-selectable optional packages.

Account structure – defines an initial multi-account structure and pre-configured baseline security that can be easily adopted into your organizational model.

Network structure – provides baseline network configurations that support the most common patterns for network isolation, implements baseline network connectivity between AWS and on-premises networks, and provides user-configurable options for network access and administration.

Pre-defined identity and billing frameworks – provide frameworks for cross-account user identity and access management (based on Microsoft Active Directory) and centralized cost management and reporting.

Pre-defined user-selectable packages – provide a series of user-selectable packages to integrate AWS-related logs into popular reporting tools, integrate with the AWS Service Catalog, and automate infrastructure.

Application Migration Process


Migration Steps & Tools

Application migration to AWS involves multiple steps, regardless of the database engine:
1. Migration assessment analysis
2. Schema conversion to a target database platform
3. SQL statement and application code conversion
4. Data migration
5. Testing of converted database and application code
6. Setting up replication and failover scenarios for data migration to the target platform
7. Setting up monitoring for a new production environment and go live with the target environment

 

Each application is different and may require extra attention to one or more of these steps:

 

Tools to automate the migration

AWS Schema Conversion Tool (AWS SCT) – a desktop tool that automates the conversion of database objects from different database management systems (Oracle, SQL Server, MySQL, PostgreSQL) to different RDS database targets (Aurora, PostgreSQL, Oracle, MySQL, SQL Server).

AWS Database Migration Service (DMS) – a service for data migration to and from AWS database targets. 


AWS SCT and AWS DMS can be used independently. For example, AWS DMS can be used to synchronize homogeneous databases between environments, such as refreshing a test environment with production data. However, the tools are integrated so that the schema conversion and data migration steps can be used in any order. Later in this guide we will look into specific scenarios of integrating these tools.

AWS Database Migration Service

You can migrate data in two ways:
  • As a full load of existing data
  • As a full load of existing data, followed by continuous replication of data changes to the target

CDC offers two ways to implement ongoing replication:

  • Migrate existing data and replicate ongoing changes - implements ongoing replication by:
        a. (Optional) Creating the target schema.
        b. Migrating existing data and caching changes to existing data as it is migrated.
        c. Applying those cached data changes until the database reaches a steady state.
        d. Lastly, applying current data changes to the target as soon as they are received by the replication instance.

  • Replicate data changes only – replicates only the data changes (no schema) from a specified point in time (see the sketch below).
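
A minimal sketch of starting a full-load-plus-CDC replication task with boto3; all ARNs and the table-selection rule are hypothetical:

import boto3, json

dms = boto3.client("dms", region_name="us-east-1")

# Migrate existing data and replicate ongoing changes for one hypothetical schema.
dms.create_replication_task(
    ReplicationTaskIdentifier="orders-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",   # or "cdc" to replicate data changes only
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders-schema",
            "object-locator": {"schema-name": "orders", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)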

Challenges and Barriers

Your organization needs to overcome the following key challenges and barriers during this stage of the transformation:

• Limited knowledge and training
• Executive support and funding
• Purchasing public cloud services
• IT ownership and direction
