Recovering Your Amazon SES SMTP Credentials

How to recover your Amazon SES SMTP credentials:

Important Note:

Your SMTP password is not the same as your AWS secret access key. Do not attempt to use your AWS credentials to authenticate yourself against the SMTP endpoint. 

There are two ways to generate your SMTP credentials. You can either use the Amazon SES console or you can generate your SMTP credentials from your AWS credentials.

Use the Amazon SES console to generate your SMTP credentials if:

  • You want to get your SMTP credentials using the simplest method.
  • You do not need to automate SMTP credential generation using code or a script.

Generate your SMTP credentials from your AWS credentials if:

  • You have an existing  IAM  user and you want that user to be able to send emails using the Amazon SES SMTP interface.
  • You want to automate SMTP credential generation using code or a script.

 A user’s SMTP username is the same as their AWS Access Key ID, so you just need to generate the SMTP password. 

Here I am writing a Java implementation that converts an AWS secret access key into an Amazon SES SMTP password.

For example, I create a file called “smtp.java”, and in this file I create a class named “smtp”; inside it you need to put your AWS secret access key.
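
A minimal sketch of such a class might look like the following. It assumes the standard derivation for SES SMTP passwords (sign the fixed message "SendRawEmail" with the secret key using HMAC-SHA256, prepend the version byte 0x02, and Base64-encode the result) and Java 8+ for java.util.Base64; the key below is a placeholder:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class smtp {
    // Placeholder: put your AWS secret access key here.
    private static final String AWS_SECRET_ACCESS_KEY = "YOUR_AWS_SECRET_ACCESS_KEY";

    public static void main(String[] args) throws Exception {
        // Sign the fixed message "SendRawEmail" with the secret key using HMAC-SHA256.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(AWS_SECRET_ACCESS_KEY.getBytes("UTF-8"), "HmacSHA256"));
        byte[] signature = mac.doFinal("SendRawEmail".getBytes("UTF-8"));

        // Prepend the version byte (0x02) and Base64-encode the result.
        byte[] raw = new byte[signature.length + 1];
        raw[0] = 0x02;
        System.arraycopy(signature, 0, raw, 1, signature.length);

        System.out.println(Base64.getEncoder().encodeToString(raw));
    }
}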


You need to compile the file first:

$ javac smtp.java

This will create a class file called “smtp.class”, which you then run as:

$ java smtp

This will print your Amazon SES SMTP password 🙂

AWS-ElasticIP-Swapping

Attach and Detach Public IP in AWS

This script detaches an Elastic IP from one server and attaches it to a secondary private IP on another server.

For example:

We have two servers with the same content, “server01” and “server02”, each with a primary and a secondary private IP in AWS, and each instance has a public IP (i.e. an Elastic IP). These two public IPs are pointed to by DNS.

If “server01” goes down, you only need to detach the Elastic IP and attach it to the secondary private IP of “server02”.

You can find the script at the link below:

Elastic-IP-Swap
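
The script itself lives at that link, but as a rough illustration of the two API calls involved, a minimal sketch using the AWS SDK for Java might look like this (the association ID, allocation ID, instance ID and private IP below are placeholders):

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.AssociateAddressRequest;
import com.amazonaws.services.ec2.model.DisassociateAddressRequest;

public class ElasticIpSwap {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Detach the Elastic IP from server01 (VPC addresses are identified by their association ID).
        ec2.disassociateAddress(new DisassociateAddressRequest()
                .withAssociationId("eipassoc-0123456789"));   // placeholder association ID

        // Attach the same Elastic IP to the secondary private IP of server02.
        ec2.associateAddress(new AssociateAddressRequest()
                .withAllocationId("eipalloc-0123456789")       // placeholder allocation ID of the EIP
                .withInstanceId("i-0123456789")                // placeholder instance ID of server02
                .withPrivateIpAddress("10.0.0.25")             // placeholder secondary private IP
                .withAllowReassociation(true));
    }
}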


Automated VM Generation with veewee, Vagrant, Jenkins and Amazon S3

Recently at my job there has been an increasing amount of LAMP development going on. One part of the learning curve is getting developers set up with a LAMP environment to develop on. Currently this is done by developers using a shared environment. I wanted to explore the idea of developers being able to easily spin up a local LAMP environment using Vagrant and Puppet.

There are a large number of Vagrant base boxes available but people (like your boss) probably shy away from the idea of using a base box built by somebody else. Who knows what could be on it without doing some kind of audit? Luckily there is a tool called veewee that helps automate the process of building your own base boxes.

I thought it would be fun to try and automate the whole process using Jenkins and publish the box to Amazon S3. Later I’ll create a Vagrant project and reference the base box I created using veewee and work towards turning it into a LAMP box.

I’ll be using the following tools:

  • RVM – Used to install and manage our Ruby environment.
  • Ruby – veewee and Vagrant are Ruby projects.
  • RubyGems – veewee and Vagrant are provided as Ruby Gems.
  • Bundler – Used to define and resolve the Ruby Gems used by our project.
  • VirtualBox – veewee and Vagrant use VirtualBox to build and run Virtual Machines.
  • veewee – The tool we use to build our own custom box.
  • Vagrant – The tool we use to configure and run our Virtual Machines.
  • CentOS – A free Linux distribution equivalent to RedHat.
  • Puppet – The tool we’ll be using to automatically provision the VM – install packages and set up services.
  • GitHub – Where we’ll store our project code.
  • Jenkins – A Continuous Integration server we’ll use to automate the building of our box.
  • Amazon S3 – Used as a public place to host our base box.

Assumptions

I’m assuming the following about you:

  • you’re familiar with Ruby
  • you’re familiar with Unix/Linux
  • you know how to use Git fairly well
  • you know how to set up an Amazon S3 Bucket

If you don’t know any of those things then Google is your best friend.

Prepare Host Machine

I’m doing this on my Mac so there were a few things I needed to set up before getting started.

VirtualBox

Vagrant uses VirtualBox so I installed that first. You can download it from https://www.virtualbox.org/wiki/Downloads.

Ruby

veewee and Vagrant are both Ruby Gems so you’ll need to have Ruby and RubyGems installed. My recommended way of doing this is to use RVM.

Install Ruby 1.9.3

Once you have RVM installed you can install Ruby 1.9.3.

$ rvm install 1.9.3

Then switch to Ruby 1.9.3

$ rvm use 1.9.3

veewee Project

We’re going to make a veewee project that defines and builds the CentOS base box we want to use in Vagrant. Let’s start by making a directory for our veewee project:

$ mkdir veewee-centos63
$ cd veewee-centos63

Define Gem Dependencies

Since we’re using veewee and Vagrant, we need to install those Gems for our project to work. The best way to do this is using a Gemfile which lists the version of each Gem we want to use. So create a Gemfile in your project’s directory with the following contents:

source :rubygems

gem 'vagrant', '1.0.5'
gem 'veewee', '0.3.7'

Then to install the gems simply run:

$ bundle

Create Base Box Definition

Now we create the definition files required for making a base box using veewee:

$ bundle exec veewee vbox define 'centos63' 'CentOS-6.3-x86_64-minimal'

This will create a number of files in a definitions folder. You can tweak these to your liking but I’m leaving them as is for now.

FYI: The bundle exec makes sure we’re using the version of veewee defined in our Gemfile.

If you’re curious, you can see a full list of available templates by running:

$ bundle exec veewee vbox templates

Build Base Box

Now it’s time to actually build, validate and export the base box:

$ bundle exec veewee vbox build 'centos63'
$ bundle exec veewee vbox validate 'centos63'
$ bundle exec vagrant basebox export 'centos63'

This will create a file named centos63.box in your veewee-centos63 directory.

To immediately add the box to the host machine’s Vagrant boxes:

$ vagrant box add 'centos63' 'centos63.box'

If the box already exists you’ll need to first run the following:

$ bundle exec vagrant box remove 'centos63'

Later on we’ll be pushing this box to Amazon S3 and referencing it from a Vagrant project to get it installed.

Automating the Build

Since we’ll be building this box using a CI server, we want to automate the process as much as possible, so let’s write a little script called build.sh to run the above for us:

#!/bin/bash

bundle install

bundle exec veewee vbox build 'centos63' --force --auto --nogui
bundle exec veewee vbox validate 'centos63'

bundle exec vagrant basebox export 'centos63' --force

Make sure you allow executable permissions on that script so you can run it:

$ chmod u+x build.sh

Then run the script to make sure it works:

$ ./build.sh

Version Control

Finally it’s time to get this little project into version control. We’ll put it on GitHub so there’s a public place for our Jenkins server to access the code.

First let’s create a .gitignore file so we prevent the resulting box from getting checked into version control accidentally. Add the following at a minimum:

centos63.box

Now set up a local Git repo:

$ git init
$ git add definitions Gemfile Gemfile.lock build.sh
$ git commit -m "Initial project"

Set up a new GitHub repository and push your local repo to your GitHub one:

$ git remote add origin https://github.com/spilth/veewee-centos63.git
$ git push -u origin master 

Automatic Box Building with Jenkins

Now it’s time to set up Jenkins to build your base box for you whenever there’s a change to its definition.

Installation

There’s a Jenkins native package for most operating systems, so I suggest you download it from http://jenkins-ci.org/. The install package should automatically start Jenkins and you’ll be able to get to it from your browser using: http://localhost:8080/

Jenkins Plugins

You’ll need a few plugins to help build the project. From the main Jenkins screen choose Manage Jenkins, then click Manage Plugins. Click on the Available tab and in the Filter box search for the following:

  • Git Plugin – lets Jenkins check the project out from GitHub.
  • RVM – lets the build run inside an RVM-managed Ruby environment.
  • S3 publisher plugin – publishes build artifacts to an Amazon S3 Bucket.

Configure Git Plugin

From the main screen of Jenkins choose Manage Jenkins, then Configure System. Find the section titled Git plugin and enter values for Global Config user.name and Global Config user.email.

Configure Amazon S3 Plugin

On the Configure System screen also look for the section titled Amazon S3. Click on the Add button and set up a new profile with your access key and secret key.

Creating a Jenkins Job

Now we need to create a job in Jenkins that will check out your code from GitHub, run the build script we made, store the centos63.box it generates and push that box to our Amazon S3 Bucket.

  • From the main screen click on New Job.
  • Give the job a name
  • Choose Build a free-style project
  • Click OK
  • Under Source Code Management choose Git and enter the Read-Only URL for your GitHub repository.
  • Under Build Environment choose Run the build in a RVM-managed environment and enter 1.9.3 in the Implementation field.
  • Under Build click Add build step and choose Execute shell. Enter ./build.sh in the Command text area.
  • Under Post-build Actions click Add post-build action and choose Publish artifacts to S3 Bucket. Choose the profile you created above, then click Add. In the Source field put centos63.box and put the name of your S3 bucket in the Destination bucket field.
  • Click Save at the bottom of the page.
  • Finally, click the Build Now link in the left-side nav to kick off your job. Once the job starts you can click the datestamp, then Console Output to see the log of the build script.

Note that it will take some time to download the CentOS ISO and Virtual Box extensions during the first build. It will download them to an iso directory in the job’s workspace so future builds won’t take as long.

Additionally, depending on the speed of your connection, uploading to Amazon S3 might take a while.

AWS Storage Gateway


Using Amazon AWS Storage Gateway:
AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage in the form of iSCSI devices.
Storage Gateway Documentation – assume a scenario of a data centre VM where we use S3 as the storage solution.

There are two types of iSCSI volume configurations:
1. Gateway-Cached Volume Solution: create your storage volumes and mount them as iSCSI devices from your on-premises application servers. Data is stored in Amazon S3, and frequently accessed data is cached on the on-premises storage hardware.

2. Gateway-Stored Volume Solution: store all your data locally in storage volumes on your on-premises storage hardware. The gateway periodically takes snapshots as incremental backups and stores them in Amazon S3.

[Youtube Video on StorageGateway walkthrough on Windows]

AWS Storage Gateway uses two different hosting environments: VMware virtualization environment and an Amazon Elastic Compute Cloud (Amazon EC2) environment.



Setting Up AWS Storage Gateway

Steps:
Click Storage Gateway from the AWS Console
Click – Deploy a New Gateway on Amazon EC2
Click – Launch Gateway AMI
Click – Select
Select – Launch with EC2 Console
Click Accept Terms
One of the following AMIs can be chosen:

Region                            AMI ID
US East (Virginia)                ami-29f27a40
US West (Oregon)                  ami-4847cc78
US West (Northern California)     ami-36b39373
EU West (Ireland)                 ami-04393670
Asia Pacific (Singapore)          ami-4a94d618
Asia Pacific (Tokyo)              ami-d941fbd8
South America (Sao Paulo)         ami-6526fe78

The instance type must be at least Standard Extra Large (m1.xlarge) or the instance will not launch.

With the default setup, two more EBS volumes also need to be added: one for cache storage and one for the upload buffer.

NOTE: For a gateway-cached setup, you can add up to 18 TB of storage, comprising up to 2 TB allocated to the upload buffer and up to 16 TB allocated to cache storage.

Equip with Elasticache

Memcached and Its Facts

Fotolog, as they themselves point out, is probably the largest site nobody has ever heard of, pulling in more page views than even Flickr.
Fotolog has 51 instances of memcached on 21 servers with 175G in use and 254G available.

Memcached is: a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. The magic is that none of the memcached servers need know about each other. To scale up you just add more servers and the key hashing algorithm makes it all work out right. Memcached is not redundant, has no failover, and has no authentication. It’s a simple server for storing and getting data; the complex bits must be implemented by the applications.
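
As a deliberately simplified illustration of that client-side key hashing, the mapping is roughly the following (real clients normally use consistent hashing so that adding a node only remaps a fraction of the keys; the server list here is hypothetical):

public class KeyToServer {
    // Hypothetical list of memcached node endpoints, known only to the client.
    private static final String[] SERVERS = {
        "cache-node-1:11211", "cache-node-2:11211", "cache-node-3:11211"
    };

    // The client, not the servers, decides where each key lives.
    static String serverFor(String key) {
        int index = (key.hashCode() & 0x7fffffff) % SERVERS.length; // mask keeps the hash non-negative
        return SERVERS[index];
    }

    public static void main(String[] args) {
        System.out.println(serverFor("user:42:profile"));
    }
}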

Case Study

Amazon ElastiCache supports nodes with cache sizes ranging from 6 to 67 GB. A DNS name is assigned to each Cache Node when it is created.

Memcache Hashing

AWS Blog on Elasticache

Understanding Elasticache Internals

Why is memcached not recommended for sessions? Everyone does it!
If a session disappears, the user is often logged out. If a portion of a cache disappears, either due to a hardware crash or a simple software upgrade, it should not cause your users noticeable pain. This overly wordy post explains alternatives. Memcached can often be used to reduce IO requirements to very, very little, which means you may continue to use your existing relational database for the things it’s good at – like keeping your users from being knocked off your site.
For more detail on why we don’t use memcached for sessions: http://dormando.livejournal.com/495593.html

What about the MySQL query cache?
The MySQL query cache can be a useful start for small sites. Unfortunately it uses many global locks on the MySQL database, so enabling it can throttle you down. It also caches queries per table, and has to expire the entire cache related to a table whenever that table changes at all. If your site is fairly static this can work out fine, but when your tables start changing with any frequency this immediately falls over.

Memory is also limited, as it requires using a chunk of memory directly on your database server.

Can using memcached make my application slower?
Yes, absolutely. If your DB queries are all fast and your website is fast, adding memcached might not make it faster.

Memcache FAQ on Google Code

Elasticache Setup


Name: This is the Cache Identifier name and should be unique within an Amazon EC2 region (per account).

Node Type: Cache Capacity type with Memory and CPU. If you want 20 GB of distributed Cache you can choose either 3 Cache.M1.Large or 2 Cache.M1.Xlarge Node types. Usually users prefer Node types with more memory rather than High CPU node types (Not sure what kind of workload needs cache.c1.Xlarge capacity on memory hungry applications). Recently AWS has introduced Cache.M3.Class Node type which fits Memcached kind of use cases very well. The Node type cannot be modified after creating an Amazon ElastiCache Cluster, so please plan your base capacity in advance with some thought. To know more about ElastiCache deployment strategies refer URL: http://harish11g.blogspot.in/2012/11/amazon-elasticache-memcached-ec2.html

Number of Nodes: The number of cache nodes you want the Amazon ElastiCache cluster to launch. There is a limit on the number of cache nodes you can launch per account; you can increase this limit using the ElastiCache Limit Increase Request form.

Version: Memcached 1.4.5

Cache Port: The default port 11211 in which the Amazon ElastiCache node accepts connections.

Preferred Zone: The preferred Amazon EC2 Availability Zone in which you want to launch the Amazon ElastiCache cluster. It is recommended to keep the Web/App EC2 instances and the Amazon ElastiCache nodes in the same Availability Zone for low-latency processing. If multi-AZ is applicable to your architecture, standard Amazon EC2 Regional Data Transfer charges of $0.01 per GB in/out apply when transferring data between an Amazon EC2 instance and an Amazon ElastiCache node in different Availability Zones of the same Region; you are only charged for the data transfer in or out of the Amazon EC2 instance.

The Cache Security Group allows access between your EC2 instances and the ElastiCache nodes. The security group of your EC2 instances should be added to this ElastiCache security group to open access. This setting applies to all existing and new cache nodes inside the Amazon ElastiCache cluster. You can either create a new Amazon ElastiCache security group or make changes to the default security group. If you have a multitude of cache clusters with a variety of EC2 tiers accessing them across various workflows, I would strongly recommend creating your own cache security groups as a best practice. Currently Amazon ElastiCache cannot be accessed from outside Amazon EC2 (which makes sense, since you need a low-latency, high-bandwidth network to make productive use of a cache). In future we can also expect Amazon ElastiCache to work inside an Amazon Virtual Private Cloud (VPC) network.
Cache Parameter Group: You can either create a new parameter group or use the default memcached parameter group. The AWS-provided default will suffice for most use cases. The parameters are applied to all the cache nodes in the Amazon ElastiCache cluster.

What is an ElastiCache cluster node?
A cache node is the smallest building block of an Amazon ElastiCache deployment. It is a fixed-size chunk of secure, network-attached RAM. Each cache node runs an instance of the Memcached service, and has its own DNS name and port. Multiple types of cache nodes are supported, each with varying amounts of associated memory.

source

What is a Configuration Endpoint?


To use Amazon ElastiCache you have to set up a cache cluster. A cache cluster is a collection of cache nodes. You choose the number and the type of nodes to match the performance needs of your application. In the past, if you changed the nodes in your cache cluster (for example, by adding a new node), you would have to update the list of node endpoints manually. Typically, updating the list of node endpoints involves reinitializing the client by shutting down and restarting the application, which can result in downtime (depending on how the client application is architected). With the launch of Auto Discovery, this complexity has been eliminated.

All ElastiCache clusters (new and existing!) now include a unique Configuration Endpoint, which is a DNS Record that is valid for the lifetime of the cluster. This DNS Record contains the DNS names of each of the nodes that belong to the cluster. Amazon ElastiCache will ensure that the Configuration Endpoint always points to at least one such “target” node. A query to the target node then returns endpoints for all the nodes in the cluster. To be a bit more specific, running a query means sending the config command to the target node. We implemented this command as an extension to the Memcached ASCII protocol (read about Adding Auto-Discovery to Your Client Library for more information).
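
As a rough illustration of that protocol extension, the sketch below opens a plain socket to a (placeholder) configuration endpoint and sends the documented "config get cluster" command; the response contains a configuration version followed by the node list, terminated by END:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ConfigEndpointQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder configuration endpoint; replace with your cluster's endpoint.
        try (Socket socket = new Socket("mycluster.xxxxxx.cfg.use1.cache.amazonaws.com", 11211)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream());
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            // "config get cluster" is the Auto Discovery extension to the Memcached ASCII protocol.
            out.print("config get cluster\r\n");
            out.flush();

            String line;
            while ((line = in.readLine()) != null && !line.equals("END")) {
                System.out.println(line);   // CONFIG header, version number, then host|ip|port entries
            }
        }
    }
}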

You can then connect to the cluster nodes just as before and use the Memcached protocol commands such as get, set, incr, and decr. The Configuration Endpoint is accessible programmatically through the ElastiCache API, via the command line tools, and from the ElastiCache Console.

To take advantage of Auto Discovery, you will need to use a Memcached client library that is able to use this new feature. To get started, you can use the ElastiCache Cluster Client, which takes the popular SpyMemcached client and adds Auto Discovery functionality. We have a Java client available now (view source), which can be downloaded from the ElastiCache Console.
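
Assuming the ElastiCache Cluster Client keeps SpyMemcached's MemcachedClient API (it is built on top of SpyMemcached), a rough sketch of using it from Java might look like this; the configuration endpoint below is a placeholder:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class ElastiCacheExample {
    public static void main(String[] args) throws Exception {
        // With the ElastiCache Cluster Client, pointing at the Configuration Endpoint is enough;
        // the client discovers and connects to every node in the cluster.
        MemcachedClient client = new MemcachedClient(
                new InetSocketAddress("mycluster.xxxxxx.cfg.use1.cache.amazonaws.com", 11211));

        client.set("foo", 3600, "Hello!").get();   // key, expiry in seconds, value; wait for completion
        System.out.println(client.get("foo"));

        client.shutdown();
    }
}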

We plan to add Auto Discovery support to other popular Memcached client libraries over time; a PHP client is already in the works.

ElastiCache remains 100% Memcached-compatible so you can keep using your existing Memcached client libraries with new and existing clusters, but to take advantage of Auto Discovery you must use an Auto Discovery-capable client.

Do we need to install memcached on the server?
You don’t need to have memcached installed on the server; you only need the memcache PECL module in your PHP installation. ElastiCache is a memcached server, nothing more, nothing less. As long as you have the memcache PECL module installed in your PHP, the memcache option will be available in the W3TC dropdowns.

Installing memcache on PHP
You can install the pecl module with:
#pecl install memcache
OR on an apt-based system like Debian or Ubuntu
#apt-get install php5-memcached

Installing Elasticache Cluster Client Module
source
#apt-get update
#apt-get install gcc g++ php5 php-pear

Download Amazon_Elasticache_cluster_client for PHP version from the Elasticache Management Console.

With root/sudo permission, add a new file memcached.ini under the directory /etc/php5/conf.d, and insert "extension=<absolute path to amazon-elasticache-cluster-client.so>" in it.
#echo "extension=<absolute path to amazon-elasticache-cluster-client.so>" > /etc/php5/conf.d/memcached.ini

Test it by running the below contents inside a .php file
<?php
error_reporting(E_ALL & ~E_NOTICE);

$mc = new Memcached();
$mc->addServer("localhost", 11211);

$mc->set("foo", "Hello!");
$mc->set("bar", "Memcached…");

$arr = array(
    $mc->get("foo"),
    $mc->get("bar")
);
#uncomment below line if detailed results are necessary
#var_dump($arr);
echo "<br> foo = " . $mc->get("foo");
echo "<br> bar = " . $mc->get("bar");
?>

Observations
Here $mc is an object of the Memcached class, which is provided by the PHP module “php5-memcached” (or by the ElastiCache Cluster Client extension). Hence the addServer() call also works; there we add the endpoint of our ElastiCache cluster node (or the Configuration Endpoint) and the port number.
Output should be
foo = Hello!
bar = Memcached…
Which means that the memcache module is online.

Developer’s Part in Elasticache Implementation
Internal Working of the Elasticache and Application
Different Elasticache Scenarios

Convey this to the PHP developer and provide them with the Configuration Endpoint:
ElastiCache supports Auto Discovery—the ability for client programs to automatically identify all of the nodes in a cache cluster, and to initiate and maintain connections to all of these nodes. With Auto Discovery, your application does not need to manually connect to individual cache nodes; instead, your application connects to a configuration endpoint. The configuration endpoint DNS entry contains the CNAME entries for each of the cache node endpoints; thus, by connecting to the configuration endpoint, your application immediately “knows” about all of the nodes in the cluster and can connect to all of them. You do not need to hardcode the individual cache node endpoints in your application.

Choosing a Cache Node Type and the Number of Cache Nodes

The total memory capacity of your cache cluster is calculated by multiplying the number of cache nodes in the cluster by the capacity of each node. The capacity of each cache node is based on the cache node type.
The number of cache nodes in the cache cluster is a key factor in the availability of your cache cluster. The failure of a single cache node can have an impact on the availability of your application and the load on your backend database while ElastiCache provisions a replacement for the failed cache node. You can reduce the scale of this availability impact by spreading your memory and compute capacity over a larger number of smaller-capacity cache nodes, rather than a few high-capacity nodes.

In a scenario where you want to have 20GB of cache memory, you can set it up in one of the following ways:

Use 15 cache.m1.small cache nodes with 1.3 GB of memory each = 19.5 GB

Use 3 cache.m1.large cache nodes with 7.1 GB of memory each = 21.3 GB

Use 3 cache.c1.xlarge cache nodes with 6.6 GB of memory each = 19.8 GB

These options provide you with similar memory capacity, but different computational capacity for your cache cluster.

If you’re unsure about how much capacity you need, we recommend starting with one cache.m1.small cache node type and monitoring the memory usage, CPU utilization and Cache Hit Rate with the ElastiCache metrics that are published to Amazon CloudWatch.
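
If you prefer to pull those metrics from code rather than the console, a minimal sketch using the AWS SDK for Java might look like the following (the cluster ID and node ID are placeholders; ElastiCache publishes its metrics in the AWS/ElastiCache CloudWatch namespace with CacheClusterId and CacheNodeId dimensions):

import java.util.Date;
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Datapoint;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;

public class CacheMetrics {
    public static void main(String[] args) {
        AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();

        // Average CPU utilization of one cache node over the last hour, in 5-minute periods.
        GetMetricStatisticsRequest request = new GetMetricStatisticsRequest()
                .withNamespace("AWS/ElastiCache")
                .withMetricName("CPUUtilization")
                .withDimensions(
                        new Dimension().withName("CacheClusterId").withValue("mycluster"), // placeholder
                        new Dimension().withName("CacheNodeId").withValue("0001"))         // placeholder
                .withStartTime(new Date(System.currentTimeMillis() - 3600 * 1000))
                .withEndTime(new Date())
                .withPeriod(300)
                .withStatistics("Average");

        for (Datapoint dp : cw.getMetricStatistics(request).getDatapoints()) {
            System.out.println(dp.getTimestamp() + " " + dp.getAverage() + "%");
        }
    }
}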

If your cache cluster does not have the desired hit rate, you can easily add more nodes, thereby increasing the total available memory in your cache cluster. You will need to obtain an updated endpoint list from the ElastiCache CLI, API or AWS Management Console, and configure your clients to use the additional node(s).

If your cache cluster turns out to be bound by CPU but has sufficient hit rate, then try setting up a new cluster with a different cache node type.

ElastiCache supports adding or removing cache nodes from an existing cache cluster using the AWS Management Console, the API, and the command line tools, allowing you to increase both memory and compute capacity of the cluster at any time.

Note: ElastiCache does not currently support dynamically changing the cache node type for a cache cluster after it has been created. If you wish to change the Node Type of a cache cluster, you will need to set up a new cache cluster with the desired Node Type, and migrate your application to that cache cluster.

Configure PHP to use ElastiCache for Sessions

#vim /etc/php5/conf.d/memcached.ini
extension=/root/ecache/amazon-elasticache-cluster-client.so
Editing the php.ini file
#vim /etc/php5/apache2/php.ini
session.save_handler = memcached
#We mention the configuration endpoint of the ElastiCache cluster
session.save_path = "test.nugnqi.cfg.use1.cache.amazonaws.com:11211"
#The extension is already set in the conf.d/memcached.ini file, hence this line may not be necessary
extension=/root/ecache/amazon-elasticache-cluster-client.so
#The ecache directory contains the ElastiCache Cluster Client downloaded for the appropriate PHP version, used in place of the stock memcached extension of PHP.

Install phpMemcachedAdmin

wget "http://phpmemcacheadmin.googlecode.com/files/phpMemcachedAdmin-1.2.2-r262.tar.gz"
tar xvzf phpMemcachedAdmin-1.2.2-r262.tar.gz   (inside apache documentroot)
chmod +r *
chmod 0777 Config/Memcache.php

Install & Use s3cmd for S3 Storage

Amazon S3 is a reasonably priced data storage service. Ideal for off-site backups, archiving and other data storage needs. It is generally more reliable than your regular web hosting for storing your files and images. Check out About Amazon S3 section to find out more.

S3cmd is a command line tool for uploading, retrieving and managing data in Amazon S3. It is best suited for power users who don’t fear command line. It is also ideal for scripts, automated backups triggered from cron, etc.

S3cmd is an open source project available under the GNU Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage; none of this money goes to the S3cmd developers.

#apt-get install s3cmd

To configure s3cmd
#s3cmd --configure
[Enter Access Key and Secret Key]

Configuration file is saved into
/root/.s3cfg

To get Help
#s3cmd --help

To List Buckets
#s3cmd ls

To Delete Non-Empty Buckets
#s3cmd rb s3://bucket_name -fv

Copy buckets to local machine
#s3cmd get s3://bucket_name -r

Create Buckets
#s3cmd mb s3://bucket_name

Syncing local dir with s3 Buckets
#s3cmd sync local_dir/ s3://bucket_name

AWS Products/Solutions – Admins Capsule

Database

Amazon RDS

  • You can think of an RDS DB Instance as a database environment in the cloud with the compute and storage resources you specify.
  • You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools.
  • Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance.
  • For optional Multi-AZ deployments (currently supported for MySQL and Oracle database engines), Amazon RDS also manages synchronous data replication across Availability Zones and automatic failover.
  • Amazon RDS FAQs
  • By default, customers are allowed to have up to a total of 20 Amazon RDS DB instances.
  • RDS cannot remove storage once it has been allocated. The only way to reduce the amount of storage allocated to a DB Instance is to dump the data out of the DB Instance, create a new DB Instance with less storage space, and load the data into the new DB Instance.
  • Unlike Multi-AZ deployments, Read Replicas use MySQL’s built-in replication and are subject to its strengths and limitations. This means recent database updates made to a standard (non Multi-AZ) source DB Instance may not be present on associated Read Replicas in the event of an unplanned outage on the source DB Instance. As such, Read Replicas do not offer the same data durability benefits as Multi-AZ deployments. While Read Replicas can provide some read availability benefits, they are not designed to improve write availability. Read Replicas are currently supported for Amazon RDS for MySQL. They can also be used for serving read traffic when the primary database is unavailable.
  • The read replica mechanism uses MySQL’s native, asynchronous replication. This means replicas might be lagging behind the master as they try to catch up with writes. The interesting thing about this is that multi-AZ RDS instances apparently use another, proprietary type of synchronous replication.
  • A Read Replica will stay active and continue accepting read traffic even after its corresponding source DB Instance has been deleted. If you desire to delete the Read Replica in addition to the source DB Instance, you must explicitly delete the Read Replica using the DeleteDBInstance API or AWS Management Console.
  • By default and at no additional charge, Amazon RDS enables automated backups of your DB Instance with a 1 day retention period.
  • During the backup window, storage I/O may be suspended while your data is being backed up. This I/O suspension typically lasts a few minutes at most. This I/O suspension is avoided with Multi-AZ DB deployments, since the backup is taken from the standby.
  • Amazon RDS DB snapshots and automated backups are stored in S3.
  • If you desire to turn off automated backups altogether, you can do so by setting the retention period to 0 (not recommended).
  • When you delete a DB Instance, you have the ability to specify whether a final DB Snapshot is created upon deletion, which enables a DB Snapshot restore of the deleted database instance at a later date. All previously created DB Snapshots of your DB Instance will be retained and billed at $0.15 per GB-month, unless you choose to delete them.
  • Amazon RDS does not currently provide access to the binary logs for your Database Instance.
  • You are not charged for the data transfer incurred in replicating data between your source DB Instance and Read Replica. Billing for a Read Replica begins as soon as the Read Replica has been successfully created (i.e. when status is listed as “active”). The Read Replica will continue being billed at standard Amazon RDS DB Instance hour rates until you issue a command to delete it.
  • Amazon RDS primarily has 3 engines – MySQL Database Engine, Oracle Database Engine, and Microsoft SQL Server Database Engine.
  • Setup RDS CLI.
  • RDS Terminology and Concepts.
  • How to Connect to RDS – MySQL.
  • RDS CLI – References.
  • Creating and Modifying  DB Instance.
  • Backing UP and Restoring DB Instances.
  • Viewing RDS Instance events.
  • Working with DB Parameter Groups, security groups, option groups & viewing DB instance metrics.
  • RDS Security Best Practices !!!
  • Tech Tips – Scaling Databases with Amazon RDS
  • Tech Tips – On Demand Test Databases
  • Tech Tips IV: Best Practices to Avoid an Inoperable RDS MySQL DB Instance
  • Tech Tips V: Defining CloudWatch alarms for Amazon RDS metrics
  • The default storage engine with RDS is InnoDB, but you are free to choose another, like the popular MyISAM. It is important to realize that read replicas on nontransactional storage engines (like MyISAM) require you to freeze your databases, as the consistency cannot be guaranteed when snapshotting. But if you use InnoDB, you are safe, and the only thing you have to do is fire up a new read replica.
  • RDS storage is independent of RDS instance classes. Every class can have from 5 GB to 1 TB of storage associated. Scaling up the storage is easy, and you can do it using the Console. It does require a reboot. On the other hand, scaling down the storage is impossible.
  • Reserved DB Instances page.
  • On-Demand DB Instances.
  • RDS CLI
  • 10 things you should know about RDS.

Amazon DynamoDB

  • A fast, highly scalable NoSQL database service
  • A fully managed service that offers extremely fast performance, seamless scalability and reliability, low cost and more.
  • Video

Amazon SimpleDB

  • A NoSQL database service for smaller datasets.
  • A fully managed service that provides a schemaless database, reliability and more.

Your choice of relational AMIs

  • A relational database you can manage on your own.
  • Runs on Amazon EC2 and EBS, which provide scalable compute & storage, complete control over instances, and more.

Amazon ElastiCache

  • Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud.
  • Amazon ElastiCache CLI Tool.

Compute

Amazon Elastic Compute Cloud (EC2)

  • Amazon Elastic Compute Cloud delivers scalable, pay-as-you-go compute capacity in the cloud.
  • Amazon EC2 Instance Types
  • EBS-Optimized instances are available for selected instance types only, such as Standard instances (Large, Extra Large) and High-Memory instances (High-Memory Quadruple Extra Large).
  • Amazon EC2 instances are grouped into seven families: Standard, Micro, High-Memory, High-CPU, Cluster Compute, Cluster GPU, and High I/O.
  • Amazon EC2 API (CLI Tools) and how to set it up.
  • Simple CLI Access to Amazon EC2 and S3

Amazon Elastic MapReduce

  • Amazon Elastic MapReduce is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data.

Auto Scaling

  • Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define.
  • Auto Scaling CLI Tool.

Networking

Elastic Load Balancing

  • ELB API Tools.
  • Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances.