Base install of Elasticsearch 5.5.1 for Ubuntu 16.04 in AWS EC2

CONNECT TO YOUR INSTANCE VIA SSH…

Laptop:$ ssh ubuntu@54.174.41.136

INSTALL JAVA/OPENJDK FIRST.

Find OpenJDK in apt…

$ sudo apt search openjdk

As of this writing OpenJDK 9 doesn't work with ES…I installed 8…

$ sudo apt-get install openjdk-8-jdk
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
ca-certificates-java fontconfig fontconfig-config fonts-dejavu-core fonts-dejavu-extra hicolor-icon-theme
java-common libasound2 libasound2-data libasyncns0 libatk1.0-0 libatk1.0-data libavahi-client3 libavahi-common-data
libavahi-common3 libcairo2 libcups2 libdatrie1 libdrm-amdgpu1 libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libflac8
libfontconfig1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgif7 libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa
libgraphite2-3 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libharfbuzz0b libice-dev libice6 libjbig0 libjpeg-turbo8
libjpeg8 liblcms2-2 libllvm4.0 libnspr4 libnss3 libnss3-nssdb libogg0 libpango-1.0-0 libpangocairo-1.0-0
libpangoft2-1.0-0 libpciaccess0 libpcsclite1 libpixman-1-0 libpthread-stubs0-dev libpulse0 libsensors4 libsm-dev
libsm6 libsndfile1 libthai-data libthai0 libtiff5 libtxc-dxtn-s2tc0 libvorbis0a libvorbisenc2 libx11-dev libx11-doc
libx11-xcb1 libxau-dev libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-render0 libxcb-shm0
libxcb-sync1 libxcb1-dev libxcomposite1 libxcursor1 libxdamage1 libxdmcp-dev libxfixes3 libxi6 libxinerama1
libxrandr2 libxrender1 libxshmfence1 libxt-dev libxt6 libxtst6 libxxf86vm1 openjdk-8-jdk-headless openjdk-8-jre
openjdk-8-jre-headless x11-common x11proto-core-dev x11proto-input-dev x11proto-kb-dev xorg-sgml-doctools xtrans-dev
Suggested packages:
default-jre libasound2-plugins alsa-utils cups-common librsvg2-common gvfs libice-doc liblcms2-utils pcscd
pulseaudio lm-sensors libsm-doc libxcb-doc libxt-doc openjdk-8-demo openjdk-8-source visualvm icedtea-8-plugin
openjdk-8-jre-jamvm libnss-mdns fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei fonts-wqy-zenhei
fonts-indic
The following NEW packages will be installed:
ca-certificates-java fontconfig fontconfig-config fonts-dejavu-core fonts-dejavu-extra hicolor-icon-theme
java-common libasound2 libasound2-data libasyncns0 libatk1.0-0 libatk1.0-data libavahi-client3 libavahi-common-data
libavahi-common3 libcairo2 libcups2 libdatrie1 libdrm-amdgpu1 libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libflac8
libfontconfig1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgif7 libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa
libgraphite2-3 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libharfbuzz0b libice-dev libice6 libjbig0 libjpeg-turbo8
libjpeg8 liblcms2-2 libllvm4.0 libnspr4 libnss3 libnss3-nssdb libogg0 libpango-1.0-0 libpangocairo-1.0-0
libpangoft2-1.0-0 libpciaccess0 libpcsclite1 libpixman-1-0 libpthread-stubs0-dev libpulse0 libsensors4 libsm-dev
libsm6 libsndfile1 libthai-data libthai0 libtiff5 libtxc-dxtn-s2tc0 libvorbis0a libvorbisenc2 libx11-dev libx11-doc
libx11-xcb1 libxau-dev libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-render0 libxcb-shm0
libxcb-sync1 libxcb1-dev libxcomposite1 libxcursor1 libxdamage1 libxdmcp-dev libxfixes3 libxi6 libxinerama1
libxrandr2 libxrender1 libxshmfence1 libxt-dev libxt6 libxtst6 libxxf86vm1 openjdk-8-jdk openjdk-8-jdk-headless
openjdk-8-jre openjdk-8-jre-headless x11-common x11proto-core-dev x11proto-input-dev x11proto-kb-dev
xorg-sgml-doctools xtrans-dev
0 upgraded, 100 newly installed, 0 to remove and 25 not upgraded.
Need to get 66.6 MB of archives.
After this operation, 367 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Install Elastic’s GPG key…

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
OK

Make sure apt-transport-https is installed/up to date…

$ sudo apt-get install apt-transport-https
Reading package lists… Done
Building dependency tree
Reading state information… Done
apt-transport-https is already the newest version (1.2.20).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Save the repo definition…

$ echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
deb https://artifacts.elastic.co/packages/5.x/apt stable main

Update and install…

$ sudo apt-get update && sudo apt-get install elasticsearch
Fetched 12.0 MB in 2s (5,955 kB/s)
Reading package lists… Done
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following NEW packages will be installed:
elasticsearch
0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded.
Need to get 33.4 MB of archives.
After this operation, 37.3 MB of additional disk space will be used.
Get:1 https://artifacts.elastic.co/packages/5.x/apt stable/main amd64 elasticsearch all 5.5.1 [33.4 MB]
Fetched 33.4 MB in 0s (40.0 MB/s)
Selecting previously unselected package elasticsearch.
(Reading database … 51035 files and directories currently installed.)
Preparing to unpack …/elasticsearch_5.5.1_all.deb …
Creating elasticsearch group… OK
Creating elasticsearch user… OK
Unpacking elasticsearch (5.5.1) …
Processing triggers for systemd (229-4ubuntu19) …
Processing triggers for ureadahead (0.100.0-19) …
Setting up elasticsearch (5.5.1) …
Processing triggers for systemd (229-4ubuntu19) …
Processing triggers for ureadahead (0.100.0-19) …

Check which init system we are running (it should be systemd)…

$ ps -p 1
PID TTY TIME CMD
1 ? 00:00:02 systemd

Reload the daemon…

$ sudo /bin/systemctl daemon-reload

ES doesn't start up on boot by itself…let's change that…

$ sudo /bin/systemctl enable elasticsearch.service
Synchronizing state of elasticsearch.service with SysV init with /lib/systemd/systemd-sysv-install…
Executing /lib/systemd/systemd-sysv-install enable elasticsearch
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

Start ES…

$ sudo systemctl start elasticsearch.service
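
Before digging into the log files, it doesn't hurt to confirm systemd thinks the service is up (both of these are stock systemd commands):

$ sudo systemctl status elasticsearch.service
$ sudo journalctl -u elasticsearch.service --no-pager | tail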

We’ve got logs…

$ sudo ls -la /var/log/elasticsearch/
total 12
drwxr-x--- 2 elasticsearch elasticsearch 4096 Aug 15 14:55 .
drwxrwxr-x 8 root syslog 4096 Aug 15 14:54 ..
-rw-r--r-- 1 elasticsearch elasticsearch 0 Aug 15 14:55 elasticsearch_deprecation.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Aug 15 14:55 elasticsearch_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 0 Aug 15 14:55 elasticsearch_index_search_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 3552 Aug 15 14:56 elasticsearch.log

Let's see how the startup went…

$ sudo cat /var/log/elasticsearch/elasticsearch.log
[2017-08-15T14:55:56,662][INFO ][o.e.n.Node ] [] initializing …
[2017-08-15T14:55:56,730][INFO ][o.e.e.NodeEnvironment ] [aZ2tzij] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [27.6gb], net total_space [29gb], spins? [no], types [ext4]
[2017-08-15T14:55:56,731][INFO ][o.e.e.NodeEnvironment ] [aZ2tzij] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-08-15T14:55:56,732][INFO ][o.e.n.Node ] node name [aZ2tzij] derived from node ID [aZ2tzijmSg2Jixolu5X9Kw]; set [node.name] to override
[2017-08-15T14:55:56,732][INFO ][o.e.n.Node ] version[5.5.1], pid[17918], build[19c13d0/2017-07-18T20:44:24.823Z], OS[Linux/4.4.0-1022-aws/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-08-15T14:55:56,732][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-08-15T14:55:57,610][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [aggs-matrix-stats]
[2017-08-15T14:55:57,610][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [ingest-common]
[2017-08-15T14:55:57,610][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [lang-expression]
[2017-08-15T14:55:57,610][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [lang-groovy]
[2017-08-15T14:55:57,610][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [lang-mustache]
[2017-08-15T14:55:57,610][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [lang-painless]
[2017-08-15T14:55:57,611][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [parent-join]
[2017-08-15T14:55:57,611][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [percolator]
[2017-08-15T14:55:57,611][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [reindex]
[2017-08-15T14:55:57,611][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [transport-netty3]
[2017-08-15T14:55:57,611][INFO ][o.e.p.PluginsService ] [aZ2tzij] loaded module [transport-netty4]
[2017-08-15T14:55:57,611][INFO ][o.e.p.PluginsService ] [aZ2tzij] no plugins loaded
[2017-08-15T14:55:59,268][INFO ][o.e.d.DiscoveryModule ] [aZ2tzij] using discovery type [zen]
[2017-08-15T14:55:59,763][INFO ][o.e.n.Node ] initialized
[2017-08-15T14:55:59,763][INFO ][o.e.n.Node ] [aZ2tzij] starting …
[2017-08-15T14:55:59,887][INFO ][o.e.t.TransportService ] [aZ2tzij] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-08-15T14:56:02,944][INFO ][o.e.c.s.ClusterService ] [aZ2tzij] new_master {aZ2tzij}{aZ2tzijmSg2Jixolu5X9Kw}{K5qXuQfDT7mvt_Dne2cwfA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-08-15T14:56:02,967][INFO ][o.e.g.GatewayService ] [aZ2tzij] recovered [0] indices into cluster_state
[2017-08-15T14:56:02,969][INFO ][o.e.h.n.Netty4HttpServerTransport] [aZ2tzij] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-08-15T14:56:02,970][INFO ][o.e.n.Node ] [aZ2tzij] started

Log looks good…Curl the server…

$ curl -XGET 'localhost:9200/?pretty'
{
  "name" : "aZ2tzij",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "bUCHwEKfTbKfBvNE_lOfVg",
  "version" : {
    "number" : "5.5.1",
    "build_hash" : "19c13d0",
    "build_date" : "2017-07-18T20:44:24.823Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

Sweet…we're good to go. Next steps would be to set up your mapping and import some data, assuming you don't want to tweak the configs first. The config lives in /etc/elasticsearch/elasticsearch.yml.
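
As a rough sketch of that mapping step (the index name "my-index", the type "my_type", and the field names are all placeholders; adjust to your own data), an ES 5.x index with a simple mapping can be created and then inspected with curl:

$ curl -XPUT 'localhost:9200/my-index?pretty' -H 'Content-Type: application/json' -d '
{
  "settings" : { "number_of_shards" : 1 },
  "mappings" : {
    "my_type" : {
      "properties" : {
        "title"      : { "type" : "text" },
        "created_at" : { "type" : "date" }
      }
    }
  }
}'
$ curl -XGET 'localhost:9200/my-index/_mapping?pretty'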


Useful things

I started a Github repo for some random scripts and other useful things that can be downloaded and tweaked for your use. Check the Readme on the repo for a description of the contents.

https://github.com/sethfloydjr/AWS_Resources

Nginx Config for client_max_body_size in Opsworks

If you are still old school and are using Chef 11 in AWS OpsWorks and you need to set "client_max_body_size" to something larger than the default, add this to your layer's custom JSON:

"nginx":{
 "client_max_body_size": "10M"
}

This gets applied during the Setup lifecycle event, so once you have added it, go to each of your instances > Run Command > Setup.
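
If that is the only thing in the layer's Custom JSON box, the complete contents would just be the snippet wrapped in a top-level object (any other attributes you already have can sit alongside the "nginx" key):

{
  "nginx": {
    "client_max_body_size": "10M"
  }
}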

Have fun….and upgrade to Chef 12.

AWS Cloudformation Diagramming Shortcut

A quick little shortcut if you want a visualization of your Cloudformation stack…

  1. Log into the AWS console and go to CloudFormation.
  2. Click DESIGN TEMPLATE.
  3. At the bottom of the screen look for the TEMPLATE tab.
  4. Copy the CF template you are working on and paste it in here. Make sure you overwrite the prefilled JSON that's already there.
  5. In the upper right-hand corner look for the Refresh button. It looks like two arrows going in a circle.
  6. You should now see your CF stack visualized, and you can click the Export button next to the Refresh button you just used.

 

Knife EC2 Server Create Error: Authentication failed

Sometimes, with all of the rush and trying to keep track of a thousand moving parts, you might get stumped by a fairly simple issue. Here are a few things to check if you get hung up with an "Authentication failed for user" error when running a "knife ec2 server create" command.

Waiting for sshd access to become available…done
Connecting to 52.5.159.42
Failed to authenticate ec2-user - trying password auth
Enter your password:
ERROR: Net::SSH::AuthenticationFailed: Authentication failed for user ec2-user@52.5.159.42@52.5.159.42

Do you have your .pem file downloaded and installed with the correct permissions on the workstation you are running the command from?

It should be in the ec2-user's .ssh dir -> /home/ec2-user/.ssh

Make sure it's chmodded to 400.

Make sure your knife.rb is set up to reference the .pem file for you…otherwise you will have a lot of typing to do on the command line.

knife[:identity_file] = "/home/ec2-user/.ssh/aws-seth.pem"

Make sure you are using the correct user. Unless you have specifically changed something in your configuration, by default you will be connecting as "ec2-user", so make sure that's what is trying to connect in your error output.
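
For reference, a typical invocation looks something like the sketch below; the short flags are from the older knife-ec2 plugin that shipped alongside Chef 11/12 (flag names can vary by version), and the AMI, flavor, and key names are placeholders you would swap for your own:

knife ec2 server create -I ami-XXXXXXXX -f t2.micro -S YOUR_KEYPAIR_NAME -i ~/.ssh/YOUR_KEYPAIR_NAME.pem -x ec2-user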

Hopefully these tips will help you narrow down the issue. You have to think about what's really happening and from where; with Chef, sometimes these simple issues can really drive you nuts.

Setting up an Apache server using AWS CloudFormation

This is a brief and simple guide to spinning up an instance, installing Apache, and writing output to a webpage.

Note that by following this guide you can incur charges. I'm not responsible for them and will not be picking up your bill.

Setup

1. Have an AWS account. You can sign up for one here: http://aws.amazon.com/
2. Have a computer you can work from. This guide assumes you are running Linux. What other OS is there anyway? For me, I have an instance on AWS that I start up whenever I want to work on stuff. On this computer you need to install the AWS CLI tool. You can install and set up the CLI tool by going here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html
3. A basic understanding of JSON…not a requirement, but it helps to understand what you are reading in the template.

The Template

First, here is a link that will take you further than I'm going to go here on the ins and outs of templates and requirements for CloudFormation: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html

So let's break down the template…

{
 "AWSTemplateFormatVersion" : "2010-09-09",

This is optional, but it lets AWS know which template format version you are using. This is NOT today's date or something.

"Description" : "Create an Apache webserver with a webpage.",

Gives a description of what your template is for.

"Parameters" : {
    "InstanceType" : {
      "Description" : "Type of EC2 instance to launch",
      "Type" : "String",
      "Default" : "t1.micro"
    },
    "WebServerPort" : {
      "Description" : "TCP/IP port of the web server",
      "Type" : "String",
      "Default" : "80"
    },
    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
    },
    "SSHLocation": {
            "Description": " The IP address range that can be used to SSH to the EC2 instances",
            "Type": "String",
            "MinLength": "9",
            "MaxLength": "18",
            "Default": "0.0.0.0/0",
            "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
            "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
        }
  },

Breaking down the Parameters section…
*If you don't give a parameter a "Default" you will need to pass the value in via the command line. This will be explained further on, but keep it in mind for this section.
InstanceType – The type of instance you want to launch. The default here is t1.micro.
WebServerPort – The port your webserver will accept connections on.
KeyName – The name of the AWS key pair you use to authenticate. Use a key that has permission to set up new instances and connect to them.
SSHLocation – The IP address range that can be used to SSH to the EC2 instances.

"Mappings" : {
    "AWSInstanceType2Arch" : {
      "t1.micro"    : { "Arch" : "64" }
    },
    "AWSRegionArch2AMI" : {
      "us-east-1"          : { "64" : "ami-246ed34c" }
    }
  },

The Mappings section tells CloudFormation which AMI to use for a given region and architecture. For brevity, I already knew what I wanted to use and that I wanted it to fail if not available, so my listing of types and regions is VERY short. You can find further info on mappings here: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html
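
If you wanted this to work in more than one region, you would just widen the maps; a minimal sketch (the second AMI ID is a placeholder you would look up for that region):

"Mappings" : {
    "AWSInstanceType2Arch" : {
      "t1.micro"    : { "Arch" : "64" }
    },
    "AWSRegionArch2AMI" : {
      "us-east-1"          : { "64" : "ami-246ed34c" },
      "us-west-2"          : { "64" : "ami-XXXXXXXX" }
    }
  },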

Now we will talk about the Resources section. It's where all the real magic happens. I'll break it down some and then provide the full section at the bottom of this page.

"Resources" : {     
      
    "WebServerInstance": {  
      "Type": "AWS::EC2::Instance",
      "Metadata" : {
        "AWS::CloudFormation::Init" : {
          "configSets" : {
            "InstallAndRun" : [ "Install" ]
          },

          "Install" : {
            "packages" : {
              "yum" : {
                "httpd"        : []              
              }
            },

Here we are saying that we want to initialize the instance and install something on it. Notice that we are using "yum". The AMI I use is an Amazon AMI, which is RHEL based. It comes with some AWS-specific tools already set up and installed, so it's easier to use this AMI as a base. Here, all we are installing is Apache, or "httpd".

"files" : {
              "/var/www/html/index.html" : {
              "source" : "https://s3.amazonaws.com/BUCKET_NAME/index.html",  
              "mode"  : "000600",
              "owner" : "apache",
              "group" : "apache"
              },

              "/etc/cfn/cfn-hup.conf" : {
                "content" : { "Fn::Join" : ["", [
                  "[main]\n",
                  "stack=", { "Ref" : "AWS::StackId" }, "\n",
                  "region=", { "Ref" : "AWS::Region" }, "\n"
                ]]},
                "mode"    : "000400",
                "owner"   : "root",
                "group"   : "root"
              },

Here we are pulling down our simple HTML file from our S3 bucket. I could have easily added code here that writes the HTML into the file directly; it's just cleaner, code-wise, to pull the file down from your bucket. We are also writing the conf file for cfn-hup. That's a helper daemon that detects changes and reacts when you make updates to your stack.
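
One thing to keep in mind: cfn-init fetches that "source" URL over plain HTTPS, so the object has to be readable from the instance; either make it public or set up authentication for the fetch. Uploading a public copy with the AWS CLI would look roughly like this (BUCKET_NAME is the same placeholder as in the template):

aws s3 cp index.html s3://BUCKET_NAME/index.html --acl public-read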

"/etc/cfn/hooks.d/cfn-auto-reloader.conf" : {
                "content": { "Fn::Join" : ["", [
                  "[cfn-auto-reloader-hook]\n",
                  "triggers=post.update\n",
                  "path=Resources.WebServerInstance.Metadata.AWS::CloudFormation::Init\n",
                  "action=/opt/aws/bin/cfn-init -v ",
                  "         --stack ", { "Ref" : "AWS::StackName" },
                  "         --resource WebServerInstance ",
                  "         --configsets InstallAndRun ",
                  "         --region ", { "Ref" : "AWS::Region" }, "\n",
                  "runas=root\n"
                ]]}
              }
            },
            "services" : {
              "sysvinit" : {  
                "httpd"   : { "enabled" : "true", "ensureRunning" : "true" },
                "cfn-hup" : { "enabled" : "true", "ensureRunning" : "true",
                              "files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"]}}}}}
},

More cfn helper configuration to manage changes. Also see that we are enabling httpd and cfn-hup and making sure they are running on startup. This could also be done through scripting placed in the template.
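
If the stack comes up but the page doesn't, SSH to the instance and check what the helpers actually did. These paths and service names are the defaults for the Amazon Linux cfn-bootstrap package, so treat them as a starting point:

sudo service httpd status
sudo service cfn-hup status
sudo cat /var/log/cfn-init.log
sudo cat /var/log/cloud-init-output.log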

"Properties": {
        "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                          { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
        "InstanceType"   : { "Ref" : "InstanceType" },
        "SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ],
        "KeyName"        : { "Ref" : "KeyName" },
        "UserData"       : { "Fn::Base64" : { "Fn::Join" : ["", [
             "#!/bin/bash -xe\n",
             "yum update -y aws-cfn-bootstrap\n",

             "# Install the files and packages from the metadata\n",
             "/opt/aws/bin/cfn-init -v ",
             "         --stack ", { "Ref" : "AWS::StackName" },
             "         --resource WebServerInstance ",
             "         --configsets InstallAndRun ",
             "         --region ", { "Ref" : "AWS::Region" }, "\n",

             "# Signal the status from cfn-init\n",
             "/opt/aws/bin/cfn-signal -e $? ",
             "         --stack ", { "Ref" : "AWS::StackName" },
             "         --resource WebServerInstance ",
             "         --region ", { "Ref" : "AWS::Region" }, "\n"]]}}}
},

Properties for the instance…calling on the Mappings info and the yet-to-be-mentioned security group. This is also where the packages get installed and configured, since the UserData script here runs cfn-init against the metadata above. Also note that all of this has been part of "WebServerInstance"; we are just ending that section.

"WebServerSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Enable HTTP access via port 80",
        "SecurityGroupIngress" : [
          {"IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"},
          {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : { "Ref" : "SSHLocation"}}
        ]
      }
    }
  },

This sets up the security group and allows access on ports 22 and 80 for SSH and HTTP traffic.

"Outputs" : {
    "WebsiteURL" : {
      "Description" : "URL for newly created Apache server",
      "Value" : { "Fn::Join" : ["", ["http://", { "Fn::GetAtt" : [ "WebServerInstance", "PublicDnsName" ]}]] }
    }
  }
}

This section outputs the URL of your new instance. Look in the Outputs section of your CloudFormation stack for the info, which will be a link you can click to see your webpage.

AWS Command

So now you have your AWS CLI tool installed and you have your template written…now it's time to fire up your stack and see the fruits of your labor.

The command you will want to run is:

aws cloudformation create-stack --stack-name STACK_NAME --template-url https://s3.amazonaws.com/BUCKET_NAME/template.json --parameters  ParameterKey=KeyName,ParameterValue=YOUR_KEY_NAME

This declares that you are using CloudFormation to create a new stack, gives the stack name and the location the template will be downloaded and read from, and finally fills in the parameter for your AWS key.
*Remember earlier when I said that if a parameter does not have a "Default" you will need to provide it? This is what I was talking about. For the KeyName param we did not provide a default key name, so we pass it here. You can pass a number of parameters on the command line as we have here, up to a maximum of 60. Really, at that point you should be looking at a scripted option to provide those, right?
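
For example, if you also wanted to override one of the parameters that does have a default, such as InstanceType, you would just tack on another ParameterKey/ParameterValue pair (same placeholders as before):

aws cloudformation create-stack --stack-name STACK_NAME --template-url https://s3.amazonaws.com/BUCKET_NAME/template.json --parameters ParameterKey=KeyName,ParameterValue=YOUR_KEY_NAME ParameterKey=InstanceType,ParameterValue=t1.micro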

So once you have executed the above command you can look in the CloudFormation console and watch your stack being created. If it fails, it will stop and roll back the creation. Once it completes, go to the Outputs tab and you should see the URL of your webpage. Click it and you should see whatever you put in the index.html file in your S3 bucket.
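
If you would rather watch from the command line than the console, the same information is available through the CLI; the wait command simply blocks until the stack finishes (or errors out if creation fails):

aws cloudformation wait stack-create-complete --stack-name STACK_NAME
aws cloudformation describe-stack-events --stack-name STACK_NAME
aws cloudformation describe-stacks --stack-name STACK_NAME --query 'Stacks[0].Outputs'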

That’s about it. The best way to learn this is to play around with it and read through the CloudFormation documentation. Let me know if you have comments or questions.

 

Full template for you to copy and paste:

Make sure you use a JSON validator to be sure that the copy and paste didn't mangle anything.
{
  "AWSTemplateFormatVersion" : "2010-09-09",

  "Description" : "Create an Apache webserver with a webpage.",

  "Parameters" : {
    "InstanceType" : {
      "Description" : "Type of EC2 instance to launch",
      "Type" : "String",
      "Default" : "t1.micro"
    },
    "WebServerPort" : {
      "Description" : "TCP/IP port of the web server",
      "Type" : "String",
      "Default" : "80"
    },
    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
    },
    "SSHLocation": {
            "Description": " The IP address range that can be used to SSH to the EC2 instances",
            "Type": "String",
            "MinLength": "9",
            "MaxLength": "18",
            "Default": "0.0.0.0/0",
            "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
            "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
        }
  },

  "Mappings" : {
    "AWSInstanceType2Arch" : {
      "t1.micro"    : { "Arch" : "64" }
    },
    "AWSRegionArch2AMI" : {
      "us-east-1"          : { "64" : "ami-246ed34c" }
    }
  },

"Resources" : {     
      
    "WebServerInstance": {  
      "Type": "AWS::EC2::Instance",
      "Metadata" : {
        "AWS::CloudFormation::Init" : {
          "configSets" : {
            "InstallAndRun" : [ "Install" ]
          },

          "Install" : {
            "packages" : {
              "yum" : {
                "httpd"        : []              
              }
            },

            "files" : {
              "/var/www/html/index.html" : {
              "source" : "https://s3.amazonaws.com/BUCKET_NAME/index.html",  
              "mode"  : "000600",
              "owner" : "apache",
              "group" : "apache"
              },

              "/etc/cfn/cfn-hup.conf" : {
                "content" : { "Fn::Join" : ["", [
                  "[main]\n",
                  "stack=", { "Ref" : "AWS::StackId" }, "\n",
                  "region=", { "Ref" : "AWS::Region" }, "\n"
                ]]},
                "mode"    : "000400",
                "owner"   : "root",
                "group"   : "root"
              },

              "/etc/cfn/hooks.d/cfn-auto-reloader.conf" : {
                "content": { "Fn::Join" : ["", [
                  "[cfn-auto-reloader-hook]\n",
                  "triggers=post.update\n",
                  "path=Resources.WebServerInstance.Metadata.AWS::CloudFormation::Init\n",
                  "action=/opt/aws/bin/cfn-init -v ",
                  "         --stack ", { "Ref" : "AWS::StackName" },
                  "         --resource WebServerInstance ",
                  "         --configsets InstallAndRun ",
                  "         --region ", { "Ref" : "AWS::Region" }, "\n",
                  "runas=root\n"
                ]]}
              }
            },

            "services" : {
              "sysvinit" : {  
                "httpd"   : { "enabled" : "true", "ensureRunning" : "true" },
                "cfn-hup" : { "enabled" : "true", "ensureRunning" : "true",
                              "files" : ["/etc/cfn/cfn-hup.conf", "/etc/cfn/hooks.d/cfn-auto-reloader.conf"]}
              }
            }
          }

        }
      },
      "Properties": {
        "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                          { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
        "InstanceType"   : { "Ref" : "InstanceType" },
        "SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ],
        "KeyName"        : { "Ref" : "KeyName" },
        "UserData"       : { "Fn::Base64" : { "Fn::Join" : ["", [
             "#!/bin/bash -xe\n",
             "yum update -y aws-cfn-bootstrap\n",

             "# Install the files and packages from the metadata\n",
             "/opt/aws/bin/cfn-init -v ",
             "         --stack ", { "Ref" : "AWS::StackName" },
             "         --resource WebServerInstance ",
             "         --configsets InstallAndRun ",
             "         --region ", { "Ref" : "AWS::Region" }, "\n",

             "# Signal the status from cfn-init\n",
             "/opt/aws/bin/cfn-signal -e $? ",
             "         --stack ", { "Ref" : "AWS::StackName" },
             "         --resource WebServerInstance ",
             "         --region ", { "Ref" : "AWS::Region" }, "\n"
        ]]}}        
      }
    },
    
    "WebServerSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Enable HTTP access via port 80",
        "SecurityGroupIngress" : [
          {"IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"},
          {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : { "Ref" : "SSHLocation"}}
        ]
      }
    }
  },
  
  "Outputs" : {
    "WebsiteURL" : {
      "Description" : "URL for newly created Apache server",
      "Value" : { "Fn::Join" : ["", ["http://", { "Fn::GetAtt" : [ "WebServerInstance", "PublicDnsName" ]}]] }
    }
  }
}

Setting up a Chef workstation with ChefDK

*This assumes you are running CentOS or some other RHEL-based platform.

Download the Chef-DK package…
Go to: http://downloads.getchef.com/chef-dk/
Install the package…

sudo rpm -Uvh ChefDK.....rpm

Once it's installed, check it and make sure the install was successful…
Do:

sudo chef verify

 


Set System Ruby

Do:

which ruby

You might see something like this: ~/.rvm/rubies/ruby-2.1.1/bin/ruby
If you want to use the version of Ruby that came with ChefDK, do the following (assuming you are using bash)…
Do:

echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile

Do:

source ~/.bash_profile

Do:

which ruby
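
If the shell-init line took effect, the output should now point at the Ruby bundled with ChefDK, something along the lines of:

/opt/chefdk/embedded/bin/ruby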

Install Git if you don't already have it…

sudo yum install git

 


Setting up the chef-repo

You can do this two ways…download the starter kit from your Chef server OR set it up manually. In this case we will do it manually because I already happen to have a hosted Chef account, am also using my keys on other instances, and don't want to have to set them all up again. So…go to your home directory and do:

git clone git://github.com/opscode/chef-repo.git

Then go to ~/chef-repo/ and do:

mkdir .chef

Three files will need to be placed in this directory:
– knife.rb
– ORGANIZATION-validator.pem
– USER.pem

To avoid uploading your .chef directory, which will house your certs, do this:

echo '.chef' >> ~/chef-repo/.gitignore

Now you need to get the 3 files that go into your .chef directory.
Log onto your Chef server. For me this is located at: https://manage.opscode.com

Once logged in click ADMINISTRATION at the top then the name of your organization.

knife.rb – Click "Generate Knife Config" and download the file. Place it in your .chef directory.
ORGANIZATION-validator.pem – can be downloaded by clicking "Reset Validation Key" on the Administration page.
USER.pem – can be downloaded by clicking Users on the left-hand side, choosing your username, and finally clicking "Reset Key".
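
For reference, the generated knife.rb usually looks something like the sketch below, with USER and ORGANIZATION swapped for your own names; if yours differs, go with what the server generated for you:

current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "USER"
client_key               "#{current_dir}/USER.pem"
validation_client_name   "ORGANIZATION-validator"
validation_key           "#{current_dir}/ORGANIZATION-validator.pem"
chef_server_url          "https://api.opscode.com/organizations/ORGANIZATION"
cookbook_path            ["#{current_dir}/../cookbooks"]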

 



Add Ruby to your Path

DO:

echo 'export PATH="/opt/chefdk/embedded/bin:$PATH"' >> ~/.configuration_file && source ~/.configuration_file

Now let's verify that we are all set…
DO:

cd ~/chef-repo

DO:

knife client list

You should see a list of your clients, which for right now will only be the one you are on.

That’s it. Let me know if you have questions or run into issues or see mistakes.