Setting up a Chef workstation with ChefDK

*This assumes you are running CentOS or some other RHEL-based platform

Download the Chef-DK package…
Go to: http://downloads.getchef.com/chef-dk/
Install the package…

sudo rpm -Uvh ChefDK.....rpm

Once it's installed, check it and make sure the install was successful…
Do:

sudo chef verify

 


Set System Ruby

Do:

which ruby

You might see something like this: ~/.rvm/rubies/ruby-2.1.1/bin/ruby
If you want to use the version of Ruby that came with ChefDK, do the following (assuming you are using bash)…
Do:

echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile

Do:

source ~/.bash_profile

Do:

which ruby
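Assuming ChefDK installed to its default location, which ruby should now point at the embedded Ruby rather than the RVM one, something like:

/opt/chefdk/embedded/bin/ruby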

Install Git if you don't already have it…

sudo yum install git

 


Setting up the chef-repo

You can do this one of two ways: download the starter kit from your Chef server, or set it up manually. In this case we will do it manually because I already have a hosted Chef account, I'm using my keys on other instances, and I don't want to have to set them all up again. So, go to your home directory and do:

git clone git://github.com/opscode/chef-repo.git

Then go to ~/chef-repo/ and do:

mkdir .chef

Three files will need to be placed in this directory:
– knife.rb
– ORGANIZATION-validator.pem
– USER.pem

To avoid uploading your .chef directory, which will house your certs, do this:

echo '.chef' >> ~/chef-repo/.gitignore

Now you need to get the 3 files that go into your .chef directory.
Log onto your Chef server. For me this is located at: https://manage.opscode.com

Once logged in click ADMINISTRATION at the top then the name of your organization.

knife.rb – Click "Generate Knife Config" and download the file. Place it in your .chef directory.
ORGANIZATION-validator.pem – Can be downloaded by clicking "Reset Validation Key" on the Administration page.
USER.pem – Can be downloaded by clicking Users on the left-hand side, choosing your username, and finally clicking "Reset Key".
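Once all three are downloaded and dropped into place, the directory should look roughly like this (with your actual organization and user names in place of the placeholders):

ls ~/chef-repo/.chef
knife.rb  ORGANIZATION-validator.pem  USER.pem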

 



Add Ruby to your Path

DO:

echo 'export PATH="/opt/chefdk/embedded/bin:$PATH"' >> ~/.configuration_file && source ~/.configuration_file

(Here ~/.configuration_file stands in for whatever file your shell actually reads, e.g. the ~/.bash_profile we used earlier.)

Now let's verify that we are all set…
DO:

cd ~/chef-repo

DO:

knife client list

You should see a list of your clients, which for now will only be the one you are on.

That’s it. Let me know if you have questions or run into issues or see mistakes.


Using AWS CLI to connect to MySQL on RDS

I was messing around with RDS on Amazon Web Services for a project I'm working on and realized that there is no true way to connect to the instance the database is running on when you stand up a MySQL instance.

After poking around a little bit I came up with this solution and it worked OK for me. It probably needs to be refined, and there is probably an easier way, but it worked for me.

First, I'm going to assume you have a MySQL instance running in RDS. Once you have that, download the RDS command line tools to a directory you have access to. > http://aws.amazon.com/developertools/2928

Once you have done this you will need to unzip the package. You can check out the README if you want, or you can just do this…
1. Copy the credential-file-path.template file, call it something like "cred-file", then chmod it 600.
2. vi cred-file and fill in the AWSAccessKeyId and AWSSecretKey values. You can get these values from the IAM control panel in AWS under your user. If you are lost on that, there's a handy tool called Google. :)
3. Set AWS_RDS_HOME to the directory where you unzipped the tools (see the sketch after this list).
4. Make sure JAVA_HOME is set as well.
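Here is a rough sketch of steps 3 and 4; the paths below are placeholders for wherever you actually unzipped the tools and wherever your Java lives:

export AWS_RDS_HOME=/path/to/unzipped/RDSCli    # placeholder, use your unzip location
export JAVA_HOME=/usr/lib/jvm/jre               # placeholder, point at your JRE/JDK
export PATH=$PATH:$AWS_RDS_HOME/bin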

Once you have followed these steps you can now try it out…
Do: $AWS_RDS_HOME/bin/rds-describe-db-instances --aws-credential-file cred-file --headers
Hopefully your formatting will be better than what you see here but you should see something similar to this:

DBINSTANCE DBInstanceId Created Class Engine Storage Master Username Status Endpoint Address Port AZ SecAZ Backup Retention Multi-AZ Version License Publicly Accessible Storage Type
DBINSTANCE blah 2014-10-15T17:36:01.553Z db.t1.micro mysql 5 blah available blah.czbrj4wkmqs3.us-east-1.rds.amazonaws.com 3306 us-east-1b us-east-1a 7 y 5.6.19 general-public-license y gp2
SECGROUP Name Status
SECGROUP default active
PARAMGRP Group Name Apply Status
PARAMGRP default.mysql5.6 in-sync
OPTIONGROUP Name Status
OPTIONGROUP default:mysql-5-6 in-sync

Now you have tested and know you can access your MySQL db in RDS.

AWS Loadbalancing

For previous setup info see my post on setting up instances using Chef and AWS: HERE


If you have completed the steps in the previous post, you can repeat them a few times to spin up 3 or more instances. For this post let's assume I have spun up 3 instances named web001, web002, and web003.

1. In your AWS control panel navigate to your EC2 instances and, in the left-hand column, click "Load Balancers" under "Network and Security".

2. Click the “Create Load Balancer” button at the top.

3. Define Load Balancer: Enter a name for your LB and, for this example, use "EC2-Classic". By default port 80 is being monitored, and since we have 3 web servers running httpd that is all we need. Click Continue.

4. Configure Health Check: For this example the only thing you need to change is Ping Path. Change it to just "/", dropping the index.html that is already there. Leave everything else the way it is. Click Continue.

5. Add Instances to Load Balancer: Check off your 3 instances. Leave the settings for Availability Zone Distribution as they are. Click Continue.

6. Keys: Here you can create a key and value pair for your load balancer. For example, you could define a tag with key = Name and value = Webserver. Click Continue.

7. Review: This gives you a rundown of your load balancer before you create it. Click Create and it will be started up.

Once your LB has been started it will take a couple of minutes before your instances are in service. If you click the name of your LB you can see the Description tab below. Wait a couple of minutes, then copy the DNS Name, which should be the first line and ends with (A Record), e.g. LB-001-201566711211.us-east-1.elb.amazonaws.com. Paste this into your browser and you should see the webpage that you created on your instances.
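If you would rather check from the command line, a quick curl against the LB's DNS name (substitute your own) should come back with a 200 and your page:

curl -I http://LB-001-201566711211.us-east-1.elb.amazonaws.com/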

Tabs

You have 7 tabs that contain information about your load balancer.

1. Description: The DNS Name can be found here, along with brief info on the status of the instances connected to this LB.

2. Instances: Status of the instances that are connected to the LB. You can also remove an instance from the LB here.

3. Health Check: The main thing to watch here is "Unhealthy Threshold". You can also edit the Health Check settings here.

4. Monitoring: CloudWatch metrics for the selected resources. You can also create alarms for your instances on this page.

5. Security: Displays the security groups that are connected to this LB.

6. Listeners: The ports and protocols you are listening for on your instances.

7. Tags: Tags that you created when setting up the LB; you can create more from this page as well.

 

This is a VERY basic rundown of how to create a load balancer in Amazon AWS. If you have any questions or input, feel free to use the comment section below.

 

Chef: Spin-up and bootstrap a node on AWS

A few quick instructions for spinning up a node on AWS using Chef and the Knife EC2 plugin.
Some of these steps I culled together from various sites and thought it might be helpful to have them all in one spot.

Base box: This is running from an instance I stood up on AWS (redhat)

Prep work:

DO: sudo yum groupinstall "Development Tools"
DO: Install the Chef client: curl -L https://www.opscode.com/chef/install.sh | sudo bash
DO: Download your starter kit from your Hosted Chef account and unzip it in /opt/chef/ (assuming you already have a hosted account set up there)
DO: Copy your AWS key file (.pem) to your /home/ec2-user/.ssh/ folder so your workstation can connect back to AWS to spin up new instances. (There may be a different or better way of doing this, but it worked for me.) A rough sketch of these last two steps follows this list.
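For those last two items, something along these lines works; the file names (chef-starter.zip, my-aws-key.pem) are just placeholders for whatever your actual downloads are called:

# placeholders: substitute the real names of your starter kit zip and AWS key pair
sudo unzip ~/chef-starter.zip -d /opt/chef/
cp ~/my-aws-key.pem /home/ec2-user/.ssh/
chmod 600 /home/ec2-user/.ssh/my-aws-key.pem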

 

Install the Knife EC2 plugin…
sudo /opt/chef/embedded/bin/gem install knife-ec2

Configure the knife.rb for AWS…(knife.rb is located at something like… /opt/chef/chef-repo/.chef/knife.rb)

Knife File
____________________________________________________________________________________________

current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "YOUR_USERNAME"
client_key "#{current_dir}/YOUR_USERNAME.pem"
validation_client_name "YOUR_ORGNAME-validator"
validation_key "#{current_dir}/YOUR_ORGNAME-validator.pem"
chef_server_url "https://api.opscode.com/organizations/YOUR_ORGNAME"
cache_type 'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path ["#{current_dir}/../cookbooks"]

# A note about where to find this info…
# You will need to log into AWS IAM (https://console.aws.amazon.com/iam). Go to Users, click your
# username and you should see an area that says "Access Credentials". You will likely need to create
# new ones if you didn't already know what your keys were at this point. Do so, then make sure you
# download them and keep them in a safe spot. Fill out the next two lines…the rest is optional at
# this point, but it is useful to know that you can have it here.

knife[:aws_access_key_id] = "AWS_ACCESS_KEY_ID"
knife[:aws_secret_access_key] = "AWS_SECRET_ACCESS_KEY"

# Default flavor of server (m1.small, c1.medium, etc).
knife[:flavor] = "m1.small"

# Default AMI identifier, e.g. ami-12345678
#knife[:image] = "ami-b06a98d8"

# AWS Region
#knife[:region] = "us-east-1"

# AWS Availability Zone. Must be in the same Region.
#knife[:availability_zone] = "us-east-1b"

# A file with EC2 User Data to provision the instance.
#knife[:aws_user_data] = ""

# AWS SSH Keypair.
#knife[:aws_ssh_key_id] = "Name of the keypair you want to use in AWS…Instance > Keypair"
___________________________________________________________________________________________

 

Final Command

Finally run this command to connect to AWS and start up your instance and install the recipe:
sudo knife ec2 server create -x ec2-user -S *KEY_NAME* -G default,www -r 'recipe[apache2]' -I ami-b06a98d8

-x = user that we’ll connect with SSH
-S = AWS SSH keypair
-G = Security groups
-r = recipes to run once the instance is up
-I = AMI Identifier e.g. ami-12345678

(This is assuming you already know how to store your cookbooks locally and upload them to your Chef server. )

Once this completes you should have a new instance on AWS with a working Apache install, and you should be able to navigate to the webpage you created for Apache. A separate post will be coming soon to cover those steps.
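A couple of quick ways to confirm the new node actually registered (run these from your chef-repo directory, same as the create command; the second one comes from the knife-ec2 plugin installed earlier):

sudo knife node list
sudo knife ec2 server list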

 

Possible Error

You could possibly run into this:
[fog][WARNING] Unable to load the 'unf' gem. Your AWS strings may not be properly encoded.
ERROR: Excon::Errors::SocketError: getaddrinfo: Name or service not known (SocketError)

If so install UNF via gem like so:
sudo /opt/chef/embedded/bin/gem install unf
Try your create command again…

Once you are finished playing around you can use Knife EC2 to actually delete the AWS instance and also remove the node from your hosted Chef management Console:
sudo knife ec2 server delete -y --purge INSTANCE_IDENTIFIER
sudo knife ec2 server delete -y --purge i-e639ca0d

 

Quick and Dirty: Install and setup Elasticsearch, Logstash, and Kibana


First you obviously need to download all of the packages. You can get them from HERE. It's also a given that you have the Apache webserver installed and running.

For me, I like to use the TGZ files. I'm kind of funny about not wanting files spread out all over the place and I like to keep them all together, so I put all the files into /opt and exploded them there…

for f in *.tar.gz; do tar -zxvf "$f"; done

I set up my symlinks (these live in /opt, since that is where they are referenced later)…

cd /opt
ln -s /opt/elasticsearch-1.1.1 ES
ln -s /opt/logstash-1.4.0 LG
ln -s /opt/kibana-3.0.1 KB

From here you are pretty much customizing your install, assuming you already know what you want to log and track. In my example I was just logging all the log files found under /var/*

First I configured a conf file for LG and ES. Here I'm just telling LG to log all the *log files it finds under /var/*. I'm doing this as root and this is just on a local VM, so I'm not particularly worried about permissions or security. Just know that you don't want to do it this way in a prod environment.

CONF FILE:  logstash-apache.conf     I created this and put it in /opt/LG/bin/

input {
  file {
    path => "/var/*/*log"
    start_position => beginning
  }
}
filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}

For more info on that conf file check out the Logstash tutorial here.

Now you want to copy everything that’s in the /opt/KB/ dir into a dir in your webserver. For me…

cp -r /opt/KB/* /var/www/html/

You will then need to edit a file. Open "config.js" and find the line that says "elasticsearch:". This is the location of your Elasticsearch server. Since mine is hosted locally on the VM, mine reads:

elasticsearch: "http://localhost.localdomain:9200",

From here you are ready to start things up. First start Elasticsearch, then Logstash and then we will fire up Kibana.

/opt/ES/bin/elasticsearch
/opt/LG/bin/logstash -f logstash-apache.conf
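Both of those run in the foreground and will each tie up a terminal. If you would rather push them into the background, a simple (if crude) way is:

nohup /opt/ES/bin/elasticsearch > /tmp/elasticsearch.out 2>&1 &
nohup /opt/LG/bin/logstash -f /opt/LG/bin/logstash-apache.conf > /tmp/logstash.out 2>&1 &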

Then in your browser go to the webserver where you copied the Kibana files. For me that is: http://localhost.localdomain/

You should now be looking at the default Kibana page. It will have some further info on getting started and how to change your default startup page in Kibana.
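If you get an error page instead, a quick sanity check is to make sure Elasticsearch itself is answering; it should come back with a small chunk of JSON showing the version and cluster status:

curl http://localhost.localdomain:9200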

 

Mount a shared drive in virtualbox

This is mainly because I keep forgetting the commands for this…

sharename="whatever.you.want.to.call.it"; 
sudo mkdir /mnt/$sharename
sudo chmod 777 /mnt/$sharename
sudo mount -t vboxsf -o uid=1000,gid=1000 $sharename /mnt/$sharename
ln -s /mnt/$sharename $HOME/Desktop/$sharename

 

Then add the mount command to your /etc/rc.local file so that it auto-mounts when you start up your vm.
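In /etc/rc.local that ends up looking something like this (rc.local runs as root, so no sudo, and the share name has to be spelled out rather than using the variable):

mount -t vboxsf -o uid=1000,gid=1000 whatever.you.want.to.call.it /mnt/whatever.you.want.to.call.it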

Chef Error: Knife configure

When doing your initial “knife configure -i” command while setting up a chef workstation, if you encounter this error:

ERROR: Errno::EHOSTUNREACH: No route to host - connect(2)

Make sure you check your firewall settings.

On CentOS you can do:

sudo iptables -S

This will show you what is currently enabled. If you do not have port 443 open you will run into issues. To open it you can do this:

sudo iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT

Make sure you save your changes…

sudo service iptables save

…and restart the firewall…

sudo service iptables restart
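To double check that the rule survived the restart, list the rules again and look for 443:

sudo iptables -S | grep 443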