Bleacher Report's Continuous Integration & Delivery Methodology: Continuous Delivery Through Elastic Beanstalk

Posted Jun 3rd, 2014

This is the first of a three-part series highlighting Bleacher Report's continuous integration and delivery methodology, by Felix Rodriguez.

I have been tinkering with computers since I was a kid, and I can remember playing Liero on DOS like it was the greatest game ever to exist. I started out building computers and websites, then got into tech support, and now I am a Quality Assurance technician at Bleacher Report - when I'm not cruising around California on my motorcycle, that is. While working at Bleacher Report, I helped maintain the existing automation suite. I took it upon myself to revamp a collection of long-unrelated RSpec tests into a more object-oriented, Cucumber-based testing framework. Now we have an integration testing server that I built, with an API to build suites and track tests over time.

We are starting to move some of our new services over to Elastic Beanstalk because we knew it would be easier for us to manage our stacks and issue deploys. Elastic Beanstalk being a rather new service, we were unable to find any out-of-the-box integrations with Travis CI. After experimenting with some of the custom functionality Travis provides, we were able to issue commands that download the Elastic Beanstalk binaries to the VM Travis spins up and create the files we need in order to issue an Elastic Beanstalk deploy command. This was far simpler and less time-consuming than trying to install our deployment software on a Travis VM. After demoing this to our Operations department, they were more than eager to have us switch new applications to Elastic Beanstalk, as developers get far more control over how the development environment is configured (think Heroku or Nodejitsu). On my own I was able to build an application and the environment it was contained in, ensure the latest version was continuously deployed to a staging server after a successful Travis build, kick off an integration suite, and return the results of each step of the process.
This was magic to us; it freed up a lot more time for Operations to focus on making sure our applications scale, allowed QA to focus on writing tests - not running them - and let developers focus on coding their applications without having to adhere to the limitations of an environment built on old tool sets. If you're using Amazon's Elastic Beanstalk service, or plan on building any new applications, I highly suggest this route to make your life much easier. If not, skip ahead to "The Hard Way," which lets you use EB indirectly to update your apps.

The Easy Way

Travis CI unfortunately does not support Elastic Beanstalk out of the box, but with a clever hack you can automate the EB configuration and deploy cycle through your .travis.yml config. Keep track of each answer the EB init prompt asks you for, so the responses can be preseeded in the "echo -e" command. I drew most of my inspiration from an earlier write-up, but I was unable to get it working completely, so I had to try something else.
- wget ""
- unzip ""
- AWS-ElasticBeanstalk-CLI-2.6.2/AWSDevTools/Linux/
- mkdir .elasticbeanstalk
- sudo echo "[global]" >> .elasticbeanstalk/config
- sudo echo "AwsCredentialFile=/Users/travis/.elasticbeanstalk/aws_credential_file" >> .elasticbeanstalk/config
- sudo echo "ApplicationName=cukebot" >> .elasticbeanstalk/config
- sudo echo "" >> .elasticbeanstalk/config
- sudo echo "EnvironmentName=YOUR_STAGING_ENVIRONMENT_NAME" >> .elasticbeanstalk/config
- sudo echo "Region=us-east-1" >> .elasticbeanstalk/config
- cat .elasticbeanstalk/config
- cat ~/.elasticbeanstalk/aws_credential_file
- echo "us-east-1" | git aws.config
- echo -e "$AWS_ACCESS_KEY_ID\n$AWS_SECRET_ACCESS_KEY\n1\n\n\n1\n53\n2\nN\n1\n" | AWS-ElasticBeanstalk-CLI-2.6.2/eb/linux/python2.7/eb init
- git aws.push
Now, anytime you push code to master and your Travis build succeeds, you will automatically deploy your new code to the staging environment you created.
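For context, these commands live in your .travis.yml. A minimal sketch of how the block might be wired up (the download URL, archive name, and the choice of the after_success section are placeholders and assumptions, not our exact file):

```yaml
# .travis.yml (sketch; URLs and names are placeholders)
language: ruby
script: bundle exec rspec
after_success:
  - wget "EB_CLI_DOWNLOAD_URL"       # Elastic Beanstalk CLI archive from Amazon
  - unzip "EB_CLI_ARCHIVE_NAME"
  - mkdir .elasticbeanstalk
  # ...the remaining config/echo lines, eb init, and git aws.push from above...
```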

The Hard Way

Travis CI supports a number of deploy services out of the box; unfortunately for us, we do not use any of those services to deploy our apps. The way we had to approach continuous delivery was through Travis's custom webhooks. First we must build a small application that accepts POSTs from Travis when a build completes. Travis provides a sample Sinatra application to help you get started; we want to modify it a bit to add a JSON object we create to our Amazon SQS queue:
puts "Received valid payload for repository #{repo_slug}"
queue = DeployQueue.new(
  :cluster => "stag",
  :repo => repo,
  :branch => "master",
  :user_name => user,
  :env => "1"
)
queue.send if payload["branch"] == "master"
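The values handed to the queue come straight out of the webhook payload Travis posts. A minimal sketch of pulling them out (the payload keys here are assumptions based on Travis's legacy webhook JSON; adjust them to whatever your Travis version actually sends):

```ruby
require 'json'

# Build the options hash for the deploy queue from an already-parsed
# Travis webhook payload. The payload keys are assumptions.
def deploy_options(payload, cluster = "stag")
  {
    :cluster   => cluster,
    :repo      => payload["repository"]["name"],
    :branch    => payload["branch"],
    :user_name => payload["committer_name"],
    :env       => "1"
  }
end

sample = {
  "repository"     => { "name" => "cukebot", "owner_name" => "br" },
  "branch"         => "master",
  "committer_name" => "felix"
}
puts deploy_options(sample)[:repo]  # => cukebot
```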
From there I added a deploy queue class that accepts the information passed from the Travis payload, like so:
require 'aws-sdk'
require 'json'

class DeployQueue
  def initialize(options={})
    @queue_text = {
      :cluster => options[:cluster], # staging or production
      :repo => options[:repo],
      :branch => options[:branch],
      :env => options[:env],
      :user_name => options[:user_name]
    }
    @sqs = AWS::SQS.new
    @q = @sqs.queues.named("INSERT_NAME_OF_QUEUE")
    puts "Deploy sent to queue: #{options[:repo]}_deploy_queue: #{@queue_text}"
  end

  def send
    @q.send_message(@queue_text.to_json)
  end
end
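One detail worth calling out: the worker on the other end parses each message with JSON.parse, so serializing the hash with to_json keeps both ends in sync (Hash#to_s produces Ruby inspect syntax, which JSON.parse rejects). A quick round-trip sketch:

```ruby
require 'json'

queue_text = {
  :cluster   => "stag",
  :repo      => "cukebot",
  :branch    => "master",
  :env       => "1",
  :user_name => "felix"
}

# Serialize the way the queue sender does, then parse it back the way
# the worker will. Symbol keys come back as strings after the round trip.
body = queue_text.to_json
data = JSON.parse(body)
puts data["repo"]  # => cukebot
```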
Then add the following to your .travis.yml:
notifications:
  webhooks:
    urls:
      - http://url/where/your/app/is/
    on_success: always
    on_failure: never
Amazon Elastic Beanstalk allows us to build a worker through an easy-to-use GUI that will run commands for each message in our Amazon SQS queue. I created a quick video demonstration for you to see how easy it is! Basically, all we have to do now is wrap our deploy script inside a small Sinatra web application. Create a Procfile with the following:
worker: bundle exec ruby app/worker.rb
As well as an app/worker.rb file
require 'bundler/setup'
require 'aws-sdk'
require 'sinatra'
require_relative '../lib/deploy_consumer'

enable :logging, :dump_errors, :raise_errors

AWS.config(
  :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AWS_SECRET_KEY'])

post '/deploy' do
  json = request.body.read
  puts "json #{json.inspect}"
  data = JSON.parse(json)
  # Your deploy CMD here
end
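At the "Your deploy CMD here" step, the parsed message just needs to be mapped onto a deploy action. A hypothetical sketch (the cluster names mirror the webhook handler above, but the command strings are made-up placeholders for your real deploy script):

```ruby
require 'json'

# Map a queue message onto a deploy command string. The command text is
# a hypothetical placeholder; swap in your actual deploy tooling.
def deploy_command(data)
  case data["cluster"]
  when "stag" then "deploy #{data['repo']} to staging"
  when "prod" then "deploy #{data['repo']} to production"
  else raise "unknown cluster: #{data['cluster']}"
  end
end

msg = '{"cluster":"stag","repo":"cukebot","branch":"master"}'
puts deploy_command(JSON.parse(msg))  # => deploy cukebot to staging
```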
The DeployConsumer is not necessary; it's a script I made that takes the JSON object received from the queue and uses it to determine which environment it should deploy to. It should be replaced with your own deploy script. If you are interested in what the consumer looks like, you can view it here.

Stay tuned next week for part two of this mini-series! You can follow Felix on Twitter. Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

Written by

Bill McGee


CI/CD, Open source