Ben McCann

Co-founder at Connectifier.
ex-Googler. CMU alum.


How to take over the computer of a Jenkins user


I recently began using Jenkins and found quite a bit of security indifference. This is unfortunate because Jenkins is the world’s leading continuous integration server, used for testing, building, and deploying code. According to RebelLabs, Jenkins has 70% market share, with the next closest competitor having only 9%. I’ve raised these issues with the Jenkins team and have received only dismissive responses thus far. That response, together with the fact that Jenkins has over 50 open bugs filed against it which are categorized as critical security issues, leaves me with little confidence that the team will move on these issues unless attention is drawn to them, which is why I’ve written this post.

Insecure installation

Let’s start at the beginning and walk through the install instructions. The very first step on Ubuntu is:

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -

Here are the first two steps on Redhat:

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key

If you haven’t noticed anything wrong yet, you’re not alone. I didn’t either the first time I followed these instructions. The issue here is the http://. When you download software from a Linux repository, the system verifies downloaded packages against a gpg signature. Debian has been using strong crypto to validate downloaded packages since 2005, so this is a long-standing best practice. However, if you download the signing key over an insecure channel, there is little point, because anyone who could deliver a malicious package could also deliver a malicious key. For this reason, you should only use https with “apt-key add” or else you are rendering void any security it provides. Indeed, if you Google “apt-key add”, the very first result is a StackOverflow post which says “adding keys you fetch over non-HTTPS breaks any security that signing packages added. Wherever possible, you should download keys over a secure channel (https://)”. If only Jenkins would properly configure their SSL certificate for downloading this file and update their docs to suggest https!

Insecure updates

Jenkins by default loads the URLs to use for updating plugins from http://updates.jenkins-ci.org/update-center.json. This is a problem because Jenkins will download and install whatever package URLs are listed in this file, so if an attacker can modify this file they can install whatever malicious plugins they want. I attempted to remedy this with a one-character pull request changing http to https, which was rejected on the grounds that it would be too load-intensive for the Jenkins servers. I was told on the bug that I filed for the issue that there’s a signature embedded within the file which makes it secure. The problem here is that you need a key which you received securely to check that signature. Because the key is delivered over HTTP as already discussed, much of its value is lost.

Insecure plugins

A response I’ve gotten to the preceding issue is “You realize that anyone with a Jenkins-ci.org account can release updates to any plugin, right?” So why bother delivering widely used plugins securely when they could be malicious before they ever leave the Jenkins servers? I could update all the most popular Jenkins plugins with malicious code and no doubt thousands of people would update their plugins and find themselves running malicious code. The plugins are all open source, but I have no idea if I’m running the code that I see open sourced. An attacker could download the code for a plugin, modify it in an evil manner, and release an update to that plugin and there’s no way to know whether the code downloaded matches what is in the open source repository.

The irony here is almost killing me. Using Jenkins to build the plugins instead of letting “anyone with a Jenkins-ci.org account” build them would be a great solution to this problem. I was told that fixing this problem would violate “Jenkins project core principles, so you should probably build a better case than ‘this is wrong’ before you bring it up on the dev list.” Without further explanation, I’m left wondering why closing security holes would violate Jenkins project core principles. Looking at the core principles only seems to reinforce the idea that these problems should be fixed. It would lower the barrier to entry, since plugin developers wouldn’t need to figure out how to publish plugins if a continuous integration server did it for them. It seems meritocratic to fix security issues raised by the community. It would increase transparency to know that you’re running the code you see on GitHub and not some attacker’s code. It would not affect compatibility or code licensing. And it certainly would be a more automated solution (someone get Alanis Morissette on the phone before I die).

Insecure for contributors

You can’t even work on Jenkins without facing security problems. If you try to write a plugin for Jenkins, for example, the docs suggest you add the following to your Maven settings:


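The suggested snippet looked roughly like the following (reconstructed here from the Jenkins plugin tutorial of the era; treat the exact IDs and URLs as illustrative). Note the plain http URLs:

```xml
<settings>
  <pluginGroups>
    <pluginGroup>org.jenkins-ci.tools</pluginGroup>
  </pluginGroups>
  <profiles>
    <!-- Gives access to Jenkins artifacts and plugins -->
    <profile>
      <id>jenkins</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>repo.jenkins-ci.org</id>
          <url>http://repo.jenkins-ci.org/public/</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>repo.jenkins-ci.org</id>
          <url>http://repo.jenkins-ci.org/public/</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
</settings>
```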
Again, downloading software over http is not secure. I was told this was a “cosmetic issue” when I filed a bug, though I’m hoping the engineer the bug is assigned to will see that telling users to connect over http is a bit more than that. To help demonstrate the point, I linked in my bug report to an article which shows how to exploit exactly this problem. As a result of that article, Sonatype (who host the most popular Maven repository) is turning on SSL for all users. It is not yet apparent that this will sway anyone working on Jenkins.


So what can you do by getting someone to install a malicious version of a Jenkins server or plugin, and how hard is it? Well, there’s already a proof-of-concept for launching a man-in-the-middle attack against a Maven repository http download, and it’s pretty basic code, so I think it’s fair to say that it can be done. If you go to a Jenkins meetup, there’s a chance you’ll catch someone downloading Jenkins-related software over an unsecured wi-fi connection and be able to infect them. The types of folks who would install Jenkins on their laptops are also somewhat likely to have access to production systems at their companies. And because Jenkins is used to build software, a malicious version could potentially inject further maliciousness into the software that it’s building or leak the source code of that software to an attacker.

If you care about building secure software, I hope that you’ll ask the Jenkins team to fix these issues and make sure other Jenkins users are familiar with these holes until then. You can also check out https://www.connectifier.com/careers.

Shared GMail account with SAML


SAML is a protocol which securely provides an identity. Using an identity provider which supports SAML, you can set up Single Sign-On. However, if you have multiple people sharing a GMail account, things get a little tricky. Here’s how you can set that up in Okta, which is one such identity provider.


Post Back URL: https://www.google.com/a/<domain>/acs
Name ID Format: EmailAddress
Recipient: https://www.google.com/a/<domain>/acs
Audience Restriction: google.com
authnContextClassRef: PasswordProtectedTransport
Response: Signed
Assertion: Signed
Request: Compressed
Destination: https://www.google.com/a/<domain>/acs
Default Relay State: https://gmail.google.com/a/<domain>

Sign On:

SAML Issuer ID: google.com/a/<domain>
Default username format: Custom – <SharedEmail>

When you assign this application to someone, make sure that the SharedEmail is filled in as the username.

OpenSAML: Single Sign On using SAML 2.0


Lately I’ve been playing around with Single Sign On (SSO) using the SAML 2.0 protocol. OneLogin, an online Identity Provider, has a sample project which is a great way to see how SAML works. There’s also an OpenSAML library which is quite helpful in forming SAML requests and handling SAML responses.

Here’s an example of using OpenSAML to create a request:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.StringWriter;
import java.math.BigInteger;
import java.net.URLEncoder;
import java.security.SecureRandom;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

import org.apache.commons.codec.binary.Base64;
import org.joda.time.DateTime;
import org.opensaml.Configuration;
import org.opensaml.DefaultBootstrap;
import org.opensaml.common.SAMLVersion;
import org.opensaml.common.xml.SAMLConstants;
import org.opensaml.saml2.core.AuthnContextClassRef;
import org.opensaml.saml2.core.AuthnContextComparisonTypeEnumeration;
import org.opensaml.saml2.core.AuthnRequest;
import org.opensaml.saml2.core.Issuer;
import org.opensaml.saml2.core.NameIDPolicy;
import org.opensaml.saml2.core.RequestedAuthnContext;
import org.opensaml.saml2.core.impl.AuthnContextClassRefBuilder;
import org.opensaml.saml2.core.impl.AuthnRequestBuilder;
import org.opensaml.saml2.core.impl.IssuerBuilder;
import org.opensaml.saml2.core.impl.NameIDPolicyBuilder;
import org.opensaml.saml2.core.impl.RequestedAuthnContextBuilder;
import org.opensaml.xml.ConfigurationException;
import org.opensaml.xml.io.Marshaller;
import org.opensaml.xml.io.MarshallingException;
import org.opensaml.xml.util.XMLHelper;
import org.w3c.dom.Element;

public class SamlRequestGenerator {

  static {
    try {
      // Initialize the OpenSAML library
      DefaultBootstrap.bootstrap();
    } catch (ConfigurationException e) {
      throw new IllegalStateException(e);
    }
  }

  public String createRequestUrl() {
    String baseUrl = "https://app.onelogin.com/saml/signon/20956";  // Set this for your app
    String consumerServiceUrl = "http://localhost:8080/consume.jsp";  // Set this for your app
    String website = "https://www.mywebapp.com";  // Set this for your app

    AuthnRequestBuilder authRequestBuilder = new AuthnRequestBuilder();
    AuthnRequest authnRequest = authRequestBuilder.buildObject(SAMLConstants.SAML20P_NS, "AuthnRequest", "samlp");
    authnRequest.setIssueInstant(new DateTime());
    authnRequest.setID(new BigInteger(130, new SecureRandom()).toString(32));  // random 130-bit request ID
    authnRequest.setVersion(SAMLVersion.VERSION_20);
    authnRequest.setProtocolBinding(SAMLConstants.SAML2_POST_BINDING_URI);
    authnRequest.setAssertionConsumerServiceURL(consumerServiceUrl);

    IssuerBuilder issuerBuilder = new IssuerBuilder();
    Issuer issuer = issuerBuilder.buildObject(SAMLConstants.SAML20_NS, "Issuer", "samlp");
    issuer.setValue(website);
    authnRequest.setIssuer(issuer);

    NameIDPolicyBuilder nameIdPolicyBuilder = new NameIDPolicyBuilder();
    NameIDPolicy nameIdPolicy = nameIdPolicyBuilder.buildObject();
    nameIdPolicy.setFormat("urn:oasis:names:tc:SAML:2.0:nameid-format:persistent");
    nameIdPolicy.setAllowCreate(true);
    authnRequest.setNameIDPolicy(nameIdPolicy);

    RequestedAuthnContextBuilder requestedAuthnContextBuilder = new RequestedAuthnContextBuilder();
    RequestedAuthnContext requestedAuthnContext = requestedAuthnContextBuilder.buildObject();
    requestedAuthnContext.setComparison(AuthnContextComparisonTypeEnumeration.EXACT);
    AuthnContextClassRefBuilder authnContextClassRefBuilder = new AuthnContextClassRefBuilder();
    AuthnContextClassRef authnContextClassRef = authnContextClassRefBuilder.buildObject(SAMLConstants.SAML20_NS, "AuthnContextClassRef", "saml");
    authnContextClassRef.setAuthnContextClassRef("urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport");
    requestedAuthnContext.getAuthnContextClassRefs().add(authnContextClassRef);
    authnRequest.setRequestedAuthnContext(requestedAuthnContext);

    // Serialize the request to XML
    Marshaller marshaller = Configuration.getMarshallerFactory().getMarshaller(authnRequest);
    Element authDOM;
    try {
      authDOM = marshaller.marshall(authnRequest);
    } catch (MarshallingException e) {
      throw new IllegalArgumentException(e);
    }
    StringWriter requestWriter = new StringWriter();
    XMLHelper.writeNode(authDOM, requestWriter);
    String messageXML = requestWriter.toString();

    // Deflate and Base64-encode the XML for the HTTP redirect binding
    Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    DeflaterOutputStream deflaterOutputStream = new DeflaterOutputStream(byteArrayOutputStream, deflater);
    try {
      deflaterOutputStream.write(messageXML.getBytes("UTF-8"));
      deflaterOutputStream.close();
      String base64SamlRequest = new String(new Base64().encode(byteArrayOutputStream.toByteArray())).trim();

      return baseUrl + "?SAMLRequest=" + URLEncoder.encode(base64SamlRequest, "UTF-8");
    } catch (IOException e) {
      throw new IllegalStateException(e);
    }
  }
}

Here’s an example of using OpenSAML to read a response:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.security.KeyFactory;
import java.security.NoSuchAlgorithmException;
import java.security.PublicKey;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.X509EncodedKeySpec;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

import org.apache.commons.codec.binary.Base64;
import org.opensaml.Configuration;
import org.opensaml.saml2.core.Assertion;
import org.opensaml.saml2.core.Response;
import org.opensaml.xml.XMLObject;
import org.opensaml.xml.io.Unmarshaller;
import org.opensaml.xml.io.UnmarshallerFactory;
import org.opensaml.xml.io.UnmarshallingException;
import org.opensaml.xml.security.x509.BasicX509Credential;
import org.opensaml.xml.signature.Signature;
import org.opensaml.xml.signature.SignatureValidator;
import org.opensaml.xml.validation.ValidationException;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.SAXException;

public class SamlResponseHandler {

  // Identity provider certificate used to verify the response signature
  // (sample value, truncated; replace with your IdP's certificate)
  private static final String certificateS = "MIIENTCCAx2gAwIBAgIUDFWeXo2US+Je8Erqdc2IvREy8IswDQYJKoZIhvcNAQEF" +
    "BhMCVVMxGzAZBgNVBAoMEkNvbm5lY3RpZmllciwgSW5jLjEVMBMGA1UECwwMT25l" +
    "I84UQx3N8nwl5ayfOJM3KC4AvExeWQQxfc2nO01SPrgJEy/DLr8OeFIXEVVBPVFe" +
    "MKa2TnOARRImshLFzehOu0S+3AcrTWUnQccjpdpC/VUY8z65ntfm0W0XHtJ3HkVW" +
    "uUMPl63X/OU7RLm0ALKahMs9+WV7LcwP/CkDGYUr2UcXz1Ehrcqh6x8FGx90OJCl" +
    "Ws06mWpZYMSlMhNnT2cjN2+50HpU+51mearoZ6uKhD9SwpU4WkIFvfG1GGqj3ZS2" +
    "mTvw1V7RZ28XV7ou5TUEf5YfpsWZ8FMAisiPZpO/mJCBqTSi2KjWN6P/rwIDAQAB" +
    "cMt/MIGfBgNVHSMEgZcwgZSAFFwXtgC2NizDcjsi2SM+Jzt5cMt/oWakZDBiMQsw" +
    "FAxVnl6NlEviXvBK6nXNiL0RMvCLMA4GA1UdDwEB/wQEAwIHgDANBgkqhkiG9w0B" +
    "d0Ld0d2Dt6Gvsczba6fsbdmka9sdjLAfkA9dasdA3sFkasyqoiMN09123jJAooAI" +
    "AQUFAAOCAQEA0FiaxTnK6D9HwirzOcQ0a7/lqqXHnm9nOw6bUS9TKlMNkoV0CqIq" +
    "I6r8zWcB1CqsvrPsB4c3jB0Uc3u8hl+mOkvPUsMOsfM1fV+iGMFl4bYpd/HxQOpv" +
    "tWMpi0TPat/WrbNOEPikahZwMK/XycoZ09VaXFoooSpYoOAaS4pAEwfabneAt1Pu" +
    "O0IS6PrERgRFOe0ww2K9SNImvDLpH1rd239PUXKFFAtasuZhw6ol+kJwgylcyEHU";

  public void handle(String responseMessage) {
    try {
      // Read the certificate and build a credential from its public key
      CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509");
      InputStream inputStream = new ByteArrayInputStream(Base64.decodeBase64(certificateS.getBytes("UTF-8")));
      X509Certificate certificate = (X509Certificate) certificateFactory.generateCertificate(inputStream);

      BasicX509Credential credential = new BasicX509Credential();
      KeyFactory keyFactory = KeyFactory.getInstance("RSA");
      X509EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(certificate.getPublicKey().getEncoded());
      PublicKey key = keyFactory.generatePublic(publicKeySpec);
      credential.setPublicKey(key);

      // Parse the Base64-encoded response into a DOM document
      byte[] base64DecodedResponse = Base64.decodeBase64(responseMessage);
      ByteArrayInputStream is = new ByteArrayInputStream(base64DecodedResponse);
      DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
      documentBuilderFactory.setNamespaceAware(true);  // required for OpenSAML to unmarshal correctly
      DocumentBuilder docBuilder = documentBuilderFactory.newDocumentBuilder();
      Document document = docBuilder.parse(is);
      Element element = document.getDocumentElement();

      // Unmarshal the DOM into OpenSAML objects and extract the fields of interest
      UnmarshallerFactory unmarshallerFactory = Configuration.getUnmarshallerFactory();
      Unmarshaller unmarshaller = unmarshallerFactory.getUnmarshaller(element);
      XMLObject responseXmlObj = unmarshaller.unmarshall(element);
      Response responseObj = (Response) responseXmlObj;
      Assertion assertion = responseObj.getAssertions().get(0);
      String subject = assertion.getSubject().getNameID().getValue();
      String issuer = assertion.getIssuer().getValue();
      String audience = assertion.getConditions().getAudienceRestrictions().get(0).getAudiences().get(0).getAudienceURI();
      String statusCode = responseObj.getStatus().getStatusCode().getValue();

      // Verify the assertion's signature; throws ValidationException on failure
      Signature sig = assertion.getSignature();
      SignatureValidator validator = new SignatureValidator(credential);
      validator.validate(sig);
    } catch (UnsupportedEncodingException e) {
      throw new IllegalStateException(e);
    } catch (CertificateException e) {
      throw new IllegalStateException(e);
    } catch (ParserConfigurationException e) {
      throw new IllegalStateException(e);
    } catch (SAXException e) {
      throw new IllegalStateException(e);
    } catch (IOException e) {
      throw new IllegalStateException(e);
    } catch (UnmarshallingException e) {
      throw new IllegalStateException(e);
    } catch (ValidationException e) {
      throw new IllegalStateException(e);
    } catch (InvalidKeySpecException e) {
      throw new IllegalStateException(e);
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e);
    }
  }
}

Migrating from MongoDB to TokuMX


First be sure to install the latest version of TokuMX on the target machines, which is currently 1.4.2.

Also, for all long-running commands, you’ll want to run them in a tmux session. You can create a new tmux session with tmux new, attach to the default session with tmux attach -d, and quit a tmux session with exit after you’re in it.

Run the following commands on the MongoDB secondary with credentials and paths updated to match your environment:

sudo service mongodb stop
sudo mongodump -u adminuser -p 'password' --dbpath /var/lib/mongodb --journal

Connect to the admin DB on the Mongo primary and run rs.status(). Note the last optime reported for the secondary and use it in the mongo2toku command below. You can now restart the MongoDB secondary with sudo service mongodb start.
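As a sketch of what you’re looking for in that rs.status() output (the document shape here is assumed from MongoDB 2.x, and the hostnames and values are made up), the value you feed to mongo2toku’s --ts flag is the secondary’s optime in seconds:increment form:

```javascript
// Given a parsed rs.status() document, pull out one member's last optime
function lastOptime(rsStatus, memberName) {
  var member = rsStatus.members.filter(function (m) {
    return m.name === memberName;
  })[0];
  // mongo2toku's --ts flag takes the form seconds:increment
  return member.optime.t + ':' + member.optime.i;
}

// Hypothetical rs.status() output
var status = {
  members: [
    { name: 'primary:27017', stateStr: 'PRIMARY', optime: { t: 1401234570, i: 2 } },
    { name: 'secondary:27017', stateStr: 'SECONDARY', optime: { t: 1401234567, i: 9 } }
  ]
};

console.log(lastOptime(status, 'secondary:27017')); // 1401234567:9
```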

If you want to copy a file from one machine to another with scp, you’ll want to ssh to the first machine using the -A option to enable forwarding of the authentication agent connection. Note that if this is a long running copy command, you’ll want to use tmux, but the -A option will only work with tmux new and not tmux attach -d without jumping through a bunch of extra hoops. So, using ssh -A and tmux new copy the files to the new machine:

scp -r dump remoteip:/media/ephemeral0/mongodump

Now run the following on the Toku primary being sure to use your credentials, data paths, and oplog time:

sudo mongorestore --dbpath /media/ephemeral0/tokumx dump
mongo2toku --from rs/primary:27017,secondary:27017 --ruser adminuser --rpass 'password' --host localhost:27017 --authenticationDatabase admin -u adminuser -p 'password' --ts=9999999999:9

Finding the size of all MongoDB collections


Here’s a helpful script for finding the size of every collection in MongoDB in MB:

var collNames = db.getCollectionNames();
for (var i = 0; i < collNames.length; i++) {
  var coll = db.getCollection(collNames[i]);
  var stats = coll.stats(1024 * 1024);
  print(stats.ns, stats.storageSize);
}

Sound Insulation for Noisy Offices


I’m a founder at Connectifier, a fast growing tech startup in Newport Beach, CA. We have a great office with many amenities to make it more comfortable such as a kitchen, ping pong table, and sofas. We also have an open floor plan, which is great for keeping everyone in the loop, but less awesome for quiet concentration. As we grow, a space that originally held two people now holds closer to a dozen. At some point we’ll need to find larger office space for a larger team, but there will probably be some point in between now and then where the office is getting uncomfortably full.

In order to plan ahead for our growth, I investigated several office noise solutions. Here’s an idea of what I found.

Ikea Risor Room Divider – $99

Framery Phone Booths – $7,500 for a Framery-C and $8,500 for a Framery-O unit

Clearsonic MiniMega Isolation Booth – ~$2,750 + $200 shipping

Vocalbooth.com – ~$6000 depending on model. Shipping included

Buzzispace – ~$8,000 for Buzzibooth, $2,371 for Buzzicockpit

Airea Phonebooth with door – ~$6,500 + shipping

Other options include a really good pair of Shure SRH440 headphones and Alpine earplugs.

Debugging tools for java.lang.OutOfMemoryError: Java heap space


If you have a Java app that’s crashing due to out-of-memory errors, you can have the JVM create a heap dump by adding the following flags:

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/mydump.hprof

To read the heap dump, you’ll need to:

  • Install eclipse memory analyzer
  • Open eclipse with lots of memory: eclipse -vmargs -Xmx6G
  • Open memory analysis perspective: Window > Open Perspective > Other > Memory Analysis

TodoMVC: An Angular vs React Comparison


Two of the more talked about frameworks today are Google’s AngularJS and Facebook/Instagram’s React, but there are limited comparisons between them. TodoMVC is a project which aims to provide a comparison of JavaScript frameworks by implementing a todo list in the most popular frameworks. I have a little experience with Angular and none with React. I looked at both the Angular TodoMVC app and the React TodoMVC app to try to compare them and was intimidated to find that the React one took twice as many lines. In this blog post, I’ll aim to break down the code differences between the AngularJS and React versions and try to decide whether React really is much more verbose and cumbersome to write, or whether some difference in implementation and coding style between the two had a larger effect.

One thing TodoMVC does is allow the user to type onto the list, and if the user hits enter, it moves the current item down and creates a new empty space for a new item.

Here’s the React version for creating a new item:

handleNewTodoKeyDown: function (event) {
  if (event.which !== ENTER_KEY) {
    return;
  }

  var val = this.refs.newField.getDOMNode().value.trim();

  if (val) {
    var newTodo = {
      id: Utils.uuid(),
      title: val,
      completed: false
    };
    this.setState({todos: this.state.todos.concat([newTodo])});
    this.refs.newField.getDOMNode().value = '';
  }

  return false;
}

Here’s the Angular version:

$scope.addTodo = function () {
  var newTodo = $scope.newTodo.trim();
  if (!newTodo.length) {
    return;
  }

  todos.push({
    title: newTodo,
    completed: false
  });

  $scope.newTodo = '';
};

Much of the extra code in React is because it is listening for a key and then deciding whether it was the enter key, whereas Angular is simply listening for a submit. This seems unrelated to the framework; it is merely a difference in implementation in this case.

Let’s look at removing an item from the list. This also takes additional lines in React:

destroy: function (todo) {
  var newTodos = this.state.todos.filter(function (candidate) {
    return candidate.id !== todo.id;
  });

  this.setState({todos: newTodos});
}

And here’s the Angular version:

$scope.removeTodo = function (todo) {
  todos.splice(todos.indexOf(todo), 1);
};

The big difference here is that React creates a new array whereas Angular alters the existing array. I’m not familiar enough with React at this point to know if there’s a requirement to avoid mutating the data structures. However, it’s important to note that the implementation here in React does not work in IE8 because of the use of Array.filter. Throughout the code base, it is a very common theme that much of the extra code results from the React implementation using immutable data structures.
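A quick sketch of the trade-off (with hypothetical todo data, runnable in any JS engine): mutating an array in place keeps the same reference, so the cheap reference-equality checks React components like to make cannot see the change, whereas building a new array makes the change visible with a single === comparison.

```javascript
// Hypothetical todo data, just for illustration
var todos = [{ id: 1, title: 'write post', completed: false }];

// Mutating in place (the Angular style above): the array reference is
// unchanged, so a === check cannot tell that anything happened
var mutated = todos;
mutated.push({ id: 2, title: 'ship it', completed: false });
console.log(mutated === todos); // true -- looks "unchanged" by reference

// Building a new array (the React style above): the reference differs,
// so change detection is a single comparison
var replaced = todos.concat([{ id: 3, title: 'celebrate', completed: false }]);
console.log(replaced === todos); // false -- the change is visible
```

This is exactly what makes the shouldComponentUpdate example below possible: it compares references with !== rather than deep-comparing the data.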

The React version also has some extra code for performance improvements. It includes a shouldComponentUpdate method as an example of how performance improvements can be made with React. This method is not necessary and is used to demonstrate how you could make such an improvement.

/**
 * This is a completely optional performance enhancement that you can implement
 * on any React component. If you were to delete this method the app would still
 * work correctly (and still be very performant!), we just use it as an example
 * of how little code it takes to get an order of magnitude performance improvement.
 */
shouldComponentUpdate: function (nextProps, nextState) {
  return (
    nextProps.todo !== this.props.todo ||
    nextProps.editing !== this.props.editing ||
    nextState.editText !== this.state.editText
  );
}
However, this creates extra code besides just this method. The React version also tracks whether the user is in an “editing” state, which is something Angular does not have any code devoted to and which is only ever used in the shouldComponentUpdate function. This means we need a cancel function and about half a dozen other places to track state that are not present in the Angular version.

cancel: function () {
  this.setState({editing: null});
}

Some extra code in React is needed in order to show optional components because it requires making an extra variable which sometimes has its value set:

var footer = null;
if (activeTodoCount || completedCount) {
  footer =
    <TodoFooter
      count={activeTodoCount}
      completedCount={completedCount}
      nowShowing={this.state.nowShowing}
      onClearCompleted={this.clearCompleted}
    />;
}

In Angular, no extra lines are required to show an optional element; instead you simply use ng-show or ng-if:

<footer id="footer" ng-show="todos.length" ng-cloak>

Similarly, switching between all, completed, and active todos is quite cumbersome in React:

var shownTodos = this.state.todos.filter(function (todo) {
  switch (this.state.nowShowing) {
    case ACTIVE_TODOS:
      return !todo.completed;
    case COMPLETED_TODOS:
      return todo.completed;
    default:
      return true;
  }
}, this);

That took 10 extra lines for something that takes no extra lines in Angular:

<li ng-repeat="todo in todos | filter:statusFilter track by $index" ng-class="{completed: todo.completed, editing: todo == editedTodo}">

A big portion of the extra lines in the React example are also due to coding style. HTML elements which would more typically be placed on a single line have been split between several in the React example:

<input
  id="toggle-all"
  type="checkbox"
  onChange={this.toggleAll}
  checked={activeTodoCount === 0}
/>

Here’s that same code in the Angular example:

<input id="toggle-all" type="checkbox" ng-model="allChecked" ng-click="markAll(allChecked)">

Most of these differences are just implementation or style differences. I think it’s nice to show each example using the idioms of that framework. It may also be nice for TodoMVC to establish some basic guidelines so that apps get implemented analogously (e.g. whether to use a submit listener or a keypress listener to determine item completion). The one thing that I think would be really annoying is the manner in which React handles if statements in templates, which is much more verbose than the way this is done in Angular. I’m also curious about the React implementation’s aversion to mutating state, which seems quite unique to that framework.

Getting started with the Go Language


Here are a few tricks I picked up when trying to get started with go.

Firstly, don’t install Ubuntu’s version of go as you’ll get an old one. Instead, use gvm to install go1.2 or whatever the latest is.

Secondly, be careful to structure your code in the right directory format. If you do not have the correct directory structure, you will hit errors such as “cannot find package” when running “go test”. E.g. if you create a directory named gocode and want to check code out from GitHub into that directory, the checkout must live under src/github.com/organization, such as:

gocode/src/github.com/organization/project
You’ll need to add the directory containing all of your Go code to the GOPATH environment variable. E.g. I did this with:

export GOPATH=$GOPATH:/home/${USER}/src/gocode

You’ll also probably want godep installed since many packages use it for managing dependencies:

go get github.com/tools/godep

Automated Play Framework Testing with Jenkins


Jenkins automates builds and tests. This post describes setting up Jenkins for the Play 2 Framework.

First off, you need a machine with a good amount of resources. I tried first on a small cloud machine with 2GB of RAM and it was not sufficient, so get a machine with 4GB of RAM.

Next you need to install Java and SBT. Also install git if you use it for source control.

sudo apt-get install openjdk-7-jdk git
wget http://repo.scala-sbt.org/scalasbt/sbt-native-packages/org/scala-sbt/sbt/0.13.0/sbt.deb
sudo dpkg -i sbt.deb

Now that you have Java installed, you can install Jenkins:

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

To be able to have Jenkins check the source code out of GitHub or BitBucket, you need to set up an SSH deploy key. First create an SSH key:

sudo su jenkins
ssh-keygen -t rsa
cat /var/lib/jenkins/.ssh/id_rsa.pub

Copy the key into GitHub/BitBucket as a deploy key.

Accept the remote host as a machine to trust connections to:

git ls-remote -h git@host:org/repo.git HEAD

Now go to the Jenkins web UI. First, install the Jenkins GIT plugin and the Jenkins sbt plugin.

Then, setup the system configuration under “Manage” > “Configure System”. Point the sbt plugin to the launcher jar we installed earlier at /usr/share/sbt/bin/sbt-launch.jar.

You’ll also want to set up some type of notification for failed builds. Set the “System Admin e-mail address” under “Jenkins Location”, which is the address your alerts will come from, and set the SMTP host, username, and password under “E-mail Notification”. I recommend installing the Email Extension Plugin in order to customize the emails that you’ll receive; you can then set your project to use Editable Email Notification as a Post-build Action. With the Email Extension Plugin, you’ll need to choose Advanced Settings… and then select Recipients or else your emails won’t go to the recipients you’ve specified (a frustrating option that should not exist, let alone be unselected by default). To include the build log in the email, add ${BUILD_LOG,maxLines=10000}. I also suggest adding a trigger so that you get notified both when a build fails and when the build is fixed.

At this point, you can create a “New Job” selecting “Build a free-style software project”. Enter your git repo location, how often to build, and set it to build with sbt. Enjoy!
