You NEED to start using Vagrant for Web Development.

Vagrant is an exceptionally useful tool for pretty much every web development project. Rather than provide you with a step by step guide to using it (there are much better, more educated resources out there for that), I’m going to do my best to make a solid case for why you should be using it. There are very good reasons to jump on this train: whether you’re in development, design or management, everyone can benefit from this.

First though, what is Vagrant? It’s a tool for building development environments in a completely automated fashion. It takes the process of setting up your environment (installing databases, interpreters, web servers and so on) and completely automates it within a sandbox. Once you’ve created your configuration files, you can get a development setup onto ANY computer by installing VirtualBox and Vagrant, then running a single command: `vagrant up`.

It’s easier to show than tell, so let’s take a look at some ‘early days’ config files for Pomodoro Go, a side project I’m working on. If this kind of thing is not your bag (it’s developer focussed), skip past all this to find the key advantages that justify the trouble. Here’s the config:

Vagrant.configure("2") do |config|
  config.vm.box = "precise64chef"
  config.vm.box_url = "http://grahamc.com/vagrant/ubuntu-12.04-omnibus-chef.box"

  config.vm.network :forwarded_port, guest: 9000, host: 9010
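
  # Chef is used below, but while getting your feet wet you could provision with
  # a plain shell script instead (hypothetical script path):
  # config.vm.provision :shell, path: "deployment/bootstrap.sh"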
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = [
      "deployment/cookbooks",
    ]
    chef.add_recipe "pomodorogo"

    chef.json = {
      "postgresql" => {
        "password" => {
          "postgres" => "postgresql's_password",
          "vagrant" => "vagrant's_password"
        }
      }
    }
  end
end

This is the base of Vagrant’s configuration. When you run Vagrant, it starts off with a ‘box’, which is basically an image of an operating system that forms the base of your setup. The box URL means Vagrant will download it automatically if required. Once the box is up, you can ‘provision’ it: installing required software, creating databases and users, and so on.

In this case I’m using Chef, an automated configuration manager, but you can also use basic shell scripts when you’re getting your feet wet (a commented-out example sits in the config above). You’ll move on pretty quickly to something like Chef or Puppet though, because they are much easier to work with for a detailed configuration. Most of the detail of this setup lives in the `pomodorogo` recipe, but that would be a bit heavy for this blog post. Once provisioning is complete, your box is up and running, ready to go.

From here you can SSH into the box using the command `vagrant ssh` and do whatever you like. In addition, any web servers you run will connect to your local machine through port forwarding, and the development folder gets synchronised with the box, so you can work on the project locally once the server is running.

Why Bother with Vagrant? Reasons.

That’s a pretty quick look at it all; here’s the list of reasons WHY this is worth the trouble.

  • Designers don’t need developers to spend a day setting up their machine. A proper configuration means a single command will build up everything they need to run the project on their machine while they go grab a coffee.
  • New People can be brought into the project with much less friction as the Vagrant configuration is kept with your version control system. This is especially valuable for very small teams or solo developers looking to bring in new people, where everyone’s time is at a premium.
  • Deployment configuration is upfront in the Vagrant process, so when it’s time to take your project live your setup files will be up to the task with a little bit of tweaking. This knowledge and confidence will greatly smooth any launch you need to do.
  • Testing can happen on a setup much closer to production, but still in a development scenario, which again makes your deployment process a much nicer experience.
  • Developers in teams will spend a lot less time working on ‘it worked on my machine’ issues, so they’ll be able to devote more time to adding value, not chasing ghosts.

Please give it a try; the learning curve is entirely worth the effort, as the potential gains for your projects are enormous.

Testing JohoDB with Jasmine

I’ve never really engaged in much unit testing with Javascript. As a solo developer I have to be ruthless about where the years of my life go when implementing a given application, and since Javascript was typically easy to observe when used only for front end effects, I saved my testing rigour for the data layer of my web apps, where all the insidious stuff can happen. Not ideal, but it’s the best I can do given current resources.

Testing the JohoDB application using Jasmine

Of course, when writing libraries it’s a whole different game. With JohoDB I’m deploying against a moving specification, with new technology and a wide variety of browsers. There’s no option but to test every feature if I’m serious about going live with it. I pretty quickly went with Jasmine, and have been able to customise its output very easily to get what I wanted out of my test page.

Here are some features that come out of the box with Jasmine:

  • You can create your test suites in a hierarchical structure, making them easier to follow.
  • You can click on individual test suites to filter the run down, speeding up testing when you’re working on a particular feature. In my case I customised it a little so some ‘setup suites’ would always run.
  • Each error comes with a message and a traceback to help you get started on the bug hunt.
  • Asynchronous functions can be given a time limit, again with a message so you can figure out where the chain broke.
  • Everything is run through functions, so if you need to run a test suite multiple times with different input you can do so easily. In my case I have a function containing the tests, which gets passed the IDB database and then the SQL database, so I know both are being tested with exactly the same API calls (there’s a sketch of this pattern just after the list).
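
To make that last point concrete, here’s a minimal sketch of the dual-backend pattern using Jasmine’s async helpers (runs/waitsFor). The names runCrudSuite, getIdb and getSql, and the query/save calls, are illustrative only, not JohoDB’s actual API:

  # Hypothetical helper: defines one suite, parameterised by storage backend.
  runCrudSuite = (backendName, getDb) ->
    describe "#{backendName}: basic CRUD", ->
      it "saves and retrieves a record", ->
        result = null
        runs ->
          getDb().query('Task').save(name: 'test').then (saved) ->
            result = saved
        # The asynchronous time limit, with a message for when the chain breaks.
        waitsFor (-> result?), "record was never saved", 1000
        runs -> expect(result.name).toBe 'test'

  # Both backends get exercised with exactly the same API calls.
  runCrudSuite 'IndexedDB', getIdb
  runCrudSuite 'WebSQL', getSql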

Jasmine is something you should seriously look into if you’re writing non-trivial applications in Javascript. It has very quickly enabled me to clean up some portions of my code in JohoDB.

To be specific, I’ve now got the database encapsulated properly so I can run the two side by side, no dramas, and delete and rebuild the database for every single test suite. It really is a stress test of how well the promise system is holding up to keep the API manageable, and it’s going great so far.
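
The rebuild looks something like this, a sketch assuming hypothetical promise-returning deleteDatabase and init calls on the db object:

  beforeEach ->
    ready = false
    runs ->
      # Wipe and recreate the schema so every suite starts from scratch.
      db.deleteDatabase().then(-> db.init()).then -> ready = true
    waitsFor (-> ready), "database was never rebuilt", 2000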

There’s still more work to do, but once the existing library is well tested, it’ll be time to build up features so I can reach a version to test in an actual application. I can’t wait.

Why I Decided to Invent the Wheel – Browser Databases

Offline storage is a moving target in web application development. IndexedDB is becoming commonly accepted, but in plenty of browsers WebSQL is still all you’ve got when it comes to running a database on the client. When I wrote the Elephant Prototype I built it entirely with IndexedDB because I wanted to learn how to work with it in a large scale application, and seeing as the standard was gravitating towards it, I figured I may as well get in front of the game.

I achieved my goals, but as this was my first non-trivial Javascript application rather than just a jQuery-and-AJAX affair, I made a few decisions that weren’t maintainable long term. The major ones: the API I wrote was entirely IndexedDB specific, and it was entirely callback driven, which became remarkably messy for the more complex queries that the standard IndexedDB API can’t express on its own.

With that, I knew I had to seriously improve my approach to offline storage if I was serious about this whole offline web application thing. Local and Session Storage are adequate for simpler, easily JSON-serialised data, but for a real application with genuine relationships and all that good stuff, it’s just not the business.

Investigating My Options

In getting started with my recent project, Pomodoro Go!, I immediately wanted to tackle this problem. I tried JayData, which claimed full support for IndexedDB and WebSQL, but I came up against two solid roadblocks very quickly:

  • JayData does not handle relationships by itself, so its value was already diminished for me.
  • JayData was already having issues in some fairly trivial examples involving IndexedDB, mainly race conditions causing things to just not work.

After trying that and getting disillusioned, I had a look at the other solutions around, and they either didn’t support IndexedDB yet or were trying to reinvent storage with their own layer of stuff. All I wanted was a unified API that could use WebSQL or IndexedDB as they were designed, without the headaches, but I couldn’t seem to find it.

Enter JohoDB

With that I gave up on finding some saviour library to make my life easier, and decided to write my own, now called JohoDB, to handle either WebSQL or IndexedDB through an API similar to Django’s Object Relational Mapper. It would address the following requirements:

  • Same output and input regardless of database.
  • Decoupled development of the main API and the database API, so I could respond to new database technologies in the future without a major rewrite.
  • Decoupled usage from application specific code, allowing a high level of flexibility. Your models don’t have to have a .save() method like in Django; rather, you send your object to the API for it to get saved (there’s a sketch of this just after the list).
  • Promise driven rather than callback driven, to make the asynchronous nature of Javascript storage much less of a headache, as well as allowing the developer a much greater level of control.
  • Leverage existing technology without reinventing the wheel. The standards are standards for a reason, and JohoDB at the base level should always use those standards as they were intended rather than trying to create its own storage layer.
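
Here’s a rough sketch of the kind of usage those requirements imply. The schema and query calls are illustrative guesses at the shape of the thing, not JohoDB’s exact API:

  db = new JohoDB 'pomodoro'

  # Relationships are declared in the schema, much like Django's ORM.
  db.schema.addModel 'Task',
    name: {type: 'string'}
    project: {type: 'foreignKey', relatedModel: 'Project'}

  # No .save() on the object itself: hand it to the API, get a promise back.
  db.query('Task').save(name: 'Write blog post')
    .then (task) -> console.log 'saved with id', task.id
    .fail (err) -> console.error 'save failed', err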

With that in mind, I set to work and now have something to show for it. While not at all production ready, I’m quite confident it meets all the above challenges: JohoDB. The linked website demonstrates the API from top to bottom; I’ve implemented Foreign Key and Many To Many relationships, and it works in both WebSQL and IndexedDB. It’s been open sourced, so feel free to check out the source code.

I’m really excited about this right now. Currently I’m working on building up a good suite of tests for it and building momentum towards something production ready, but it’s certainly already looking bright from my perspective.

Pomodoro…… GO!

I find myself with the time to start a new project, and I’m planning on logging the development process, so here goes. It’s going to be a digital implementation of the small but very helpful productivity process called the Pomodoro Technique.

The key goal of the project is to create a comprehensive implementation of a HTML5 Offline Application that addresses the following criteria:

  • Client Application for Mobile and Desktop that works with or without an internet connection.
  • Server Application that functions entirely as an API, for authentication, backup and synchronisation.
  • Support for client side translation.
  • Full application is accessible for screen readers, and usable by those without sight.
  • Application is usable across the full spectrum of modern mobile and desktop browsers, at least on the mobile side: iOS, Android and WinPhone; and on the desktop side: IE9/10, Chrome, Firefox, Safari, Opera.

While I don’t know how things will turn out in the end, my initial plan is to use the following tech:

Client Side

  • AngularJS for client side application control, templates and code modularity.
  • JayData for offline storage abstraction, to support both WebSQL and IndexedDB without having to write low level code for both.
  • ngTundra, an Angular Module I’ve written for client side translation.

Server Side

  • Written in Scala, the ethos being that the server API should be as rock solid as possible to keep all the clients honest. This means type safety, plus a functional style wherever possible to reduce side effects.
  • More exploration needs to be done in this space, but I’m hoping to work with Finagle + routing delivered through something like Finatra.
  • The server will avoid doing anything beyond API communication as much as possible.

Anyway, it’s just the beginning and at the moment I’m scaffolding the code base and wire-framing. I’ll use this blog as a development log, looking to talk about all sorts of things such as implementation details, the tools I’ll be using, and any major difficulties I experience working through the above plan. Here we go!

Generating a Javascript Loader with Grunt

If you’re building a web application you’re going to end up with a sizable body of Javascript code, which you’ll usually want to keep as separate files until deployment. The best way to do this without having to change the HTML on deployment is to use a generated loader file that loads the full body of Javascript code.

Recently I’ve made a Grunt-specific task for automating this, based on ways I’ve done it in the past. It’s written in Coffeescript, but it’d be really trivial to change for your purposes. The script also assumes the presence of head.js. You can find the Gist of it here: https://gist.github.com/MalucoMarinero/5473658

You can then configure it using options like the ones below.


  grunt.initConfig {
    loader:
      desktopClient:
        # The directory to look for javascript files.
        srcDir: 'www/clients/desktop/js'

        # The loader file to generate
        dest: 'www/clients/desktop/js/loader.js' 
        
        # Javascript files that must be loaded first due to dependency; when they
        # match they'll get pushed to the top of the head.js call.
        priorityMatches: ['**/app.js', '**/module.js']

        # Patterns which will not get included in the loader. If the loader ends up in the source
        # you'll want to make sure the loader filename is in there.
        ignorePattern: 'loader.js'

        # A URL Prefix to make sure the filenames work in the browser. Without it all file paths
        # will be relative to the srcDir.
        prefix: 'js/'

        # What global variable the loader call will be attached to. This makes the loader a function
        # that can perform callback actions once the load is complete.
        varName: 'desktopLoader'
  } 

Once the loader is generated, all you need on your HTML page is to load that single loader.js file and then run the function, at which point your application will come into being. Any initialisation that needs to happen after loading can be done by running the function with a callback.
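
For example, with the config above the HTML only ever includes loader.js, and your page script does something like this (app.start is just a stand-in for whatever initialisation you need):

  # loader.js attaches a desktopLoader function to the window (per the varName
  # option above); calling it loads every file via head.js, then fires the
  # callback once the whole body of Javascript is in.
  desktopLoader ->
    app.start()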

When it comes time for deployment, the same HTML is all you need; the loader will just point to a concatenated and minified file instead, and everything will work as before. Hope you find it useful.

Resolving LiveReload Conflicts in Grunt

Grunt is an excellent automation tool and part of the Yeoman toolset. It has this ridiculously cool feature that allows you to watch files for changes and then tell every browser looking at your webpages to refresh — mobile, tablet, desktop, whatever — and it does this without browser extensions. It’s pretty speccy.

There’s a problem though: if your HTML, CSS and JS are all generated from HAML, SASS and Coffeescript like mine, it doesn’t work like you’d expect it to:


# Examples in Coffeescript, that's how I roll.

  grunt.initConfig {
    watch:
      livereload:
        files: [
          'src/example/**/*.{haml,coffee,sass}'
          'www/example/**/*.{html,js,css}'
        ]
        tasks: ['coffee', 'compass', 'hamlpy', 'livereload']
  }

Rather than compiling the Coffeescript, SASS and HAML and then firing a LiveReload, the reload never happens: the watch event doesn’t pick up the changes to the generated HTML, CSS and Javascript. Makes sense really, it’s not watching during the task run, so it misses them.

You can’t run two watch tasks concurrently though, unless you open another terminal session. If you want to keep it all in one process, you can do this:


  mm = require 'minimatch'

  grunt.initConfig {
    watch:
      livereload:
        files: [
          'src/example/**/*.{haml,coffee,sass}'
          'www/example/**/*.{html,js,css}'
        ]
        tasks: ['reloadDispatcher:example']
    reloadDispatcher:
      example:
        "**/*.haml" : ['hamlpy']
        "**/*.sass" : ['compass']
        "**/*.coffee" : ['coffee']
        "**/*.{html,css,js}" : ['livereload']
  }

  grunt.registerMultiTask 'reloadDispatcher', 'Run tasks based on extensions.', () ->
    # For each pattern in the config, check whether any of the files the watcher
    # (grunt.regarde) saw change match it; if so, queue that pattern's tasks.
    for pattern, tasks of this.data
      if grunt.regarde.changed.some mm.filter pattern
        grunt.task.run tasks

With this new task, the list of changed files that the watcher (grunt.regarde) detected is checked against the patterns in the config, and for any matches the appropriate tasks are run. Dunno how it will behave in bigger projects, but that’s enough experimentation for now.

Autoformatting Indented SASS

Personally I like to use SASS in its indented syntax. It’s easy to read, and because it’s sensitive to white-space you can change the structure very quickly using simple commands in Vim.

After watching a demonstration of Tabular I added a handy little function that automatically formats your properties as you write them, so they are lined up and easy to read like this:

span.day
  display:        block
  font-size:      ms(2)
  text-align:     right
  border-bottom:  1px solid $bg_color
  line-height:    1.25em
  margin-bottom:  0.25em

Here’s how you do it. Start off by installing the Tabular plugin with your plugin manager of choice; I use Vundle. Once you’ve done that, pop the following into .vim/ftplugin/sass.vim:

inoremap <silent>  :   :<Esc>:call <SID>align()<CR>a
function! s:align()
  let reg = '^\s*[a-z\-]*:\s*'
  if getline('.') =~# reg && (getline(line('.')-1) =~# reg || getline(line('.')+1) =~# reg)
    Tabularize /:\zs
    if getline(line('.')-1) =~# reg
      let endpos = strlen(matchstr(getline(line('.')-1), reg))
    else
      let endpos = strlen(matchstr(getline(line('.')+1), reg))
    endif
    normal! 0

    let currentlinelen = strlen(getline('.'))
    if currentlinelen < endpos
      exe "normal! A" . repeat(' ', endpos - currentlinelen) . "\<Esc>"
    else
      call cursor(line('.'), endpos)
    endif
  endif
endfunction

All done, from now on when you’re in a SASS file Vim will automatically line up your properties when you type the colon, so the entire set stays neatly formatted as you go.