Hoptoad and Javascript, Sitting in a Tree, S-E-N-D-I-N-G - GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS


Hoptoad and Javascript, Sitting in a Tree, S-E-N-D-I-N-G – GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS.

I’m really excited about this new feature of Hoptoad.  I’ve played around with ExceptionHub, but it was missing some important features like team management.  Leveraging Hoptoad for this kind of JavaScript/browser-level error tracking really cleanly combines two useful and similar tools for debugging a running system.

I should add that I echo the security concerns raised by some of the commenters on the linked blog post.  It would be helpful if thoughtbot followed this up with a post addressing those concerns in a bit more detail.

All in all, though, this is a nice addition.

Rethinking “F@#$ You Money” – Tony Wright Dot Com


Rethinking “F@#$ You Money” – Tony Wright dot com.

This is a very interesting take on what retirement is starting to look like for a lot of people.  I have thought for a while that retirement wasn’t really a goal I felt like I was moving towards, but that doesn’t mean I don’t want to reach a point of financial security where money isn’t a concern.  It just means that I don’t want to stop doing the things that I love to do.  I’ve been lucky enough to find joy in my work and, moreover, to find an entire category of employment that makes me happy to be a part of.

What’s interesting about this post is the idea of making what you love sustain you (financially) for longer than your job’s paycheck might.  Tony’s thinking runs in a similar direction to what you see from Gary V in his book Crush It, which I highly recommend.  I can’t think of a better way to make a living than to share the things that I love to do with a broader audience.

4 Items That Have Reinvented My Workouts


4 items that have reinvented my workouts.

I know Josh from our ELC days, and he’s always been much better than me at maintaining a workout routine.  I’ve been struggling for a number of years now to find a routine that I could maintain for more than a few weeks.  These types of posts really inspire me to try again and keep it up.  I think staying active has a huge impact on my happiness.

New Job at ProFounder


In case you missed it on Twitter and actually care about recent events in my life, I just started as CTO at ProFounder this week.  ProFounder is a really cool company with a very exciting mission: we are helping small businesses raise money from their friends and family while still adhering to the law, which is way more complicated than you’d think.  I’ll be talking more about it in the future, as well as getting into what makes this such an interesting business/tech problem.

One of my first tasks is to get an alpha released and to find a second developer to help me complete the app for a public launch this Fall.  If you are interested, check out our job posting.

User Stories and Mind Mapping


I recently read an article by Robert Dempsey about how he discovered mind mapping as a way to manage user stories.  His technique was interesting, and it gave me the excuse I needed to take another shot at mind mapping.

I’ve tried mind mapping in the past and it never really stuck for me.  It was a little too free-form; I needed some structure for my ideas, which is why I was looking for something in the first place.  Robert’s post briefly describes the structure he uses for forming user stories, and his method made the idea click a little bit better.  Basically, he starts with the project in the center, and the first ring out contains the different actors.  Hanging off the actors are the actions that they should be able to perform.

I downloaded the 30-day trial of MindJet MindManager and tried Robert’s technique out on my current project.  It was a lot easier to get started than it had been on previous attempts with mind mapping, but I kept feeling like something was missing.  Here’s what finally clicked for me: what the user does is less important than why they want to do it.

In the classic user story we have an actor, an action, and a business value that the story provides (As an <actor> I want to <action> so I can <business value>).  I slightly modified Robert’s approach and added a level for business value: the “so I can” clause of a user story.  Now I have the project in the center, then the actors, and then I start listing out the business value that each actor wants to get from the app we are building.  Under each of these leaves I can then start describing actions that would provide the actor with that business value.
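To make that structure concrete, here is a rough sketch of the hierarchy as plain Ruby data.  The project name, actors, values, and actions below are invented examples for illustration, not taken from my actual project:

```ruby
# A mind map as nested Ruby data: project -> actors -> business values -> actions.
# All names here are made up purely to show the shape of the hierarchy.
mind_map = {
  "Fundraising App" => {
    "Entrepreneur" => {
      "raise money from friends and family" => [
        "create a fundraising pitch",
        "invite investors by email",
      ],
      "stay compliant with the law" => [
        "generate required disclosure documents",
      ],
    },
    "Investor" => {
      "track my investment" => [
        "view repayment history",
      ],
    },
  },
}

# Walk the tree from the center outward and read each leaf back
# as a classic user story.
stories = mind_map.flat_map do |project, actors|
  actors.flat_map do |actor, values|
    values.flat_map do |value, actions|
      actions.map { |action| "As an #{actor} I want to #{action} so I can #{value}" }
    end
  end
end

puts stories
```

Flattening the map back into story sentences is a nice sanity check: every action must hang off a business value, or it can’t be read back as a complete story.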

This really made things click.  My ideas started to come together, and I feel like the result is a set of clear user stories focused on customer business value.  I need to thank Robert again for convincing me to dig into mind maps again.  I think this will be a big help in focusing my ideas into something communicable and implementable.

Up and Running With MagLev


The MagLev alpha was released recently.  Before I get too far into this post I need to make it clear that I’m not affiliated with the MagLev development team.  I’m not really even much of a Ruby interpreter hacker; I’m a curious Ruby developer who has heard some interesting things about the project and wanted to get it up and running now that it’s available.  I decided to write this post because the install and setup procedure is anything but standard.  It’s not complicated, just not what you would normally expect.

First, let’s get the code:

[sourcecode language="bash"]
$ git clone git://

Initialized empty Git repository in /Users/rgarver/Sources/maglev/.git/
remote: Counting objects: 28955, done.
remote: Compressing objects: 100% (12671/12671), done.
remote: Total 28955 (delta 15669), reused 28427 (delta 15200)
Receiving objects: 100% (28955/28955), 14.97 MiB | 539 KiB/s, done.
Resolving deltas: 100% (15669/15669), done.
Checking out files: 100% (2180/2180), done.

$ cd maglev
[/sourcecode]

Great, we have the code.  The next step is to do a base install.  This installs the base libraries and GemStone, the fabled persistence layer that MagLev has integrated.  GemStone is an object persistence layer originally built for Smalltalk.  If you haven’t ever played with Smalltalk or one of its variants (e.g. Squeak), I recommend it.  It will turn your head upside down.

[sourcecode language="bash"]
$ ./
[Info] Starting installation of MagLev-22578.MacOSX on sirius.local
Sat Nov 21 09:22:44 PST 2009
[Info] Setting up shared memory
Total memory available is 4096 MB
Max shared memory segment size is 4 MB
Max shared memory allowed is 4 MB
[Info] Increasing max shared memory segment size to 2048 MB
kern.sysv.shmmax: 4194304 -> 2147483648
[Info] Increasing max shared memory allowed to 2048 MB
kern.sysv.shmall: 1024 -> 524288
[Info] Adding the following section to /etc/sysctl.conf

# kern.sysv.shm* settings added by MagLev installation
[Info] Setting up GemStone netldi service port
[Info] Adding "gs64ldi 51456/tcp" to /etc/services
[Info] Downloading GemStone archive using /opt/local/bin/wget
--2009-11-21 09:22:44--
Connecting to ||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 74858717 (71M) [application/zip]
Saving to: `'

100%[==================>] 74,858,717 847K/s in 1m 45s

2009-11-21 09:24:32 (694 KB/s) – `’ saved [74858717/74858717]

[Info] Uncompressing GemStone archive into /Users/rgarver/Sources
[Info] Linking gemstone to /Users/rgarver/Sources/GemStone-22578.MacOSX
[Info] updating MSpec, RubySpec, and RBS submodules
Submodule ‘benchmark’ (git:// registered for path ‘benchmark’
Submodule ‘spec/mspec’ (git:// registered for path ‘spec/mspec’
Submodule ‘spec/rubyspec’ (git:// registered for path ‘spec/rubyspec’
Initialized empty Git repository in /Users/rgarver/Sources/maglev/benchmark/.git/
remote: Counting objects: 7332, done.
remote: Compressing objects: 100% (5521/5521), done.
remote: Total 7332 (delta 1595), reused 6917 (delta 1274)
Receiving objects: 100% (7332/7332), 9.90 MiB | 578 KiB/s, done.
Resolving deltas: 100% (1595/1595), done.
Submodule path ‘benchmark’: checked out ‘d807eea7f7b2f38240bc177a0c22e599081882ea’
Initialized empty Git repository in /Users/rgarver/Sources/maglev/spec/mspec/.git/
remote: Counting objects: 2745, done.
remote: Compressing objects: 100% (1080/1080), done.
remote: Total 2745 (delta 1848), reused 2484 (delta 1644)
Receiving objects: 100% (2745/2745), 378.57 KiB | 383 KiB/s, done.
Resolving deltas: 100% (1848/1848), done.
Submodule path ‘spec/mspec’: checked out ‘bcec47c70e0678a29fd0c1345358c4daf7b971a3’
Initialized empty Git repository in /Users/rgarver/Sources/maglev/spec/rubyspec/.git/
remote: Counting objects: 26787, done.
remote: Compressing objects: 100% (8705/8705), done.
remote: Total 26787 (delta 18332), reused 25672 (delta 17482)
Receiving objects: 100% (26787/26787), 3.71 MiB | 520 KiB/s, done.
Resolving deltas: 100% (18332/18332), done.
Submodule path ‘spec/rubyspec’: checked out ‘b0a18cf80dc706d39ee550831b8b941224b60fb6’
[Info] Creating new default ‘maglev’ repository
[Info] Generating the MagLev HTML documentation
[Info] Finished upgrade to MagLev-22578.MacOSX on sirius.local

[Info] MagLev version information:
maglev 0.6 (ruby 1.8.6) (2009-11-20 rev 22578-1067) [x86_64-linux]
GEMSTONE: 3.0.0 Build: 64bit-22578
MONTICELLO: MagLev-ao.1067.mcz
MAGLEV: commit e2a4fe2e0f7ca85cdcb141e6b56913eba802eefd
Author: Allen Otis <>
Date: Thu Nov 19 19:57:09 2009 -0800
[Info] GemStone version information:
GemStone/S 64 Bit
3.0.0 Build: 64bit-22578
Fri Nov 20 8:22:00 2009

[Info] Adding these to your .bashrc will make it easier to run MagLev
export MAGLEV_HOME=/Users/rgarver/Sources/maglev

[Info] After you complete this upgrade and verify MagLev is working, run
rake stwrappers
to generate the .rb files for the GemStone/Smalltalk FFI
in MAGLEV_HOME/lib/ruby/site_ruby/1.8/smalltalk/
[/sourcecode]

As you can see, on OS X it builds everything for 64-bit, which is pretty cool.  It also downloaded a bunch of support libraries and updated all of the submodules.  If you ever update the code locally, you are supposed to run ‘$ ./’ again to rebuild everything and get it all up and running.

Once you have it installed, you should add the following lines to your .profile or .bashrc:

[sourcecode language="bash"]
export MAGLEV_HOME=/Users/rgarver/Sources/maglev
[/sourcecode]

You’ll need to run those lines in your current shell as well.  Once the environment is set up you can run ‘$ rake maglev:start’, which boots up the core MagLev engine.
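For reference, here is roughly what my shell setup ends up looking like.  The clone path matches the one used above; the PATH line is my own addition (the installer output only mentions MAGLEV_HOME), so treat it as an assumption:

```shell
# Point MAGLEV_HOME at the cloned repository (adjust to your checkout path).
export MAGLEV_HOME="$HOME/Sources/maglev"
# My own addition: put MagLev's bin/ on the PATH so commands like
# maglev-irb resolve without typing the full path.
export PATH="$MAGLEV_HOME/bin:$PATH"
echo "MAGLEV_HOME=$MAGLEV_HOME"
```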

[sourcecode language="bash"]
$ rake maglev:start
(in /Users/rgarver/Sources/maglev)
startstone[Info]: Starting Stone repository monitor "maglev".
startstone[Info]: GemStone server 'maglev' has been started.
[/sourcecode]

Once that is started, you are good to go:

[sourcecode language="bash"]
$ maglev-irb
error , no such file to load -- readline,
during /Users/rgarver/Sources/maglev/lib/ruby/1.8/irb/completion.rb
error , no such file to load -- readline,
during /Users/rgarver/.irbrc
irb(main):001:0> puts 'hi'
=> nil
[/sourcecode]

SPDY Looks… Possible


Google recently announced SPDY, a protocol they’ve been working on to address a number of inherent performance problems in HTTP, the protocol most of the web depends on.  In the last few years the web has shifted much more towards real-time applications, and web application development is starting to aim for interaction experiences much closer to desktop apps.  It’s not out of bounds to consider the response times of certain queries on a website in terms of keystrokes (~200ms).  Updating the request/transmission protocols to catch up with this change makes sense.

One thing that I am happy about with SPDY is that it appears to be built with deployment clearly in mind.  This isn’t the first attempt to improve web speeds, and it may not even be the best, but it does appear to be the simplest to deploy into the wild and see rapid adoption.  If Apache and Firefox gained support for SPDY out of the box, and it was shown that using the protocol improved server throughput, that would be enough to shift most websites over.  That’s only two players.  That’s pretty promising.

Unanticipated Externalities or the 6 Week Collapse


Our development team recently went through a transition period that involved the introduction of a couple of new team members.  We aggressively track velocity week to week.  These numbers not only help in planning releases, but also gauge the health of the team and process.  I generally disregard the first few sprints (one sprint = one week) while the team gets comfortable with each other and the tools.

I should note here that the team is using a Fibonacci scale of estimates and generally has features between 1 and 5 points.  This project is also made up of significant legacy code and is being “stabilized”.  Bugs come in regularly and don’t get estimated.  Big changes to the application need to take into account existing users and their similarly legacy data.  (Legacy here means old and originally developed with minimal QA under tight time constraints.)


The first 5 sprints for this team were quite encouraging.  After a 3-week bootstrapping period there was a strong sense that the team was building up to a strong pace.  The team had a rough sprint 6 but seemed to bounce back the following week.  Then sprint 8 brought another collapse.  By that point we were looking at 4 weeks with only 20 points completed.  What was going on?  Sprint 4 had us expecting twice that pace.
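To put rough numbers on it (the point values here are invented to match the ballpark figures above, not our real data), the collapse looks like this:

```ruby
# Invented velocities in story points per one-week sprint.
# Sprints 1-3 are the bootstrapping period, sprint 4 is the pace we
# expected to sustain, and sprints 6 and 8 are the collapses.
velocities = [3, 5, 8, 10, 9, 2, 7, 2]

recent = velocities.last(4)       # sprints 5 through 8
total  = recent.sum               # 20 points in 4 weeks
pace   = total / recent.size.to_f # 5.0 points per week

puts "last 4 sprints: #{total} points (#{pace} points/week)"
puts "sprint 4 pace:  #{velocities[3]} points/week"
```

Averaged over the month the pace merely looks slow, but the sprint-to-sprint swing (9, 2, 7, 2) is what made planning impossible.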

I’m very fortunate to have a reliable team, a very skilled and experienced team lead, and a patient set of stakeholders.  As we began seeing the fluctuating velocities for what they were (a problem with our process), all of us began looking for causes and solutions.  I’ve seen this before: the team gets about 6 weeks into the project, everyone begins to feel confident that things are being done right, and then control starts slipping away.  We’d nail one sprint only to completely miss the next one.  It was frustrating and demoralizing.

We got through it.

The root cause came down to unanticipated externalities: a task would get held up because we needed an icon from the designers, or content for a new email, or our acceptance criteria were vague enough that developers couldn’t quite tell if they were done until QA approved or rejected the work.  The team was never quite sure whether their work was finished, and tasks would get rejected at the end of every sprint, often for minor issues.

What did we do to fix it?

The biggest change was to add detail to the acceptance criteria and make sure our QA process verified exactly against them.  The vague criteria were ultimately my fault, and getting QA to focus strictly on the ACs put pressure on me to get as much into the ACs as I could; otherwise I’d need to create a new user story to tune the feature, and that might mess up my timelines.  I like to call this approach “strategic pressure points”: strategically applying positive and negative pressures and side effects to encourage the best practices that we all say we should follow but often lose motivation for after a few tries.

The other shift in thinking came more as a side effect of the first: holding on to user stories until I had all of the content and graphics ready to go with them.  In an ideal world we’d be able to drop a designer directly onto the team and turn the graphics problem from an external issue into an internal one.  This gives the team (plus one designer) control over their ability to complete the stories they accept into a sprint.

The key to this turnaround was a return to the basics.  What do the numbers say?  What is going wrong or right, and what is causing frustration among the team members?  And what can you do to make things incrementally better?  Keeping our eyes on the metrics we were collecting helped us track the instability of our process and let us focus on specifics when looking for problems.  Constantly looking for things that aren’t working perfectly, and finding ways to make them slightly more perfect, helped us respond to the problems rationally and recover rapidly from the issues that were affecting us.