On Friday afternoon, we completed our migration from one tower server to three rack-mounted servers. This was a big move for us for a few reasons:
On the whole, the deployment went very smoothly. One thing that didn’t go as expected was that the connectors (our remote data collection devices) did not fail over when we changed the DNS. We use JMS to transmit data back to our Glassfish application servers, and JMS (at least the CORBA implementation) uses a double handshake to establish a connection between the client and the server. The application server that we were migrating from was configured to use its IP address instead of a domain name as its address. Because the clients had bound to this IP, we had to reset them before they would communicate with the new servers.
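The underlying lesson is general: a client that resolves a hostname at connection time follows a DNS cutover, while one bound to a literal IP keeps pointing at the retired machine. The JMS/CORBA specifics are beyond a quick sketch, but the resolution behaviour itself can be illustrated in a few lines (the broker port here is arbitrary, and `endpoint_for` is just an illustrative helper, not anything from our stack):

```python
import socket

def endpoint_for(broker_address, port):
    """Resolve the broker address at connection time.

    If broker_address is a hostname, each reconnect picks up DNS
    changes; if it is a hard-coded IP (as our old app server was
    configured), the client keeps pointing at the old machine.
    """
    # getaddrinfo performs a fresh DNS lookup for hostnames and
    # passes literal IPs through unchanged.
    family, _, _, _, sockaddr = socket.getaddrinfo(
        broker_address, port, proto=socket.IPPROTO_TCP)[0]
    return sockaddr

# A hostname follows whatever DNS currently says...
print(endpoint_for("localhost", 7676)[0])   # a loopback address
# ...while a literal IP is returned as-is, DNS cutover or not.
print(endpoint_for("127.0.0.1", 7676)[0])
```

Configuring the application server to advertise a domain name rather than its IP would have let the connectors find the new machines without a manual reset.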
This past week we ended up making two releases: one on Monday and the other late Thursday night. The Monday release was a carryover from the previous week finalizing new functionality for a very important demo on Tuesday. The Thursday release contained features for a newly connected client that was shown to them on Friday. And I was on the critical path for both releases – not fun.
Fortunately, I’m on a team that is highly supportive and collaborative. My teammates were continuously looking for ways to support the releases and alleviate the burden on each other’s shoulders. The Thursday release happened a little after midnight, and my teammates and our product manager/customer were there to regression test the build after it went into production.
In a previous life, when helping teams develop the capacity to compress their release cycles, I emphasized the importance of building truly cross-functional teams – teams containing all the roles necessary to carry a release from inception to deployment. At my current company this is easy: we just have developers and a customer. However, having a cross-functional team is not enough. You really need to have created a culture that is collaborative and supportive; a culture where there are no hand-offs and everyone takes responsibility for the success of the release. Without this kind of culture, frequent releases are just not sustainable.
Both releases were well received and will contribute to significant new business for our company. However, twice-weekly feature releases will, I hope, be a rare occurrence. I find they contribute significantly to the stress and potential burnout of the team (at least they do for me anyway). In a rapidly growing company, sometimes a weekly cycle is not fast enough. That said, this could have been averted through better planning and coordination. The weekly release cycle is new to our sales staff, and a better understanding of how it works will allow them to schedule accordingly in the future.
Last week at the Agile Vancouver conference, my colleague Jeremy Goldstrom presented on the process that we use at our company to deploy new releases to production every week. Aside from being a good distillation of our team process, the session led to some interesting follow-on discussion with others who were doing something similar.
Helping organizations compress their release cycles has been a passion of mine for a while, and it is something that I helped a number of teams achieve in my consulting work with ThoughtWorks. I worked on setting up a service offering called Rapid Response, devoted to helping enterprise clients build the capacity for monthly releases, but unfortunately it seemed to gain limited traction with the TW sales staff.
As a SaaS company with a rapidly evolving product offering, my current company is an ideal place for applying these ideas. I’ve resolved to try to capture the learnings I’ve gained through our weekly release process on this blog; if nothing else, it will encourage me to blog a bit more frequently. So stay tuned.
Several weeks ago I re-read Kent Beck’s Extreme Programming white book. It was in preparation for an Agile 101 course that I helped conduct for Agile Vancouver. This was my third time through the book and the first time reading through it in several years. Each time I revisit it, I find I get different things out of it — and this time was no exception.
Reading it this time, I was reminded of all the reasons why I was so taken by the book the first time I read it. The book speaks directly to our fears and failings as software developers. It does so with empathy and the promise of a better way of working. It inspires us to do better by working with our strengths and our weaknesses – our human nature – rather than working against them:
By embracing the human condition rather than ignoring it, XP is truly a humanist approach to building software.
In the intervening eight years since the white book was published, a lot has been written and said on the subject of XP. Much of it seems to be based on a rather loose interpretation of the original work. One of the enduring misconceptions is that XP is a militant, all-or-nothing methodology. The name, of course, doesn’t help. But neither does Bob Martin marching around the stage during his Agile 2008 keynote giving Nazi salutes and putting on a cringe-worthy German accent.
Re-reading the book, I was struck by how consistently Kent Beck advocates an incremental approach to adopting XP: start with your biggest pain point and try out an XP practice that might help; learn from the experience; and iterate. He acknowledges that there is a synergy between the different practices, such that the benefit of applying several practices in concert is greater than that of applying them individually. But there is no requirement for immediate full-scale adoption.
This is, of course, consistent with the humanist approach. It is hard to take on learning many things simultaneously, figure out how to apply them in context, and still reliably deliver value – to say nothing of the challenge of getting a team to communicate and coordinate that much simultaneous change. The benefit of having a team focus on making a single change together is easily underestimated.
Two other learnings that I got out of re-reading the book:
In summary, even after 8 years of leading XP teams and 3 times reading through the book, it’s still worth revisiting. If you haven’t read it in a few years, try taking another look.
Last week, I conducted a tutorial on Continuous Monitoring at the Agile 2008 conference in Toronto. The title of the session was Continuous Monitoring: Beyond Continuous Integration. Unfortunately, the track organizers changed the topic title on me twice, and as a result I ended up with a number of attendees who had come to learn about setting up an automated build server. Ack! Hopefully they didn’t go away disappointed and still got something valuable out of the tutorial.
The session was divided into three sections: I began with a presentation introducing the topic; next, participants worked in small groups to design an andon dashboard for their project teams; the remainder of the session was spent discussing the implementation details involved in building a dashboard. My plan for the latter half had been to have participants integrate metric data from different sources, via RESTful XML web services, into a simple Rails-based dashboard that I had thrown together, but given the size and interest of the group, it seemed easiest to just discuss the implementation rather than go through with the exercise. I had also intended to demo using a digital photo frame as a digital dashboard, but my photo frame couldn’t get onto the hotel’s wireless.
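The integration step we discussed boils down to fetching each source’s XML feed over HTTP and flattening it into whatever the dashboard renders. The exercise used a Rails app, but the parsing half of the idea can be sketched in a few lines of Python; note that the element and attribute names below are invented for illustration – a real feed (CruiseControl’s cctray.xml, for instance) has its own schema:

```python
import xml.etree.ElementTree as ET

def parse_build_statuses(xml_payload):
    """Pull the fields a dashboard cares about out of a build
    server's XML status feed: a map of project name -> status.
    The <projects>/<project> schema here is hypothetical."""
    root = ET.fromstring(xml_payload)
    return {
        project.get("name"): project.get("status")
        for project in root.iter("project")
    }

# A made-up feed standing in for an HTTP response body.
sample = """
<projects>
  <project name="billing" status="passing"/>
  <project name="connectors" status="failing"/>
</projects>
"""
statuses = parse_build_statuses(sample)
```

In a real dashboard you would fetch each feed on a timer, merge the resulting dictionaries, and render red/green panels from them.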
If you are interested in a copy of the presentation, I’ve uploaded it in Keynote and PowerPoint 2003 formats. Please feel free to use the contents of the slides. The presentation is done in the Lessig style, so it might not be the easiest to follow. If you end up presenting on the topic, let me know — I’d be interested to track the thinking and ideas as they evolve. Here’s the embedded slideshow from Slideshare:
As for the code that I used in the demo, I’ll get it uploaded to github soon.
Today is a proud day for Canadian athletes. No, I’m not talking about the Olympics. I’m referring to the World Ultimate Frisbee championships that concluded today in Vancouver. Canadian teams won gold in the Open (Men’s) and Mixed (Co-ed) divisions. The Open division final was the highlight with incredible displays of athleticism exceeding what I’ve seen so far in the Olympics.
In the Open final, the Canadian men’s team took on their American rivals. Despite being the underdogs, the Canadian team came out with an early lead due to some intense, high-energy play. As the rain started coming down and the wind picked up, the Americans began to resort to calling out fouls left, right and centre. Ultimate frisbee is a game without referees, where the rules are balanced to provide a fair system of play — assuming, of course, that players are displaying good sportsmanship. Time and again the Americans called fouls on legitimate defensive plays made by the Canadians. It was a shame to see such unsporting play. Fortunately, the Canadians really showed their class and shrugged off the controversial calls to prevail 17-15 in the end. I was proud to see the Canadian side rise above the pettiness and carry the victory.
While at DevTeach, I was interviewed by Scott Hanselman for his Hanselminutes Podcast. We started out talking about the history of the CruiseControl.NET project, but I opted to segue into discussing Continuous Monitoring. Continuous Monitoring focuses on providing continuous feedback to a team by leveraging visible dashboard displays to ambiently communicate information about the health and state of their project. I intend to write more about the practice here on this blog, but for now the podcast is the best place to learn more about it. I will be presenting about it at Agile 2008 and if you are interested in joining the discussion, feel free to join the Continuous Monitoring group.
There are a few statistics that I cited incorrectly off the top of my head during the podcast:
Last week I was out in Toronto presenting at DevTeach. I gave 3 presentations:
Unfortunately, I ended up attending relatively few of the sessions, as I was pretty busy preparing the materials for my presentations. But what I did see was quite good. I particularly liked Derek Hatcher’s Leveraging the Amazon Platform (EC2 and S3) and Greg Young’s DDDD, Unshackle Your Domain.
What I enjoyed most about the conference was getting to know and learn from some of the experts in a new technology circle. I missed last year’s DevTeach in Vancouver as I was in China at the time but I was glad to have made it out this one.
This past week I’ve been conducting a number of .NET-related presentations for the DevTeach Toronto conference. Unfortunately, the MacBook remote does not work by default in Windows and I wasn’t looking forward to the prospect of keying my way through my PowerPoint deck.
Fortuitously, I came across a handy little utility called EventGhost. EventGhost hangs out in your system tray, intercepts events from external devices, and then allows you to script the response to each event. It comes with a plugin for intercepting events from the MacBook’s IR receiver, which you can then map onto keystrokes.
To get going with EventGhost, you need to add a plugin for HID: Apple Computer, Inc. IR Receiver. Clicking the buttons on the remote control will then allow you to see the names for the various events. Next, create a macro for each event type and then choose the Emulate Keystrokes action to produce the right response. The screenshot below shows the settings that I use.
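EventGhost builds this mapping through its GUI rather than code, but conceptually each macro is just a lookup from an event name to a keystroke. A compact way to see the shape of it (the event names below are stand-ins – check EventGhost’s log for the exact strings your remote emits):

```python
# Hypothetical event names -> EventGhost-style keystroke codes.
# Real names appear in EventGhost's log when you press each button.
REMOTE_KEYSTROKES = {
    "HID.AppleIR.Right": "{Right}",  # next slide
    "HID.AppleIR.Left":  "{Left}",   # previous slide
    "HID.AppleIR.Play":  "{F5}",     # start the slideshow
}

def keystroke_for(event_name):
    """Return the keystroke to emulate for a remote event, or
    None if the button isn't mapped to a macro."""
    return REMOTE_KEYSTROKES.get(event_name)
```

One macro per entry, each ending in an Emulate Keystrokes action, is all a presentation remote needs.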
80% technical, 20% social change. This blog is dedicated to finding ways to sustainably release software more frequently.