If you learned how to make decisions before the fall of the Berlin Wall, you might get overwhelmed by decision making today. We used to live in a fairly black-and-white world -- East versus West, Pepsi or Coke, Miller or Bud, Democrat or Republican, ABC, NBC, or CBS.
How do you decide? If you're like a great many people in my generation, one tactic might be to create a list of pros and cons for each of the alternatives, and then compare these lists side-by-side. Ok, great! Let's use that to select a content management system. Let's see, what are our alternatives?
- WordPress
- Joomla
- Drupal
- Typo3
- ModX
- Expression Engine
- Django
- Sharepoint
- Ektron
- SquareSpace
- ... and, let's see, some 1,200 others go here.
Uh, oh. How on earth am I going to effectively compare 1200 different alternatives?
OK, I lied. There have always been attempts to narrow options down to a clear set of alternatives, and for a time, on the macro scale, that may have been true in certain domains. But in others -- books, music, movies, magazines, sports teams, where to live -- there have been far too many choices for at least the past half century.
So what's the best book? Which is the best state in the United States? Who is the best artist? Where should you go on vacation? These are not questions that can be objectively answered -- the answers depend largely on who you ask, and what criteria you've chosen to compare.
It's no longer Mac vs PC
When you have a huge number of alternatives, it's a mistake to sink a lot of time into comparing them. There's a much more effective strategy:
- Identify a set of criteria you can use to judge, specific to your current needs as well as anticipated future needs. Break these into must-haves and nice-to-haves.
- Pick a few available alternatives, and compare each to your criteria. Eliminate all that don't meet all of the must-haves.
- If you find a feature or capability not in your list, decide whether it's something you care about -- if so, add it to your list. If not, ignore it.
- If you end up with an option that provides all your nice-to-haves, you're done! If you find only two or three, now you can do a direct comparison. If you find more than a handful, add some stricter criteria to your list. If none of your original set of alternatives work, broaden your search and try a few more.
- Once you have identified one or two top contenders, try them out. Evaluate not just against your original criteria, but also against intangibles, like whether you "like" using the solution.
- If you find something that works for you and meets all your criteria, commit to it, unless and until you find a solution that is clearly a better match.
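To make that strategy concrete, here is a minimal sketch of the filtering step in Python, with made-up criteria and alternatives. The point is simply that must-haves eliminate options outright, while nice-to-haves only rank whatever survives.

```python
# Hypothetical illustration of the strategy above: must-haves eliminate,
# nice-to-haves only rank. Criteria and alternatives are placeholders.

alternatives = {
    "Option A": {"open_source", "runs_on_linux", "extensible", "chat_alerts"},
    "Option B": {"open_source", "runs_on_linux"},
    "Option C": {"runs_on_linux", "extensible", "chat_alerts"},
}

must_haves = {"open_source", "runs_on_linux"}
nice_to_haves = {"extensible", "chat_alerts", "manages_windows"}

# Eliminate anything missing a must-have.
survivors = {
    name: features
    for name, features in alternatives.items()
    if must_haves <= features
}

# Rank the survivors by how many nice-to-haves they cover.
ranked = sorted(
    survivors,
    key=lambda name: len(survivors[name] & nice_to_haves),
    reverse=True,
)

for name in ranked:
    covered = len(survivors[name] & nice_to_haves)
    print(f"{name}: {covered} nice-to-have(s)")
```

If the top one or two survivors look good, that's your short list to actually try out; if nothing survives, broaden the search rather than weakening the must-haves.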
It's all about your criteria
The big downfall of feature comparison charts is that they aren't comparing a product to your needs -- they are comparing a product to its competitors based on criteria that favor the product being sold. You absolutely have to keep a critical eye when evaluating a solution, and think about the bigger picture: how this solution fits in with everything else you do in your organization. The problem is, our brains are easily influenced by long feature lists, and tricked into thinking that quantity equals quality. Well, we think, if it does all this other stuff, it must do what I need! Not necessarily the case...
Here's an example of a recent technology decision I made for our company, selecting a Configuration Management tool to automate provisioning our production and development environments. Some of my criteria might surprise you...
Decision: Select a configuration management (CM) platform
Goals:
- Automate the provisioning and deployment of the following kinds of computers: developer and staff workstations, staging and development servers, and production servers.
- Facilitate password updates, account removals, software updates, and distribution of scripts
Must-have criteria:
- Open Source. As an open source shop, solutions we deploy must be open source. One key business reason for this is to eliminate vendor risk -- we don't want to invest in a solution that might get priced out of our budget range or taken away from us.
- Capable of running on a Linux server. We're Linux junkies here, and don't want to have to manage another operating system just for our configuration management.
- Capable of managing Ubuntu Linux hosts. The vast majority of our servers and desktops run Ubuntu.
- Capable of managing other Linux distributions. We do manage CentOS, Gentoo, and SuSE servers in addition to Ubuntu.
- Can detect/correct changes in the underlying configuration of an individual system.
- <skipping a bunch of technical requirements>.
- Can be extended with custom functionality. You never know how we might want to use it in the future.
- Stable, production-quality, in widespread use. We don't have time to be beta testers of this.
- Low or no ongoing cost.
Nice-to-haves:
- Can manage settings in a default + overrides way, allowing us to keep our configurations as simple as possible.
- Can report updates to an XMPP-based chat room. We use chat internally, and it's a great way to monitor what's going on.
- Can trigger actions via XMPP/Chat.
- Can manage a Windows server. We very rarely need to do that.
- Uses a LAMP-based technology stack (preferably PHP, Python, Perl, or Ruby), and not Java or .NET. This is because of our in-depth server administration experience in this stack.
- Can provide a list of target machines with a particular configuration, which can be synchronized with an application database.
- Has a robust and active developer community.
- Easy to get started, and grow over time.
- Developer community is helpful and welcoming.
The Short List
From the start, these are the alternatives that were on my radar:
- Puppet
- Chef
- CFEngine
- Salt
... and there are several dozen others, mostly proprietary. These are basically the options I had heard of, and had some sense would meet our needs. How did I hear of them? Aside from miscellaneous references here and there, the biggest source for me was a podcast called FLOSS Weekly, on the TWiT network, where each of these four options has had a full episode. Listening to those episodes gave me a feel for each of the projects, much like reading a solid review.
How do they stack up?
Puppet is a really strong contender. Huge developer community, large enough to have its own conference. Lots of sample configurations out there, and success stories.
Chef is slightly newer, and has a similarly huge community. It sounds like there are some differences in philosophy: with Chef you write recipes for your configurations that script out what gets done and in what order, whereas with Puppet you just declare what depends on what and it figures out the sequence for you. Both are mostly written in Ruby.
CFEngine is older still, some 20 years old, and claims to have a more "scientific" approach to configuration management. It was originally written in Bash (a scripting language) but now much of it is in C.
Salt is the newcomer of the bunch, but has made very impressive inroads in the two short years it has been around. It is written in Python and leverages ZeroMQ for very fast, very scalable management of thousands of hosts. It started as a remote execution system, to run commands simultaneously on all of your targets, but has added "state management" to bring each machine to a specified state in a way that seems very similar to CFEngine.
All four of these options met the "Must have" criteria without breaking a sweat.
The Decision: Salt
Why did we choose Salt? It boiled down to a bunch of less concrete, more touchy-feely types of criteria, but two things clinched it:
- Extremely easy to get started.
- Very fast to run a command on demand (remove a particular user account).
Getting Salt installed on a new host is pretty much a four-step operation:
- Run a one-line install command.
- Edit the configuration file to point at the master.
- Restart the salt "minion" service.
- Accept the key on the master.
Setting up the master added only a couple more steps: installing the master software (a single package) and creating a base "top" file for the states.
Inside 15 minutes, I had the Salt master set up and a couple of "minions" already talking to it. Inside another 15 minutes, I had some 20 of our computers talking to it. And with a single command, I was able to revoke the user account of an employee who no longer works with us, on every computer we control.
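That one-off revocation is the kind of thing Salt's remote execution handles directly. As a rough sketch (the username and the catch-all target are made up), the same operation could also be driven from the master through Salt's Python client API rather than the command line:

```python
# Sketch only: remove a departed employee's account on every connected minion,
# using Salt's LocalClient from the master.
# CLI equivalent (hypothetical username): salt '*' user.delete departed_employee
import salt.client

local = salt.client.LocalClient()

# user.delete is Salt's execution module function for removing a local account.
results = local.cmd("*", "user.delete", ["departed_employee"])

for minion, deleted in sorted(results.items()):
    print(minion, "account removed" if deleted else "nothing to remove")
```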
That ease of getting set up, compared to the other solutions, made a big difference. But that would not have been enough if Salt weren't capable of doing the job. And that's where the other intangibles come in:
- Python. While we mainly use PHP here, Python is our next most common language, and we know it better than Ruby.
- ZeroMQ. I've been playing around with queuing models and event-driven programming, and have come to really like this pattern, which is the underlying architecture that allows Salt to scale to huge numbers.
- One of the Salt modules allows you to use Puppet configuration files as modules, so we could feasibly leverage existing configurations for Puppet using Salt.
- The base configuration files are written in YAML, a simple text-based format we've already used with some Drupal configuration tools. It's also easy to replace the YAML configurations with Python programs that generate configurations on the fly based on much more complex criteria (a short sketch follows this list).
- There are documented examples of managing Salt from XMPP, and the path to hooking it up to our project management tool looks very straightforward, opening up a lot of opportunities for further automation.
- The business model behind SaltStack.com is devoted to keeping the code open, and many core contributors are not employed by the core company. This suggests a thriving ecosystem much like Drupal, with the founder working at a marquee company but encouraging other companies to provide services. (CFEngine seems just as good in this regard).
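As an example of the YAML-versus-Python point in the list above, Salt lets an individual state file opt into a different renderer. Here is a minimal sketch using the `#!py` renderer, with made-up account names; the `run()` function returns the same kind of data structure a plain YAML state file would.

```python
#!py
# Hypothetical Salt state written in Python instead of static YAML.
# The returned dictionary maps state IDs to state declarations, just as
# a YAML .sls file would; the account list here is made up, but could
# come from a database query or any other dynamic source.

def run():
    states = {}
    for account in ["alice", "bob"]:
        states["user_" + account] = {
            "user.present": [
                {"name": account},
                {"shell": "/bin/bash"},
            ]
        }
    return states
```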
We've been using Salt for over five months now, and it's been a huge improvement to our environment. As we replace servers that are reaching end-of-life, we're building up the new configurations in Salt, which greatly helps our disaster recovery plans: we can now provision a new web server on pretty much any cloud provider in around 20 minutes, with all our tools ready to go, the web server and database at a good starting point for tuning, our provisioning scripts ready to add a site, and statistics and daily/weekly jobs pre-scheduled.
Decision verified, and made.
Almost more important than what decision you make is committing to the choice, going in full bore. I remain convinced we could have done well with any of these options. We happened to choose one for our own pretty arbitrary reasons, though the low barrier to entry was perhaps the biggest factor. Right after making the decision, we found lots of evidence that we had made a good one, so pretty quickly it became our overwhelmingly entrenched CM system of choice. Now we're quickly coming up to speed with how to make this tool do the more sophisticated things we would like it to do, and we see no good reason to switch to anything else.
At this point, as a configuration management user, Salt is what we use and recommend. That's a statement very much like saying Seattle is where we live and work, and we have no plans to move elsewhere. While there are plenty of other great cities in the world, Seattle offers everything we need and most of what we want to be able to do.
When you're asking a technology professional for a recommendation, you're going to hear about the systems they know and inhabit. That doesn't mean there aren't other viable options out there -- but if you ask a New Yorker the best way to bike to Fremont, you might well end up slogging up Queen Anne Hill. And while you can certainly make some broad decisions about where to live based on whether you want to be in a big expensive city or a cheap small one, the important thing when picking a technology system is to get the basic criteria right and just move in. If you really don't like it, you can always move later. But that's not a decision to be made lightly.
It's been a hectic few months at our new digs in Pioneer Square, and we've gotten settled in nicely. We ramped up quite a bit for a couple of big rush projects, and as we near the end, we've got quite a bit more capacity for new work. Since moving, we have 5 new people on board: Christo, Erin, Kristina, Luke, and Steve.
Not only have we grown, but we've also invested a lot of time setting up processes to improve our consistency. We've always had some really great projects going, while a few projects suffered from not getting the same level of attention. We've changed how we work, adding a horizontal layer of planning and another of testing/quality assurance, so that no developer handles a project in isolation from the rest of the team.
As our quality and consistency increase, so do our costs. We are planning to raise our rates in the next few months. However, since we have some availability right now, we're keeping our rates where they are for the next month or two. So if you have a web project that needs to get done, now is a great time to get on our calendar pretty much immediately, and get your work done with higher quality than we've ever delivered, at a lower cost than we'll offer in the future! Give us a call at 206-577-0540 or drop us a line to get started today!