
2013-12-18

Engineers, Hours, Perks, and Pride

This started as a Google+ post about an article on getting top engineering talent and got way too long, so I'm posting here instead.

I wholeheartedly agree that 18-hour days are just not sustainable. It might work for a brand-new startup cranking out an initial release, understaffed and desperate to be first to market. But at that stage, you can expect a small team to have the kind of passion and dedication it takes to put in those hours and give up their lives to build something new.

Once you've built it, though, the hours become an issue, and the playpen becomes a nuisance. You can't expect people to work 18-hour days forever, or even 12-hour days. People far smarter than I have posited that the most productive time an intellectual worker can put in on a regular basis is 4 to 6 hours per day; after that, productivity and effectiveness plummet, and it only gets worse the longer it goes on.

Foosball isn't a magical sigil protecting engineers from burn-out. Paintball with your coworkers isn't a substitute for drinks with your friends or a night in with your family. An in-house chef sounds great on paper, until you realize that the only reason they'd need to provide breakfast and dinner is if you're expected to be there basically every waking moment of your day.

Burn-out isn't the only concern, either. Engineering is both an art and a science, and like any art, it requires inspiration. Inspiration, in turn, requires experience. The same experience, day-in, day-out - interacting with the same people, in the same place, doing the same things - leaves one's mind stale, devoid of inspiration. Developers get tunnel-vision, and stop bringing new ideas to the table, because they have no source for them. Thinking outside the box isn't possible if you haven't been outside the box in months.

Give your people free coffee. Give them lunch. Give them great benefits. Pay them well. Treat them with dignity and respect. Let them go home and have lives. Let them get the most out of their day, both at work and at home. You'll keep people longer, and those people will be more productive while they're there. And you'll attract more mature engineers, who are more likely to stick around rather than hopping to the next hip startup as soon as the mood strikes them.

There's a certain pride in being up until sunrise cranking out code. There's a certain macho attitude, a one-upmanship, a competition to see who worked the longest and got the least sleep and still came back the next morning. I worked from 8am until 4am yesterday, and I'm here, see how tough I am? It's the geek's equivalent to fitness nuts talking about their morning 10-mile run. The human ego balloons when given the opportunity to endure self-inflicted tortures.

But I'm inclined to prefer an engineer who takes pride in the output, not the struggle to achieve it. I want someone who is stoked that they achieved so much progress, and still left the office at four in the afternoon. Are they slackers, compared to the guy who stayed another twelve hours, glued to his desk? Not if the output is there. It's the product that matters, and if the product is good, and gets done in time, then I'd rather have the engineer that can get it done without killing themselves in the process.

"I did this really cool thing! I had to work late into the night, but caffeine is all I need to keep me going. I kept having to put in hacks and work-arounds, but the important thing is that it's done and it works. I'm a coding GOD!" Your typical young, proud engineer. They're proud of the battle, not the victory; they're proud of how difficult it was.

"I did this really cool thing! Because I had set myself up well with past code, it was a breeze. I was amazed how little time it took. I'm a coding GOD!" That's my kind of developer. That's pride I can agree with. They're proud because of how easy it was.

This might sound like an unfair comparison at first, but think about it. When you're on a 20-hour coding bender, you aren't writing your best code. You're frantically trying to write working code, because you're trying to get it done as fast as you can. Every cut corner, every hack, every workaround makes the next task take that much longer. Long hours breed technical debt, and technical debt slows development, and slow development demands longer hours. It's a vicious cycle that can be extremely difficult to escape, especially once it's been institutionalized and turned into a badge of honor.

2013-12-16

My Personal Project Workflow/Toolset

I do a lot of side projects, and my personal workflow and tooling is something that's constantly evolving. Right now, it looks something like this:

  • Prognosticator for tracking features/improvements, measuring the iceberg, and tracking progress
  • WorkFlowy for tracking non-development tasks (the most recent addition to the toolset)
  • Trac for project documentation, and theoretically for defect tracking, though I've not been good about entering defects in Trac recently; it doesn't seem worth the effort on a one-person project, though with multiple people I think it would be a must
  • Trello for cross-cutting all the above and indicating what's next/in progress/recently completed, and for quickly jotting down ideas/defects. Most of the defect tracking actually goes in here on one-man projects right now. This is a lot of duplication and the main source of waste in my current process.
  • Bitbucket for source control (I also use Atlassian's excellent SourceTree as a Git/Hg client.)

It's been working well for me; the only issues are duplication between the tools and my failure to consistently use Trac for defect tracking. What keeps me in Trello is how quick and easy it is to add items to it, and the fact that I'm using it as a catch-all - I can put a defect or an idea or a task into it in a couple of seconds; I just have to replicate it to the appropriate place later, which is the problem.

I think the issue boils down to being torn between a centralized repository for "stuff to be done" (Trello), dedicated repositories catered to each type of thing to be done (Prognosticator, Trac, and WorkFlowy), and convenience. Trello is excellent for jotting something down quickly, but it lacks the specific utility the other tools provide for their dedicated purposes.

I think what I'll end up doing is creating a "whiteboard" list in WorkFlowy, and using that instead of Trello to jot down quick notes when I don't have the time to use the individual tools; then I can copy from there to the other tools when I need to. That will allow me to cut Trello down to basically being a Kanban board.

2013-10-18

Pragmatic Prioritization

The typical release scheduling process works something like this:

  1. Stakeholders build a backlog of features they'd like to see in the product eventually.
  2. The stakeholders decide among themselves the relative priority of the features in the backlog.
  3. The development team estimates the development time for each feature.
  4. The stakeholders set a target feature list and ship date based on the priorities and estimates.

The problem here is primarily in step 2; this step tends to involve a lot of discussion bordering on arguing bordering on in-fighting. Priorities are set at best based on a sense of relative importance, at worst based on emotional attachment. Business value is a vague and nebulous consideration at most.

I propose a new way of looking at feature priorities:

  1. Stakeholders build a backlog of features they'd like to see in the product eventually.
  2. The stakeholders estimate the business value of each feature in the backlog.
  3. The development team estimates the development time for each feature.
  4. The stakeholders set a target feature list and ship date based on the projected return of each feature - i.e., the estimated business value divided by the estimated development time.

This turns a subjective assessment of relative priorities into an objective estimate of business value, which is used to determine a projected return on investment for each feature. This can then be used to objectively prioritize features and schedule releases.
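
To make the arithmetic concrete, here's a minimal sketch in Python - the features, business-value figures, and estimates below are entirely made up; only the calculation (estimated value divided by estimated development time, sorted highest first) comes from the process described above:

    # Hypothetical backlog: (feature, estimated business value, estimated dev time in days)
    backlog = [
        ("CSV export",       8000,  4),
        ("SSO integration", 20000, 25),
        ("Saved searches",   5000,  2),
    ]

    def projected_return(item):
        _name, value, dev_days = item
        return value / dev_days

    # Highest projected return first - this ordering is the release priority.
    for name, value, dev_days in sorted(backlog, key=projected_return, reverse=True):
        print(f"{name}: {value / dev_days:.0f} per dev-day")

    # Saved searches: 2500 per dev-day
    # CSV export: 2000 per dev-day
    # SSO integration: 800 per dev-day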

I've been using this workflow recently for one of my upcoming projects, and I feel like it's helped me determine feature priorities more objectively, taking a lot of the fuzziness and hand-waving out of the equation.

Shameless self-promotion: Pragmatic prioritization is a feature of my project scheduling and estimation tool, Rogue Prognosticator.

2013-06-07

Code Patterns as Microevolution

Code patterns abide by survival of the fittest, within a gene pool of the code base. Patterns reproduce through repetition, sometimes with small mutations along the way. Patterns can even mate, after a fashion, by combining them, taking elements of each to form a new whole. This is the natural evolution of source code.

The first step to taming a code base is to realize the importance of assessing fitness and taking control over what patterns are permitted or encouraged to continue to reproduce. Code reviews are your opportunity to thin the herd, to cull the weak, and allow the strong to flourish.

Team meetings, internal discussions, training sessions, and learning investments are then your opportunity to improve both the quality of new patterns and mutations that emerge, as well as the group's ability to effectively manage the evolution of your source, to correctly identify the weak and the strong, and to have a lasting impact on the overall quality of the product.

If you think about it, the "broken windows" problem could also be viewed as bad genes being allowed to perpetuate. As the bad patterns continue to reproduce, their number grows, and so does their impact on the overall gene pool of your code. Given the opportunity, you want to do everything you can to make sure that it's the good code that's continuing to live on, not the bad.

Consider a new developer joining your project. A new developer will look to existing code as an example to learn from, and as a template for their own work on the project, perpetuating the "genes" already established. That being the case, it seems imperative that you make sure those genes are good ones.

They will also bring their own ideas and perspectives to the process, establishing new patterns and mutating existing ones, bringing new blood into the gene pool. This sort of cross-breeding is tremendously helpful to the overall health of the "code population" - but only if the new blood is healthy, which is why strong hiring practices are so critical.

2013-05-29

My Take on "Collective Ownership"/"Everyone is an Architect"

I love the idea of "collective ownership" in a development project. I love the idea that in a development team, "everyone is an architect". My problem is with the cut-and-dried "Agile" definition of these concepts.

What I've been reading lately is a definition of "collective ownership" that revolves around the idea of distributing responsibility, primarily in order to lift the focus on finger-pointing and blaming. A defect isn't "your fault", it's "our fault", and "we need to fix it." That's all well and good, but distributing blame isn't exactly distributing ownership; and ignoring the source of an issue is a blatant mistake.

The latter point first: identifying the source of an issue is important. I see no need for blame, or calling people out, and certainly no point in trying to use defects as a hard metric in performance analysis. However, a development team isn't a factory; it's a group of individuals who are constantly continuing their education and honing their craft, and in that endeavor they need the help of their peers and managers to identify their weaknesses so they know what to focus on. "Finding the source of an issue" isn't about placing blame or reprimanding someone; it's about providing a learning opportunity so that a team member can improve, and the team as a whole can improve through the continuing education of each member.

In regard to distributing ownership, it's all too rare to see it discussed in a positive way. I see plenty of people writing about eliminating blame, but very few speaking of a team wherein every member looks at the entire code base and says "I wrote that." And why should they? They didn't write it alone, so they can't make that claim. For the product, they can say "I had a hand in that," surely. But it's unlikely they feel like they had a hand in the development of every component.

That brings us around to the idea that "everyone is an architect." In the Agile sense, this is generally taken to mean that every developer is given relatively free rein to architect the component they're working on at any given moment, without bowing down to The Architect for their product. I like this idea, in a sense - I'm all for every developer doing their own prototyping, their own architecture, learning their own lessons, and writing their own code. Up to a point.

There is a level of architecture that it is necessary for the entire team to agree on. This is where many teams, even Agile teams, tend to fall back on The Architect to keep track of The Big Picture and ensure that All The Pieces Fit Together. This is clearly the opposite of "everyone is an architect". So where's the middle ground?

If a project requires some level of architecture that everyone has to agree on - language, platform, database, ORM, package structure, whatever applies to a given situation - then the only way to have everyone be an architect is design by committee. Panning design by committee has become a cliche at this point, but it has its uses, and I feel this is one of them.

In order to achieve collective ownership, you must have everyone be an architect. In order for everyone to be an architect, and feel like they gave their input into The Product as a whole - or at least had the opportunity to do so - you must make architectural decisions into group discussions. People won't always agree, and that's where the project manager comes in; as a not-an-architect, they should have no bias and no vested interest in what choices are made, only that some decision is made on each issue that requires consideration. Their only job in architectural discussions is to help the group reach a consensus or, barring that, a firm decision.

This is where things too often break down. A senior developer or two, or maybe a project manager with development experience, become de facto architects. They make the calls and pass down their decrees, and quickly everyone learns that if they have an architecture question, they shouldn't try to make their own decision, they shouldn't pose it to the group in a meeting, they should ask The Guy, the architect-pro-tem. Stand-up meetings turn into a doldrum of pointless status updates, and discussion of architecture is left out entirely.

Luckily, every team member can change this. Rather than asking The Guy when a key decision comes up, ask The Group. Even better, throw together a prototype, get some research together, and bring some options with pros and cons to the next stand-up meeting. Every developer can do their part to keep the team involved in architecture, and in ownership, and to slowly shift the culture from having The Architect to having Everyone Is An Architect.

2013-05-03

Be Maxwell's Demon

Source code tends to follow the second law of thermodynamics, with some small differences. In software, as in thermodynamics, systems tend toward entropy: as you continue to develop an application, the source will increase in complexity. In software, as well as in thermodynamics, connected systems tend toward equilibrium: in development, this is known as the "broken windows" theory, and is generally considered to mean that bad code begets bad code. People often discount the fact that good code also begets good code, but this effect is often hidden by the fact that the overall system, as mentioned earlier, tends toward entropy. That means that the effect of broken windows is magnified, and the effect of good examples is diminished.

In thermodynamics, Maxwell's demon is impossible to build - it is purely a thought experiment. However, in software development, we're in luck: any developer can play the demon, and should, at every available opportunity.

Maxwell's demon stands between two connected systems, defeating the second law of thermodynamics by selectively allowing less-energetic particles through only in one direction, and more-energetic particles through only in the other direction, causing the two systems to tend toward opposite ends of the spectrum, rather than naturally tending toward entropy.

By doing peer reviews, you're doing exactly that; you're reducing the natural entropy in the system and preventing it from reaching its natural equilibrium by only letting the good code through, and keeping the bad code out. Over time, rather than tending toward a system where all code is average, you tend toward a system where all code is at the lowest end of the entropic spectrum.

Refactoring serves a similar, but more active role; rather than simply "only letting the good code through", you're actively seeking out the worse code and bringing it to a level that makes it acceptable to the demon. In effect, you're reducing the overall entropy of the system.

If you combine these two effects, you can achieve clean, efficient, effective source. If your review process only allows code through that is as good or better than the average, and your refactoring process is constantly improving the average, then your final code will, over time, tend toward excellence.

Without a demon, any project will be on a continuous slide toward greater and greater entropy. If you're on a development project, and it doesn't have a demon, it needs one. Why not you?

2013-04-24

Real Sprints


Agile methodologies talk about "sprints" - workloads organized into one to four week blocks. You schedule tasks for each sprint, you endeavour to complete all of it by the end of the sprint, then you look back and see how close your expectations (schedule) were to reality (what actually got done).

Wait, wait, back up. When I think of a sprint, I think short and fast. That's what sprinting means. You can't sprint for a month straight; you'll die. That's a marathon, not a sprint.

There are numerous coding competitions out there. Generally, you get around 48 hours, give or take, to build an entire, working, functional game or application. Think about that. You get two days to build a complete piece of software from scratch. Now that's what I call sprinting.

Of course, a 48 hour push is a lot to ask for on a regular basis; sure, your application isn't in a competition, this is the real world, and you need to get real work done on an ongoing basis. You can't expect your developers to camp out in sleeping bags under their desks. But that doesn't mean turning a sprint into a marathon.

The key is instilling urgency while moderating burnout. This is entirely achievable, and can even make development more fun and engaging for the whole team. Since the term sprint has already been thoroughly corrupted, I'll use the term "dash". Consider this weekly schedule:
  • Monday: Demo last week's accomplishments for stakeholders, and plan this week's dash. This is a good day to schedule any unavoidable meetings.
  • Tuesday and Wednesday: your 48 hours to get it done and working. These are crunch days, and they will probably be pretty exhausting. These don't need to be 18-hour days, but 10 hours wouldn't be unreasonable. Let people get in the zone and stay there as long as they can.
  • Thursday: Refactoring and peer reviews. After a run, athletes don't just take a seat and rest; they slow to a jog, then a walk. They stretch. They cool off slowly. Developers, as mental athletes, should do the same.
  • Friday: Testing. QA goes through the application with a fine-toothed comb. The developers are browsing the web, playing games, reading books, propping their feet up, and generally being lazy bums, with one exception: they're available at a moment's notice if a QA has any questions or finds any issues. Friday is a good day for your development book club to meet.
By the end of the week, your application should be ready again for Monday's demo, and by Tuesday, everyone should be well-rested and ready for the next dash.

Ouch. That's a tough sell. The developers are only going to spend two days a week implementing features? And one basically slacking off? Balderdash! Poppycock!

Think about it, though. Developers aren't factory workers; they can't churn out X lines of code per hour, 40 hours per week. That's not how it works. A really talented developer might achieve 5 or 6 truly productive hours per day, but at that rate, they'll rapidly burn out. 4 hours a day might be sustainable for longer. Now, mind you, in those four hours a day, they'll get more done, better, with fewer defects, than an army of incompetent developers could do in a whole week. But the point stands: you can't run your brain at maximum capacity eight hours straight, five days a week. You just can't - not for long, anyway.

The solution is to plan to push yourself, and to plan to relax, and to keep the cycle going to maximize the effectiveness of those productive hours. It's also crucial not to discount refactoring as unproductive; it sets up the following week's work and reduces the effort required for development over the rest of the life of the application. It's a critical investment in the future.

Spending a third of your development time on refactoring may seem excessive, and if it were that simple, I'd agree. But if you really push yourself for two days, you can get a lot done - and write a lot of code to be reviewed and refactored. In one day of refactoring, you can learn a lot, get important work done, and still start to cool off from the big dash.

That lazy Friday really lets you relax, improve your craft, and get your product ready for next week, when you get to do it all over again.

2013-04-22

The Development Stream


I was reading today about GitHub's use of chat bots to handle releases and continuous integration, and I think this is absolutely brilliant. In fact, it occurs to me that using a chat bot, or a set of chat bots, can provide an extremely effective workflow for any continuous-deployment project. Of course, it doesn't necessarily have to be a chat room with chat bots; it can be any sort of stream that can be updated in real-time - it could be a Twitter feed, or a web page, or anything. The sort of setup I envision would work something like this:

Everyone on the engineering team - developers, testers, managers, the whole lot - stays signed in to the stream as long as they're "on duty". Every time code is committed to a global branch - that is, a general-use preproduction or production branch - it shows up in the stream. Then the automated integration tests run, and the results are output to the stream. The commit is deployed to the appropriate environment, and the deployment status is output to the stream. Any issues that occur after deployment are output to the stream as well, for immediate investigation; this includes logged errors, crashes, alerts, assertion failures, and so on. Any time a QA opens a defect against a branch, the ticket summary is output to the stream. The stream history (if it's not already compiled from some set of persistent-storage sources) should be logged and archived for a period of time, maybe 7 to 30 days.

It's very important that the stream be as sparse as possible: no full stack traces with error messages, no full commit messages, just enough information to keep developers informed of what they will need to look into further elsewhere. This sort of live, real-time information stream is crucial in the success of any continuous-deployment environment, in order to keep the whole team abreast of any issues that might be introduced into production, along with when and how they were introduced.
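
To make "sparse" concrete, here's a rough sketch of the kind of one-line formatting I have in mind - the event shapes and field names are invented for illustration, and the real thing would be fed by whatever hooks your VCS, CI, and deployment tools provide:

    # Collapse each event into a single short stream line: enough context to
    # know where to look next, nothing more (no stack traces, no full messages).
    def to_stream_line(event):
        kind = event["kind"]
        if kind == "commit":
            return f"commit {event['sha'][:7]} on {event['branch']} by {event['author']}: {event['summary'][:60]}"
        if kind == "ci":
            return f"CI {event['status'].upper()} for {event['sha'][:7]} ({event['failed']} tests failed)"
        if kind == "deploy":
            return f"deploy {event['sha'][:7]} -> {event['environment']}: {event['status']}"
        if kind == "defect":
            return f"defect #{event['ticket']} opened against {event['branch']}: {event['summary'][:60]}"
        return f"{kind}: {event}"

    print(to_stream_line({"kind": "ci", "status": "failed", "sha": "9f2c1ab44", "failed": 3}))
    # CI FAILED for 9f2c1ab (3 tests failed)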

Now, what I've described is a read-only stream: you can't do anything with it. GitHub's system of using an IRC bot allows them to issue commands to the bot to trigger deployments and the like. That could be part of the stream, or it could be part of another tool; as long as the deployment, and its results, are output to the shared stream for all to see. This is part of having the operational awareness necessary to quickly identify and fix issues, and to maintain maximum uptime.

There are a lot of possible solutions for this sort of thing; Campfire looks particularly promising because of its integration with other tools for aggregating instrumentation data. If you have experience with this sort of setup, please post in the comments, I'd love to hear about it!

2013-04-17

Truly Agile Software Development


Truly agile software development has to, by nature, allow for experimentation. In order to quickly assess the best option among a number of choices, the most effective method is empirical evidence: build a proof of concept for each option and use the experience of creating the proof, as well as the results, to determine which option is the best for the given situation.

While unit tests are valuable for regression testing, a test harness that supports progression testing is at least as useful. Agile development methodologies tend to focus on the idea of iterating continuously toward a goal along a known path; but what happens when there's a fork in the road? Is it up to the architect to choose a path? There's no reason to do so when you can take both roads and decide afterward which you prefer.
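
As a minimal sketch of what that can look like, the same harness can be pointed at each candidate in turn - the two implementations and the correctness check below are hypothetical stand-ins for whatever options are actually under consideration:

    import time

    # Two throwaway proofs of concept for the same requirement.
    def candidate_builtin_sort(data):
        return sorted(data)

    def candidate_insertion_sort(data):
        out = []
        for x in data:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    # One harness: every candidate must pass the same correctness check,
    # and the timing gives empirical evidence for the comparison.
    def evaluate(candidate, data):
        start = time.perf_counter()
        result = candidate(data)
        elapsed = time.perf_counter() - start
        assert result == sorted(data), f"{candidate.__name__} failed the correctness check"
        return elapsed

    data = list(range(2000, 0, -1))
    for candidate in (candidate_builtin_sort, candidate_insertion_sort):
        print(f"{candidate.__name__}: {evaluate(candidate, data):.4f}s")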

Any large development project should always start with a proof of concept: a bare-bones, quick-and-dirty working implementation of the key functionality using the proposed backing technologies. It doesn't need to be pretty, or scalable, or extensible, or even maintainable. It just has to work.

Write it, demo it, document what you've learned, and then throw the code away. Then you can write the real thing.

It may seem like a waste of time and effort at first. You'll be tempted to over-engineer, you'll be tempted to refactor, you'll be tempted to keep some or all of the code. Resist the urge.

Why would you do such a thing? If you're practicing agile development, you might think your regular development is fast enough that you don't need a proof. But that's not the point; the point is to learn as much as you can about what you're proposing to do before you go all-in and build an architecture that doesn't fit and that will be a pain to refactor later.

Even if it takes you longer to build the proof, it's still worth it. For one thing, it probably took longer because of the learning curve and the mistakes made along the way, both of which can be avoided in the final version. For another, you've learned what you really need and how the architecture should work, so when you build the production version you can do it right the first time, with greater awareness of the situation.

This approach allows much greater confidence in the solutions chosen, requiring less abstraction to be built in to the application, which allows for leaner, cleaner code, and in less time. Add to that the value of building a framework that is flexible enough to allow for progression testing, and you've got the kind of flexibility that Agile is really all about.

Note: Yes, I understand that Scrum calls prototypes "spikes". I think this is rather silly - there are already terms for prototypes, namely, "prototype" or "proof of concept". I'm all for new terms for things that don't have names, but giving new names to things that already have well-known names just seems unnecessary.

2012-07-21

Simplicity, Flexibility, and Agility


Agile programming is supposed to be about flexibility in the face of changing requirements. It's supposed to be about rapid development and iteration. But all too often it ends up being like classical methodologies in many ways. Many agile methodologies drown developers in process, taking time away from development. Test-driven development is a brilliant concept, but it puts more time between planning and iteration, making it more difficult to deal with changing requirements, not easier, and increasing the burden of change and the cost of refactoring.

Every developer wants carefully, precisely defined requirements. Developers often try to handle changing requirements by developing for flexibility, but flexibility often comes at the cost of added complexity. Trying to write in endless flexibility to allow for changing requirements is very much akin to premature optimization. You end up doing a whole lot of work to make some code "better" - more flexible in this case, versus more performant in the case of optimization - when you don't yet know which code really needs it and which code doesn't.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies." --C. A. R. Hoare 
Often the best way to maintain flexibility is through simplicity. A program that meets its requirements with the simplest possible implementation is one that will be naturally flexible, maintainable, stable, and manageable. Of course, intelligent development plays a major role; making appropriate use of design patterns can do a lot to improve both flexibility and simplicity. Key design tenets like DRY, YAGNI, separation of concerns, and avoiding premature optimization (including overarchitecting for flexibility) help keep complexity down and productivity up.
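
As a contrived illustration - both versions below are hypothetical - compare a speculatively "flexible" exporter with the simplest thing that meets the actual requirement ("export the report as CSV"):

    # Speculative flexibility: a format argument, a delimiter option, a plugin
    # hook - none of it asked for, all of it extra surface area to maintain.
    class ReportExporter:
        def __init__(self, format="csv", delimiter=",", plugins=None):
            self.format = format
            self.delimiter = delimiter
            self.plugins = plugins or []

        def export(self, rows):
            if self.format != "csv":
                raise NotImplementedError("only csv is supported anyway")
            return "\n".join(self.delimiter.join(map(str, row)) for row in rows)

    # The simple version: meets the requirement, and is trivial to refactor
    # if a real second format ever shows up.
    def export_csv(rows):
        return "\n".join(",".join(map(str, row)) for row in rows)

    print(export_csv([("date", "total"), ("2013-07-21", 42)]))
    # date,total
    # 2013-07-21,42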

What if requirements change? What if the simplest possible solution isn't as extensible? Good news: having invested in the simplest possible solution, you've lost little in development. You haven't built anything that wasn't strictly necessary. The new solution will still aim for simplicity, and still reap the same rewards. By keeping things simple, you've kept down the cost of change.

Software development is a learning process by nature; you're building something that's never been done before, or at least building something in a way that's never been done before. Innovation is at its heart, and learning is the personal experience of innovation. That being the case, every iteration has value in what's learned, even if the code is later removed or replaced. The experience of writing it adds value to the team, and the team defines the value of the end product.

Many readers may think of targeting simplicity as a given, but it truly isn't; while simplicity is often a goal, all too frequently it takes a back seat to other concerns, or is abandoned altogether because simplicity becomes far more difficult to achieve with many of the popular frameworks and libraries available today. Frameworks have to aim for maximum flexibility in order to be successful; in order to be general-purpose, they can't make many assumptions, and they have to account for a whole host of different usage scenarios and edge cases. This increases the complexity of the framework, and accordingly, the complexity of any implementation using the framework. The fewer assumptions the framework can make, the more effort a developer has to put in just telling the framework what she's trying to accomplish.

I can't count how many implementations I've seen that are drastically more complex than necessary, simply because they have been forced to apply the conventions required by their chosen framework; and even following those conventions, they're still left managing arcane configuration files, and tracking down bugs becomes an epic undertaking requiring delving deep into the inner workings of the framework and libraries. All too often, the quick-start "hello world" app is far more complex with a framework than without it, and adding functionality only makes the situation more bleak.

So, what does all this add up to? Here's the bullet-point version:
  • If you're aiming for agility - the ability to adapt quickly to changing requirements - don't invest too much time on nailing down requirements. Get as much detail as you can, and start iterating. If you're planning for requirements to change, go all the way - assume that your initial requirements are wrong, and think of each iteration as an opportunity to refine the requirements. All code is wrong until proven right by user acceptance.
  • Use interface mockups (for software with a UI) or API documentation (for libraries and services) as a tool to give stakeholders a chance to revise requirements while looking at a proposed solution and thinking about using it in real-world scenarios.
  • Don't choose flexibility or modularity over simplicity. Choose a framework that won't get in your way; if there isn't one, then don't use a framework at all. Don't write what you don't need. Don't turn a piece of code into a general-purpose API just because you might need to use it again. If you need it in multiple places now, separate it out; otherwise, you can refactor it when it's appropriate.
  • Think about separation of concerns early in the game, but don't sacrifice simplicity for the sake of compartmentalization. Simple code is easier to refactor later if refactoring turns out to be necessary. Overarchitecting is the same sin as premature optimization; it's just wearing a nicer suit.
  • The simplest solution isn't always the easiest. The simplest solution often requires a lot of thought and little code. Don't be a code mason, laying layer after layer of brick after brick; be a code poet, making every line count. If a change could be implemented by increasing the complexity of existing code, or by refactoring the existing code to maintain simplicity, always take the latter route; you'll end up spending the same amount of time either way, but will reap far more benefits by maintaining simplicity as a priority.
  • Simplicity carries a great many implicit benefits. Simpler code is very often faster (and easier to optimize), more stable (and easier to debug), cleaner (and easier to refactor), and clearer to read and comprehend. This reduces development, operational, and support costs across the board.
  • Simplicity doesn't just mean "less code", it means better code. SLOC counts don't correlate directly to complexity.
  • Don't reinvent the wheel - unless you need to. All wheels aren't created equal; there's a reason cars don't use the same wheels as bicycles.
What are your experiences? What ways have you found to keep hold of simplicity in the face of other pressures? Feedback is welcome in the comments!

2007-03-14

Urgent Apps - Mac Development Kit

After yesterday's post, I got to thinking. I had pulled some items out of that list because they were highly developer-centric applications. However, that does mean that some really top-notch programs didn't make the list, and I think that's unfair. There are some apps that I really can't live without when it comes to development work.

Before we begin, I should point out that my particular development tasks typically include the administration of a MySQL database, editing PHP, Java, HTML, CSS, and JavaScript files, and operating revision control; the tools laid out here are centered around those tasks.

Now, without further ado, the list:

jEdit

jEdit calls itself "the programmer's text editor", but that's selling it short. jEdit is, to put it lightly, a god among executables. I've never seen another program come close to its level of flexibility, modularity, and customizability. The sacrifice for all this goodness is that it's a bit of a RAM-hog, particularly running under the MacOS JRE (I highly recommend updating to the 1.6 JRE available on Apple's website, and completely switching over to 1.6; it provides substantial performance and footprint improvements.)

NetBeans
NetBeans is, of course, the Java IDE - unless you're one of those people who think Eclipse is the Java IDE, but I'm not.

SmartSVN (payware)
The best SVN client I've seen for the Mac. Unfortunately, there aren't many good free options.

MySQL GUI Tools
The real deal, straight from the source.

Platypus
This nifty little app lets you take any shell or other script file and turn it into a Mac application package.

ArgoUML
Java-based UML designer.

Subversion
Revision control extraordinaire. Any box I do development on has a local Subversion server for anything I happen to want to keep a history for.

Trac
I've recently fallen in love with Trac, which is why it made the list. It's not exactly an application - it's a web application. However, it can be installed on a Mac, so it made the cut, and I do love it dearly. Go check out their page; the Trac site runs on Trac.


Anything I'm missing? Post in the comments!!

2007-02-23

Development Tactics

I recently set up an account with hosted-projects.com, because I wanted a Subversion repository more accessible & stable than the one running on my home desktop. I shopped around for a while, and decided on this place - it's a small project, and a starter account is only $7/month, so I figure, what the heck.

My account was set up within a few minutes, even though I ordered after business hours - I'm guessing they've got a pretty good automation system going. I get fast, secure access for unlimited users to unlimited projects in 100M of space, plus a free Trac - not a bad deal. As far as reliability and support, well - only time will tell.

The host is all well and good, but what I really wanted to talk about is Trac. I had looked Trac up some time ago, and decided to take a pass on it - it just wasn't mature enough at the time, and didn't have most of the features I was looking for.

Now, however - after some time, and a few bug tracking schemes - I find myself with a free Trac page sitting around, and I figure, what the hey, I'll give it a shot. And you know what? It still doesn't have some of the features I was looking for. But it works so well, it doesn't matter.

The whole thing runs on a Wiki engine. This Wiki engine identifies all CamelCase as wiki links, which I find a bit annoying, but I got used to it pretty quickly. It lets you easily link to pretty much anything, and inline, too: #123 is ticket 123, r456 is revision 456, etc. It hooks up to your Subversion repo and lets you keep an eye on changelogs and browse the repo; plus, this means if you put properly formatted notes in your commit messages (which isn't hard), you get links in the changelog, for free.
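
For example, a commit message along these lines (the ticket and revision numbers are invented) shows up in the changelog with the references rendered as links:

    Fix login redirect loop on expired sessions.

    Builds on the session cleanup in r456. See #123 for the original
    report, and wiki:SessionHandling for the design notes.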

While not quite as versatile as MediaWiki, for example, in terms of page layout and design, it's probably easier to use - and programmers tend to go for function over form anyway. It's a developer's tool. Developers probably won't spend all day perfecting page templates and macros.

The system provides for a roadmap of milestones, a list of issue tickets, the wiki, and the repository. That's it. What's the big deal? How insanely easy it is to wire them all together. With some really basic formatting, you can turn a simple list of milestones into this.

It's got some rough edges, and there are definitely some huge opportunities yet to be taken advantage of - particularly, I have yet to discover decent, proper JavaDoc support, with full wiki integration. I may just have to learn enough Python to write a plugin for it. I'd also really like to see automatic backlinks added to all the internal links.

I know it's still version "0.10.3", but it's pretty stable so far, and everything works pretty well. I have yet to run into any bugs or bad behavior - however, you should keep in mind that this is bleeding-edge software if you're considering deploying it. Don't let that scare you off though: if you don't mind the under-heavy-development label, you really should give this little application a try and see what you think. At the very least, check out Trac's own website to see what it can do.