
I don't doubt that this could accurately reflect your experience in the industry, but it is not a universal truth. I've worked on sustaining engineering of systems that were years old, the stewardship of which involved refactoring only, and which are still in production today. There are lots of systems that are more or less right in version 1.


> There are lots of systems that are more or less right in version 1.

While true, in my experience this usually happens when at least one - if not all three - of these conditions hold:

1. The problem domain is relatively static, unmoving, and self contained

2. The system in question is relatively small

3. The developers in question have written substantially similar systems before, grew the system locally first, or used 0.x version numbers first (in any of these cases, is it really "version 1" per se?)

...game development is admittedly weak in all three categories, and high on time pressure. Oh sure, there's some solid code that will probably last decades in many codebases, but even the best programmers will occasionally make a system that, while good in isolation, needs reworking when combined with other systems - to say nothing of the content and rendering pipelines, which vary wildly by decade.


4. A system that never needs to scale much beyond its original projected load.


First read as "original project lead", but I think that could apply too.


That's certainly true in my experience. I've seen some great, readable, codebases written by solo developers that had lots of previous experience writing code in larger teams.

Caveat: it's never something that's too large or complex, of course.


0. The requirements were known in advance of starting the design.


> the stewardship of which involved refactoring only

I've known several such systems, and the reason they stuck with the original architecture is they wrote it in C, and it's very hard to iterate architecture in C programs. The programmers will just keep bashing the C code so it works well enough.

Yes, I know this is a provocative statement.

The reason behind it is that C's ability to encapsulate behavior is just not very good. Even simple things, like "is it an array or a linked list" leak out all over the place.

Even simpler, changing "is it a value or a reference type" means swapping "." and "->" everywhere. C++ half-fixed it by introducing reference types; D fixes it all the way by noting that "." can be unambiguously used for both operations.


To some degree that's a matter of luck. Sometimes all your assumptions are correct from the start and the requirements don't change much. But often a lot of things change over time until the first architecture doesn't fit anymore. I would agree there are better and worse architects, but luck definitely plays a role.


Yeah, it is always important in such discussions to remember how vastly different development can be. Developing embedded software for an appliance has vastly different constraints compared to a web app startup that pivots every three months.



