One thing I’ve learned from writing RakNet, and am trying to carry over to RakEngine, is to follow good systems design, which boils down to two rules:
1. Preserve generality
2. Preserve relevant information
Both can be summarized in one sentence: don’t make decisions that have exceptions, whether those decisions concern processing or information hiding. Almost every time someone says “this code is all over the place,” it is because this rule was violated.
An example of this is a class that calculates and stores AI paths. Let’s say you decide to limit path updates to one every 500 milliseconds. During later testing you find that the AI doesn’t respond fast enough when it gets hit, and in that particular scenario you need to update the path right away. Now you have an exception to your 500-millisecond rule, which means it was a mistake to make that rule in the first place. Except that when the AI gets hit repeatedly you don’t want to recalculate the path every time, so you add a special flag to disable the immediate update. Except for a third case, when a path blocker appears. And so forth. Now your code, which simply finds AI paths, has special flags for things it shouldn’t know about (blockers and getting hit).
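To make that concrete, here is a rough interface sketch of how the class tends to drift, with the caveat that all class and method names here are hypothetical and not from RakEngine:

// What the path class becomes once the exceptions pile up: it now
// knows about hits and blockers, which are not its concern.
// All names are hypothetical, for illustration only.
class PathFinderWithExceptions
{
public:
    void Update(float deltaMs);          // original rule: recalculate at most every 500 ms
    void ForceImmediateUpdate();         // exception 1: the AI was just hit
    void SetIgnoreForcedUpdates(bool b); // exception 2: hit too often, suppress exception 1
    void NotifyBlockerChanged();         // exception 3: a path blocker appeared
};

// A more general alternative: the path finder only finds paths.
// Deciding when to recalculate belongs to the caller, so no
// game-specific flags leak into this class.
class PathFinder
{
public:
    void Recalculate(int startNode, int goalNode);
    bool HasPath() const;
};

The second version has fewer features, but every caller can build its own update policy on top of it without ever touching the path code.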
It’s much harder to design systems that maintain generality than systems that are specific, and you usually end up with fewer features. One way to tackle this is to define primitives, where a primitive is the minimum unit of functionality and information that anyone could ever care about, and then build systems composed of those primitives. A good example of a primitive is the sin() function. A sin function is implemented as an expansion series, so there is internal state involved, such as the nth term and how far the series has been expanded. But you don’t care about that, so it isn’t exposed, and the sin function works well without people having to rewrite it. Suppose instead the sin function calculated both the sine and the cosine at the same time, cheaper than calculating both separately but more expensive than calculating either one alone. That would be a good thing, except in the cases where you don’t care about the cosine. It would no longer be a primitive, which is why that isn’t done.
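A quick sketch of that contrast (SineAndCosine is a made-up function with a stub body, shown only to make the point):

#include <cmath>

// A primitive: computes exactly one thing, so callers pay only for
// what they asked for.
double Sine(double x)
{
    return std::sin(x);
}

// Not a primitive: imagine this shared the series expansion internally,
// making it cheaper than two separate calls but more expensive than one.
// Every caller who only wants the sine now pays for the cosine too.
// (Hypothetical function; the body here is just a stub.)
void SineAndCosine(double x, double *sinOut, double *cosOut)
{
    *sinOut = std::sin(x);
    *cosOut = std::cos(x);
}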
Sometimes designers think something is a primitive when it is not. Bytes, for example. In network code I deal with bits all the time; the bit is the actual minimum unit of computation. Because of the byte abstraction I had to write a bitstream class with a huge set of complicated functions just to read and write bits. Defining the byte as the minimum memory unit was a bad abstraction. A better one would have been to make the bit a native type and build up from it by composition, with a bit8 as another type, and so on.
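As a sketch of the kind of machinery the byte abstraction forces you to write (a toy illustration, not RakNet’s actual BitStream interface):

#include <cstddef>
#include <cstdint>
#include <vector>

// Toy bit-level writer. Something like this has to exist precisely
// because the language only exposes bytes. All names are made up.
class BitWriter
{
public:
    // Write the low 'bitCount' bits of 'value', most significant bit first.
    void WriteBits(uint32_t value, unsigned bitCount)
    {
        for (unsigned i = bitCount; i-- > 0; )
            WriteBit((value >> i) & 1u);
    }

    void WriteBit(bool bit)
    {
        if (bitsUsed % 8 == 0)
            data.push_back(0); // start a new byte
        if (bit)
            data.back() |= uint8_t(1u << (7 - bitsUsed % 8));
        ++bitsUsed;
    }

    const std::vector<uint8_t> &Bytes() const { return data; }
    std::size_t BitsWritten() const { return bitsUsed; }

private:
    std::vector<uint8_t> data;
    std::size_t bitsUsed = 0;
};

If the bit were a native type, packing a boolean into one bit or a value into five bits would just be a declaration instead of a class like this.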
The native C/C++ types are also bad abstractions. They hide the underlying size behind whatever form is most efficient for the compiler: an int is what you are supposed to use for numbers, a char for characters, and so on. But in practice, a great deal of the time you don’t care about that; you care about how many bytes are used. So you end up with almost every library doing something like
typedef unsigned char UCHAR;
And when you use another library, you can’t just use UCHAR. You have to look up what type that actually was and match it to your own typedef. So people have proposed doing something like
typedef unsigned char u8;
typedef signed int s32;
And so forth. This is good, because these types are closer to primitives than int or char. You are undoing bad design work that shouldn’t have been needed in the first place.
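For what it’s worth, the standard headers now provide exactly these primitives; a minimal example using <cstdint> (the variable names are just for illustration):

#include <cstdint>

// C99's <stdint.h> and C++11's <cstdint> standardize the fixed-width
// integer primitives, so each library no longer needs its own typedefs.
uint8_t smallValue = 255;     // exactly 8 bits, unsigned
int32_t bigValue   = -100000; // exactly 32 bits, signed

// Projects that want shorter names can alias the standard types
// instead of raw char/int:
typedef uint8_t u8;
typedef int32_t s32;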
So a good rule of thumb when designing systems is to ask, “Is there any case where someone would want to process this differently than I just wrote it?” and “Is there any case where someone would want to see the data I am marking protected or private?” If the answer is yes, think carefully, because you will end up writing exceptions, and each exception multiplies the complexity of your system. It might be better to drop to a lower level of granularity and compose the system instead.