Monday 4 May 2015

Pragmatic Modern C++ Survival Guide

Compiling: Brute Force Helps

C++ has a reputation for being the slowest mainstream language to compile, leading to light sabre office duels. A certain creative boredom can set in, when people start chasing the dream of the Successor to C++: "The creation myth of Go is something like this: Rob Pike, Ken Thompson, and Robert Griesemer were waiting for a particularly long C++ build to take place when they decided to theorize a new language" (Zack Hubert). It was also a reason why I blew several years of my life on an interactive C++ interpreter.

The fact is that C++ compilers are not slow; it is the incredible amount of inlined code that the compiler must digest before it can give you the proverbial Hello World application. The six or so lines of code pull in more than 17,000 lines of include files. The culprit here is actually C++ hanging onto an outdated compile/build cycle that was already quaint in 1973.
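
For the record, the sort of program we are talking about is nothing more exciting than the classic:

 #include <iostream>
 using namespace std;

 int main() {
     cout << "Hello, World!" << endl;
     return 0;
 }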

So the best way to improve build times is to get the best system you can afford, with particular attention to fast disk I/O, such as keeping the compiler with its headers and libraries on an SSD. And remember that you will pay for heavily optimized builds; it is better to test unoptimized builds with debug information, since then your crash traces will make sense in a debugger. But mainly, the less included code a file needs to bring in, the better.

Embrace the Future

As the quote goes, the future is already here, it's just badly distributed. The modern standard (2011) is only now becoming widely available on all platforms. On Windows, Visual Studio 2013 is pretty much there, and the Community Edition is available free with fairly liberal licensing; GCC 4.9 is available from the excellent TDM-GCC project. On Linux, Ubuntu 14.04 has GCC 4.8, and Clang 3.4 is easily available from the repositories; on OS X, Clang is now standard.

I mention these choices because it's good to get a second opinion if you're struggling with weird errors; in particular, Clang gives better diagnostics than GCC, even if you are still using GCC as your 'production' compiler. C++ is a standardized language with a standard library, and you can use that fact.

The new language is much more fun. The standard way to iterate over a container was:

 for (vector<int>::iterator it = vi.begin(); it != vi.end(); ++it) {
     int ival = *it;
     ....
 }

Now there is auto, much relief.

 for (auto it = vi.begin(); it != vi.end(); ++it) {
     int ival = *it;
     ....
 }

And finally, range-based for-loops.

 for (auto ival: vi) {
     ....
 }

You can even do this, thanks to std::initializer_list:

 for (auto i : {10,20,30,40}) {
   cout << i << endl;
 }

And the cost? Totally nada - the short form is completely equivalent to the first verbose form, and will work with any type that defines begin and end.
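
To see that last point in action, here is a minimal sketch: a made-up Range type (the name and members are purely for illustration) with nothing but begin and end methods, and it drops straight into a range-based for.

 #include <iostream>
 using namespace std;

 // any type whose begin() and end() return something that can be
 // compared, incremented and dereferenced will work in a range-for
 struct Range {
     int from, to;
     struct iter {
         int i;
         int operator*() const { return i; }
         iter& operator++() { ++i; return *this; }
         bool operator!=(const iter& other) const { return i != other.i; }
     };
     iter begin() const { return iter{from}; }
     iter end() const { return iter{to}; }
 };

 int main() {
     for (auto i : Range{0, 5})   // prints 0 1 2 3 4
         cout << i << ' ';
     cout << endl;
 }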

This new form of for loop is interesting, because it's the first piece of basic C++ syntax to be defined in terms of the underlying standard library. A big step, because the language does not provide batteries, just the means to create batteries: std::string is not hard-wired into the language, and neither is std::map. The Go language decided to implement such things as primitives, just as Java allows java.lang.String to participate in operator overloading, which is otherwise not allowed. Both of these languages are reactions to C++ that achieve simplicity at the cost of generality. The unusually awkward error messages generated by C++ stem mainly from this not-baked-in philosophy, plus rampant function overloading and a possibly excessive reliance on generic programming.

auto represents a very cool idea: local type inference. It is interesting that the new languages of this century lean on it heavily, so that (for instance) idiomatic Go code has relatively few explicit type annotations. Meanwhile, the dynamic language people are adding type annotations, not necessarily for performance but rather for maintenance and better tooling. auto comes with a big gotcha: since C++ is a value-based language, auto v = expression makes v the value type and does a copy; if you want a reference to that value, say auto& v = expression. (This burned me more than once, so I feel I ought to pass on the warning.)
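
A small sketch of the gotcha:

 vector<int> vi {10, 20, 30};

 auto v = vi;       // v is a copy; modifying it leaves vi alone
 v.push_back(40);   // vi still has three elements

 auto& vr = vi;     // vr refers to vi itself
 vr.push_back(40);  // vi now has four elements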

Move semantics are a powerful new idea. The concrete result is that you no longer need to worry about a function returning a std::vector, since the vector constructed inside the function will be moved into the returned vector; basically its data (a pointer to some values) is moved to the new vector and zeroed out in the old one. But the old advice to pass const T& if you just want to access a large object still stands, since C++ still copies by value by default.
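
A sketch of both points (the function names are invented for illustration):

 #include <vector>
 using namespace std;

 // cheap to return by value: the vector's data is moved (or the move is
 // elided entirely), not copied element by element
 vector<int> make_squares(int n) {
     vector<int> v;
     for (int i = 0; i < n; i++)
         v.push_back(i*i);
     return v;
 }

 // but to merely read a large object, pass by const reference as before
 int sum(const vector<int>& v) {
     int total = 0;
     for (auto x : v)
         total += x;
     return total;
 }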

Parts of a system may want to keep objects from another part. This can be tricky, because we don't have a garbage collector cleaning up when the last 'owner' of an object is gone. Say a GUI object needs to keep a persistent store object as part of its state; what must it do when it dies? It can ignore it, and assume that the persistent store people will free it one day. But how do the persistent store people know that their object isn't still being used? More machinery needs to be added, and so forth. (No wonder big C++ systems are like Heath Robinson/Rube Goldberg cartoons.) Better to use a well-proven idea that has a standard implementation: shared ownership through smart pointers.

 class B {
     shared_ptr<A> a;

 public:
     B(shared_ptr<A> a) : a(a) { }

     int method() {
         // this is the underlying pointer
         cout << a.get() << endl;

         // otherwise behaves just like a regular pointer
         return a->method();
     }
 };

 void test() {
     shared_ptr<A> pa (new A());
     B b1(pa);   //b1 shares pa
     B b2(pa);   //b2 shares pa

     // exactly the same operation, since B's share an A
     b1.method();
     b2.method();

 } // b1 and b2 are now dead. No one left sharing pa, so it is deleted.

This is easier to show than explain: it is a way for objects to share an object, such that when the last sharer goes, the shared object can be safely deleted. It works because the default destructor for B destructs its members, releasing the shared pointer. Smart pointers can be put into containers; when the container is destroyed, its shared pointers are again released.

A note on names: readability can be helped by typedefs:
 typedef std::vector<std::shared_ptr<A>> SharedVectorA;
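
which can then be used like any other vector; a quick sketch, using make_shared to build the As:

 SharedVectorA objects;
 objects.push_back(make_shared<A>());  // make_shared is the tidy way to create a shared A
 objects.push_back(make_shared<A>());
 // when objects is destroyed, each shared_ptr it holds is released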

Here's the new generalized typedef in action:

 template <class T>
 using SharedVector = std::vector<std::shared_ptr<T>>;

which can be used like so: SharedVector<A> my_a_list.

Dealing with Errors from the Compiler and Its Community

I must emphasize that C++ compilers, however stubborn and irritating they can be, do not embody active malice. They merely wish to share their understanding of the problem, in a verbose and pedantic way, like a person with no social skills. So the thing to do is concentrate on the first error and ignore the spew.

For example, a naive but hopeful person might think that iostreams can dump out vectors directly. Here are the first lines of the error (Clang 3.4):

 for1.cpp:11:9: error: invalid operands to binary expression
 ('ostream' (aka 'basic_ostream<char>') and 'vector<int>')
 cout << vi << endl;
 ~~~~ ^  ~~

That's not too bad! Notice that the compiler uses the terminology of compilers, not humans. You have to know what a 'binary expression' is and that it has 'operands'. These terms are not part of C++, but part of how people talk about C++.

But the compiler will now go on for another ninety lines, telling you how it could not match vector<int> against any of the many overloads of ostream& << TYPE. These can be safely ignored, once you know that you can't dump out a vector directly.
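
Once you do know that, the fix is simply to loop over the vector yourself (or write your own overload of << if you find yourself doing this often):

 for (auto i : vi)
     cout << i << ' ';
 cout << endl;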

GCC 4.8 makes a complete hash of reporting the same error, however.

 for1.cpp:11:12: error: cannot bind 'std::ostream {aka std::basic_ostream<char>}' lvalue to 'std::basic_ostream<char>&&'
     cout << vi << endl;

Visual C++ 2010 is relatively clear, but thereafter degenerates into irrelevant confusion:

 for1.cpp(11) : error C2679: binary '<<' : no operator found which takes a right-
 hand operand of type 'std::vector<_Ty>' (or there is no acceptable conversion)

And so an important point: there is more than one compiler in the world, and some of them are more sensible than others.

We move on to the attitude of the community to errors. For instance, unlike many introductory texts, I don't bother to prefix standard types with std::, because the result is easier to read and type. You ought in any case to know the commonly-used contents of the std namespace; if there is an ambiguity, the compiler will tell you about it. The one thing you must not do is say using namespace std in a header file, because then you will truly be Polluting the Global Namespace for everyone else. Anyway, this is one of those little things that can generate a lot of hot air on the Internet - it is a red flag to a certain kind of bull who believes that stylistic quirks represent major heresies. I'm reminded of Henry Higgins in My Fair Lady when he sings "An Englishman's way of speaking/Absolutely classifies him/The moment he talks/He makes some other Englishmen despise him/One common language I'm afraid we'll never get".

As with compiler messages beyond the statement of the original error, it is a good idea to ignore random online opinion. Much better to read the classics - Dr Stroustrup himself is refreshingly pragmatic. The term 'Modern C++' has been redefined several times in the last twenty years, so remember that much earnest wisdom and advice has expired. It may be an unfashionable position, but reading blog posts and looking at Stack Overflow answers is not the way to learn a programming language like C++ properly. Go read a good book, basically. You're no longer forced to use paper.

Use a Good IDE

I suspect this will upset some bulls, because they themselves are super productive in the editors originally created by the Unix gods. Well, great for them, but I can't help feeling that the anecdotal experiences of a few very talented and focussed people do not represent universal good advice, and certainly aren't data. Personally, I find that Eclipse CDT makes me more productive, and in my Windows days I found Visual Studio equally effective. But I will qualify this: as a professional you should not be dependent on an environment outside of which you cannot code - that's when you need a good general-purpose code editor as well. For instance, when a program is first taking shape, a plain editor is less distracting; I don't write these articles in Word because it is far too busy, and fails to understand the creative writing process: first write, then edit, then format.

There is a learning curve with big IDEs, and you will have to read the manual. For instance, it is straightforward to bring a makefile-based project into Eclipse, but then you have to tell it what your include paths and defines are - it cannot deduce these from your makefile (a task which many people would find difficult anyway). Once it knows where to find the includes, the errors start disappearing, and the program becomes a live document, in the sense that any symbol becomes a link to its definition with ctrl-click, and so forth. If you type nonsense, the environment will start emitting yellow ink and finally red. This takes some getting used to, since it will often complain before you've finished an edit; again, it's about filtering out irrelevant criticism. The benefits go beyond 'hyper-linking' to code completion, where ctrl-enter will complete functions and methods for you, and safe global renaming, which is useful for those of us who can never get names exactly right the first time. Having constant feedback about errors means that your builds will often be correct the first time.

So: the right tool for the job at hand, and knowing when a tool is appropriate. Investing some learning time in your tools always pays off. For instance, learn the keyboard shortcuts of your editor/IDE and you will not waste time pointing your mouse around like a tourist trying to buy things in a foreign market. Learn your debugger well, because exploring the live state of a program is the best way to track down most bugs. C++ can be frustrating to debug, but GDB can be taught to display standard library objects in a clear way.

There seems to be some contradiction in what I'm saying: first I say that rich environments are too busy and distracting, and then say that they are enormously helpful. The trick is again to know what tool to use for what phase of a project.

For instance, I encourage you to write little test programs to get a feeling for language features - for this an IDE is irritating because setting up a new project is tedious. Better then to have a code editor which can run the compiler and indicate the errors. But for larger programs, you need to navigate the code base efficiently and perform incremental changes.

Good Design and Modularity

Of course, C++ does not have modules. They're likely to arrive in the coming years for the 2017 standard, but currently all we have are 'compilation units', aka 'files'. A class consists of a header file and an implementation file? Not necessarily. I think people are encouraged to keep classes separate to avoid source files getting too large. If a group of classes hunt in a pack then there's no reason why they can't live in the same file, and if nobody outside a file refers to a class, then it doesn't need to be defined in a header. Finally, if a class is just a collection of static functions, then that class is acting like a namespace and should be expressed as such - this is not Java.
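
For instance, a hypothetical Utils class (the names here are invented purely for illustration):

 // a class acting as a namespace...
 class Utils {
 public:
     static string trim(const string& s);
     static string lower(const string& s);
 };

 // ...reads better as an actual namespace
 namespace utils {
     string trim(const string& s);
     string lower(const string& s);
 }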

Code that needs to see a class obviously needs to see its definition. A weakness of the current C++ compilation model is that such client code also needs to see the non-public parts of the class definition, mostly because to create an object, whether directly on the stack or using new, it needs to know how big that class is.

I am not a paranoid person, so I don't worry about 'others' seeing the 'insides' of my class definition; but it does irritate me that client code depends on private members, and on all the classes that those members refer to. So any change in the implementation of a class requires a rebuild of all code that refers to it, and all that code needs to pull in the private dependencies. One way around this irritation is the so-called 'Pointer to Implementation' pattern (PIMPL).

 // pimpl.h
 class Test {
 private:
    struct Data;
    Data *self;

 public:
    Test(double x, double y);
    ~Test();
    double add();
 };

Notice the private struct that isn't fully defined! So the first thing the implementation does is define that struct. Thereafter, the methods all refer to the hidden implementation pointer.

 // pimpl.cpp
 #include "pimpl.h"

 struct Test::Data {
    double x;
    double y;
 };

 Test::Test(double x, double y) {
    self = new Data();
    self->x = x;
    self->y = y;
 }

 Test::~Test() {
    delete self;
 }

 double Test::add() {
    return self->x + self->y;
 }

The cost is some extra allocation and indirection when accessing object state. But now imagine that you wish to expose a simple interface to a complicated object: PIMPL keeps the public interface simple, and pure implementation changes no longer require a rebuild of client code.

Another method that uses indirection is to define an interface:

 // encoder.h
 #include <stddef.h>    // for size_t
 #include <inttypes.h>  // for uint8_t

 class Encoder {
 public:
     virtual ~Encoder() {}  // virtual destructor, so deleting through Encoder* is safe
     virtual int encode(uint8_t *out, const uint8_t *in, size_t insize) = 0;
     virtual int decode(uint8_t *out, const uint8_t *in, size_t insize) = 0;
 };

The derived subclasses do all the work:

 // simple-encoder.cpp
 #include "encoder.h"

 class SimpleEncoder: public Encoder {
     int encode(uint8_t *out, const uint8_t *in, size_t insize) {
         ...
     }

     int decode(uint8_t *out, const uint8_t *in, size_t insize) {
         ...
     }
 };

 Encoder *SimpleEncoder_new() {
     return new SimpleEncoder();
 }

Then the client code can get the proper subclass with a XXX_new function. This is not the most elegant way of creating subclasses, but it works and it's easy to understand; you add a new Encoder and add a reference to the builder in the file that's responsible for making new encoders. There is a little cost involved in the virtual method dispatch, but usually this will be dwarfed by the actual payload. (Again, premature optimization...).
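
Client code might then look something like this; the process function is invented for illustration, but the pattern is just 'ask the builder, use the interface':

 // client.cpp
 #include "encoder.h"

 Encoder *SimpleEncoder_new();   // declared in some shared header

 void process(const uint8_t *in, size_t insize, uint8_t *out) {
     Encoder *enc = SimpleEncoder_new();
     enc->encode(out, in, insize);
     delete enc;   // fine, because Encoder has a virtual destructor
 }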

The beauty of this decoupling is that the actual details of a particular encoder are kept within a single file, if it fits.

Good design means dividing larger systems into subsystems that are each responsible for a single aspect of the problem. Networking, databases, GUI and so forth are separate concerns, and you make your later life easier by separating them out. And this makes for better builds on average, since modifying a subsystem's implementation requires recompiling only the affected files.
