I have read the point about having the compiler handle dependency resolution, etc., so that there are fewer tools.
I would like to point out that Java started that way: javac will go and find the packages a class depends on and compile them if necessary (there is a small sketch of the easy case after the list below). However, in practice this proved to be insufficient, and now I almost always see Ant, Maven, or other build systems used instead. I am not sure of all the reasoning behind this, but I can think of a few things:
0) It never really worked for more than very simple things.
1) If there was any code generation going on (think yacc), javac could not handle it.
2) Going outside Java for some artifacts did not work. For example, when building a WAR file you need all the crufty XML, images, static HTML, and so on, as well as the .jar and/or .class files.
3) NIH syndrome. Why else would we have make, cmake, scons, etc., etc.?
4) Changes and additions to the build system required a release of the whole compiler.
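For what it's worth, here is a minimal sketch of the easy case that javac does handle on its own. The package and class names are made up purely for illustration:

    // src/app/Main.java
    package app;

    public class Main {
        public static void main(String[] args) {
            // References Helper, with no build rule written anywhere.
            System.out.println(new Helper().message());
        }
    }

    // src/app/Helper.java
    package app;

    class Helper {
        String message() {
            return "compiled implicitly by javac";
        }
    }

Compiling only Main.java with

    javac -sourcepath src -d out src/app/Main.java

makes javac notice the reference to app.Helper, find Helper.java on the sourcepath, and compile it as well, producing both out/app/Main.class and out/app/Helper.class. That covers plain trees of .java files, but it falls over as soon as generated sources, non-Java artifacts, or packaging come into play (items 1 and 2 above).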
I think the go command does most of the work of compiling Go code, so there is some backlash against separate tools.
I am not saying that combining the build system and the compiler into one tool is wrong, but there are some tradeoffs that should be thought about.
One example I really like is git. The core git code (the "plumbing") is a small set of tightly linked programs, and on top of that sits the "porcelain" that provides varied, higher-level interfaces (much of which started out as shell scripts and is now mostly C, I think). I believe Go is much this way too.
The caveat to all this is: how does a separate executable know how to link the various modules together into a running program? C takes several steps to get to an executable (compile each source file to an object file, then link the object files together), and conceptually the linker step is not part of the compiler. In C2 it is. Modules make this a little trickier. Certainly the pain of keeping Makefiles up to date is something to be avoided!
How can C2 keep that simplicity but avoid the problems that Java, for instance, ran into?
Best,
Kyle