Z2 Compiler PT7 in feature freeze

There has been a lack of new information on the blog in 2016. Sorry, I didn’t have time to write posts since I was busy with PT7. For PT7, we wanted to simplify a lot of the complexity that can be found deep within the heart of the implementation of some of the standard classes. This turned out to be a far lengthier task than expected. But now is the right time to iron out the last few remaining kinks, even if this means breaking compatibility.

Module system fine-tuning

First, we changed the module system. The using keyword was used to make a source file refer to another source file. After the using clause, a sequence formatted like foo.bar.(...).baz was interpreted as a reference to an existing module/source file on disk and imported as such. Like a lot of things in Z2, this is an evolution of a system from C++: in this case, a more advanced form of #include, coupled with a powerful module system. And it worked very well. But we did find a small problem that in practice may have been a moot point, yet was still worth fixing. Top-level source files referred to lower-level modules and so on, until the bottom-level ones were reached. Across this hierarchy we noticed a small but practically unavoidable increase in the number of imported symbols. The module system gave you the power to choose what to make available and what to keep private, but a small amount of symbol pollution still leaked from layer to layer.

So we changed the sequence after the using clause to refer to a fully qualified entity and decoupled it from its location on disk. I shall explain this in detail one day on the blog, but it is a variant of the C# system. In short, a source file can refer to one or more fully qualified classes and other entities, and it is the job of the compiler to map them to their locations on disk. You can still organize source files in any way you see fit, and there is no compounding public symbol pollution. And since we made sure that the standard library’s fully qualified class names were already in the right places on disk, this change broke zero compatibility. Compilation has gotten slightly slower because of this change, but we’ll fix that in the next versions.

Greatly simplified parameter type system

Z2 is all about combining good, best-practice-inspired design with a powerful compiler to reduce complexity and fussiness. This is why we coined the term “dependency/declaration order baby-sitting”, declared it a “bad thing” and went ahead and eliminated it. Another thing we wanted to eliminate was ambiguity, especially when calling functions. Like most things in life and programming, ambiguity is not binary but a spectrum. Low to medium ambiguity is often resolved in programming languages by conventions and rules. In Z2 we took this to its limit and created a language that can resolve any level of ambiguity, where resolution is possible at all. As a consequence, the rules for the types of formal parameters became extremely complicated. We set out to create a language more powerful than C++ but with better and saner rules, and for the most part we think we came very close to achieving this. But for formal parameters, and eliminating all ambiguity through rules, we failed: we created a rule-set that is easy to learn but almost impossible to master. The exact opposite of our goal.

We tried one solution in 2015 and another in January 2016. Things got better, but remained too complicated. So we reevaluated the problem and the value proposition of having all ambiguities resolved, and came to the conclusion that… it is not worth it! We rewrote the system: common, expected and useful ambiguities are now resolved by a set of rules that is easy to learn and master, and for the rest, we no longer attempt resolution. We give an ambiguity-related compilation error! This brings Z2 in line with other languages when it comes to the effort of learning it, and overall we feel this is a far better place to be in terms of complexity!

Low level array clean up

Z2 has a wealth of high-level containers you should use, like Vector. It also has a couple of very low-level static vectors, RBuffer and SBuffer. Unless you are writing bindings for an existing C library, using some existing OS API or declaring static look-up tables, there is no good reason to use these low-level buffers. Still, they are in use in the heart of the standard library when calling OS features, and there was a small problem with them.

RBuffer (short for RawBuffer) is the new name starting with PT7; before, it was called Raw. It is a raw, static, fixed-size low-level array, like standard arrays in C. Unlike in C, an RBuffer has an immutable Length and Capacity, but they are not stored in RAM: when needed, they are supplied from compile-time information. So an RBuffer of Byte with a Capacity of 4 occupies exactly 4 bytes of RAM, not 4 bytes plus bookkeeping. SBuffer (short for SizeBuffer) was previously called Fixed and is a low-level, static, fixed-capacity array that has a mutable Length stored in RAM and an immutable Capacity that is not stored in RAM. So an SBuffer of Byte with a Capacity of 4 occupies RAM like this: a PtrSize-sized chunk to store the Length, plus exactly 4 bytes. The Capacity is supplied from compile-time information, without taking up memory. So the difference between SBuffer and RBuffer is that SBuffer has an extra field to store the Length.

So far so good. The small library design problem came from the way we were using these two types. We noticed that in most cases an RBuffer was passed as an input, but when using an RBuffer as an output, we always needed an extra reference parameter to store the length. So we refactored the standard library, and now in 99% of cases a const RBuffer is used as an input and a ref SBuffer is used as an output parameter. Additionally, the low-level parts of the library no longer use pointers when those pointers are meant to be C arrays; they use RBuffer instead. This makes for a cleaner standard library.

Function overloading code refactored

All these welcome simplifications worked together and allowed us to refactor and greatly simplify the function overloading code. Simple rules give simple code: the old, super complicated implementation is gone, replaced with a far shorter one that is faster than ever!

Conclusion and future versions

This chunk of simplifications turned out very well. The language is in a far better place right now. Easier to learn, easier to master and cleaner API overall.

On the downside, implementing all this took longer than expected. Starting with PT8, we want to do shorter release cycles. This means that the target final PT version moves up from about PT15 to about PT20. To compensate, we won’t wait until we have a very stable and feature-complete compiler before making it available for testing; instead we will release a super early pre-alpha version of the compiler and ZIDE for adventurous people.

PT7 is in feature freeze and we need a couple more weeks to fix some bugs, but starting with PT8 the standard library code will begin going up on GitHub. An account has been created, and I’ll write a couple of explanatory posts about the standard library; then, one by one, classes will be tested, documented and uploaded.

