Z2 PT 9.4.1 Available for Download

Z2 PT 9.4.1 is available for download! Finally! And the very-much-work-in-progress site received a few updates, including this now-working download page.

Older versions are still available on GitHub, but keep in mind that the goal is to reach a stable, production-ready compiler, so older versions do not receive any updates or support and can be deleted without notice.

Version 9.4.1 was supposed to come out two weeks after 9.4, but instead it took 6 months. The original 9.4.1 started according to plan but quickly went off the rails and became a completely failed experiment. I will explain the issues some other time. This version was rolled back to 9.4 with 95% of the changes removed, and what you can try now is a brand new version. Since so much time was wasted on the failed experiment, for 9.4.1 we needed a fairly substantial update that could be finished in a relatively short time, so we went with a bunch of safe features:

Full Linux support

Previous Linux versions were functional, but the standard library was not complete. This is the first version where the standard library has been fully ported over to Linux, and there is now official 64-bit support too. The packages are still in a “portable binary” format, like the Windows versions, so there is still much left to do: packages that can install themselves to standard Linux locations are needed.

New Windows SDKs supported

We still did not manage to fix support for older Visual Studio versions, but for this release we added support for Visual Studio 2017 and Microsoft Build Tools 2017. Microsoft has the bad habit of changing conventions from one version to another, so the old auto-detection methods do not work for these versions. A new auto-detection method was added and the old methods were touched up to make them more robust. The downside of the new method is that the registry can’t be used to detect VS2017, at least to our knowledge, so a disk-based search had to be implemented. This is relatively slow and can take a few seconds to find VS. But the results are cached as always, so you should only need to run auto-detection again after installing new versions of VS.

Bidimensional vectors

When working on such a large project, one often forgets about a beginner friendly list of features. The question when implementing a new feature is often “what part of the compiler needs the most amount of work” rather than “what does a beginner need from the compiler”. So you will find a lot of advanced features already implemented that are very rarely used. And for every such feature, there is something that is directly needed that we did not have time to get to yet.

One such feature was bidimensional vectors. Z2 already has good vector support and the plans we have for it will materialize in extremely rich vector support, but in previous versions this all worked only for one-dimensional vectors. We added support for 2D vector literals, but for starters only for jagged instances of Vector. Multiple dimensions together with flat arrays will come soon.

As a testament to the robustness of our Vector, it needed no updates to support bidimensional arrays; only the compiler needed updates. There are also almost a dozen new tests for these vectors.

Packages for 9.4.1 became available last Friday, and during the following days the site and documentation received updates. Only today did I have time to post about the new release. In the meantime, 9.4.2 has already received a lot of updates. The next version will bring multi-dimensional jagged arrays, and ZIDE will receive a lot of love. Among those features, it will finally have perfect compiler-generated auto-complete.

Version 9.4.2 is scheduled for release in approximately two weeks.


Z2 PT 9.4.0 Available for Download

Z2 PT 9.4.0 has been released in 32- and 64-bit builds for Windows and a 32-bit Linux version, more specifically a Linux Mint 18 build. More Linux versions will be tested and supported in the future, but this being the very first Linux release, one version is more than enough as a testing workload.

Version 9.4 was delayed two weeks to fix incompatibilities with Visual Studio 2010 and 2012, but this is a fairly slow task. So in the meantime we added a new class, MemoryStream, and here is where the problems started: an apparently simple line revealed serious problems with the current state of the compiler.

In order to fix this issue, we fixed 14 serious bugs, added 3 new features and did a revision of the language’s object model. These were all great steps in making the compiler significantly more stable!

But it was also time consuming and was barely finished last Wednesday, so there wasn’t any time to fix older Visual Studios. The original delay happened because an incompatibility with older versions shouldn’t be the trigger point for a release, but updating the object model is reason enough, so 9.4.0 contains the above-mentioned fixes, but not the VS2010/2012 fix. Those fixes will come in future versions.

In addition, ZIDE got two new features: drag-and-drop file moving and the ability to kill launched processes.

As always, a minor release will be followed by updates to it, with 9.4.1 around the corner. It already has one more important fix and 3 minor ones.

For this release, beyond the normal stuff, we are experimenting with opt-in newline statement terminators for the language and with a precise auto-complete pop-up for ZIDE.

Status update for 9.4

Today was the scheduled day for PT 9.4 release. Beyond the usual enhancements and fixes, this was supposed to be the first version to ship with a Linux version and, as a bonus, also have a 64 bit client for Windows.

Things are working flawlessly with the C++ backend if you have a fairly modern compiler, be it VS2015/VS2017 on Windows or GCC on Windows and Linux. But while updating the object model used in this version, we arrived at some solutions that are not supported on older versions of C++.

The easy solution would be to disable those versions for this release only, but this is a minor release. Such changes should probably be done only in major releases, so the alternative is to delay 9.4.

We expect PT 9.4 to be out in two weeks.

The plan was to release today, explain the new features and also the long delay: 9.3 was released around 4 months ago. But since there is no release today, I’ll explain the delay now and keep the actual release announcement on topic.

Since we have a well-documented history of not delivering the Linux version on time, and this release is late, I’d like to mention that this time the two events are unrelated. Since 9.3 we were busy with another project, so for two months there was no work done on Z2.

And then came December. Around this part of the world, people take long holidays if they can, and I’ve been on holiday for 3 weeks. So progress on the compiler was minimal in December.

But January was full-throttle Z2 work. The previous efforts on Linux porting have paid off, and this time we managed to get the Linux version working fully in around a week of extra work. It actually worked fairly well after a day in “hello world” scenarios, but there are hundreds of tests to pass. This includes some custom porting and features to guarantee feature parity across operating systems.

The rest of the work was on cleaning up and enhancing the compiler, its object model, trait system and introducing the bind feature. These features will be explained in the release announcement.

Some other pieces of progress were updates to the docs and the doc-generating system: both the MD and HTML docs now look and behave a lot better.

Some new parts of the compiler were open-sourced, as was ZIDE in its entirety (but with the non-functional debugging support stripped out; no use submitting dead code to GitHub).

Hopefully this issue with the older compilers can be fixed on time and I’ll update again when the version is up!


Adventures in Linux porting #1

With 9.3.1 freshly released, we made a promise that 9.3.2 would finally bring Linux support!

Now this isn’t the first time we promised Linux support, but each time we tried, it turned out to be a very time-consuming task: fixing hundreds of small bugs, most of them related to paths, while we would rather work on more pressing issues, like finishing the language and library for one platform first. But we did learn a lot from the path-related issues, and we will introduce a platform-agnostic path class in the library, one that won’t allow you to use the wrong path format on the wrong operating system.

Anyway, each time we attempted Linux, we had to postpone it due to deadlines for the next release, but each time progress was made, so this final attempt should be relatively quick and easy and give great results. In theory! Let’s see how it goes!

While we need to finish this ASAP, it is still not possible to allocate too much time during the week for it because of other tasks. This means weekend work! My task for this weekend was to fix up as much as possible and prepare a status update for the project under Linux. On Friday night, I dusted off the old virtual machine and let it install all its updates.

On Saturday at 9 AM, I was up and ready to go. The first step was grabbing the compiler sources and seeing if they compile. There was just one compilation error, because the Linux file system is case-sensitive and the Windows one is not. We have the convention of treating Windows as case-sensitive too in order to avoid such problems, but a single file naming error snuck through. After fixing this, all the project-specific compilation errors were gone, but one of the third-party libraries we use had an older, incompatible version installed on our Linux machine. This library is available in a third-party repository, not the main Ubuntu ones, but I’ve always had problems installing it from there: the installation works, but it causes crashes when invoking the library. So I like to build it by hand. I downloaded the latest sources and started the compilation.

18 minutes later it was done. Great! Mental note: now that Linux is a full-time platform, it might be time to stop using VMs and do a native install, either dual boot or a new PC. After this, the compiler was compiled and ready for testing.

The first test I did was running the compiler from the command line without any parameters. And the first problem manifested: the compiler supports a lot of build methods, so you need to choose one and pass it as a command line parameter. The compiler also lists all the available configurations so you can choose one, but it was complaining that you did not provide one and exited before showing you the list. The correct order is to show the list first, then complain that you did not choose from it. We use the command line compiler all the time, but not manually, so we never noticed that it is not that usable when given no parameters.

I fixed this issue and also fine-tuned GCC detection. On Saturday I was convinced the detection was made better; on Sunday I’m thinking the change was neutral. Anyway, the compiler was up and running, listing build methods and taking parameters correctly, so it was time to compile “Hello world”. The compiler exited after a very short period of time with a short message telling me that compilation finished correctly.

Naturally, I did not believe it! The short amount of time that had passed was a clear give-away, especially on a VM, plus it working on the first attempt would have been a minor miracle. Before checking the folder where the native binary should be placed, I checked the temporary build folder and found something nasty:

Blasted path bugs! This screenshot also reveals that the compiler was trying to pull in the Windows libraries instead of the POSIX ones, and the builder still thought it was on Windows. I fixed these problems and everything was working as expected: all auxiliary bugs looked fixed, and compilation was failing as expected, because some standard library functions were implemented with the Windows API and needed to be ported over to Linux.

The compiler looked successfully ported, and I arrived at the phase where the library needs to be ported. Porting it requires examining compilation error messages and jumping around a lot from one source file to another, the perfect job for our resident Z2 IDE, ZIDE. I compiled ZIDE and it ran fine, though I noticed that it too was trying to pull in Windows sources:

I fixed this issue again; in the future, some mechanism needs to be added to fix this problem for good. I also went through the code and removed a lot of Windows bias. For starters, I commented out the offending Windows API calls and tried to compile. ZIDE hung and I had to kill it.

After a lot of research, I found the culprit: under Windows, process return codes are 32 bit, with negative values meaning failure and positive ones success. Under Linux, return codes are 8 bit, with 0 meaning success and positive values meaning failure. So exactly the opposite. Every time the compiler exited with a 32-bit -1, it became a positive number, and ZIDE was not expecting that. I wrote a quick platform-independent return code mapper and made ZIDE much more robust, so it no longer hung, but reported errors gracefully when something unexpected was received.

So ZIDE is considered ported too. The next step was to look at what the compiler was feeding to the backend. As expected, it was not good, uncompilable even, but there were no major issues. Some required files are put in the right places under Windows for backend support, but under Linux such an interfacing profile was not created yet. I created it, and it will be included in all future builds from now on.

After this was fixed, it compiled correctly. Unfortunately, “hello world” was not doing anything. If you remember, I wrote above that I commented out the offending Windows code, and this included console support. This was fixed the last time we attempted Linux, but it broke since. The Z2 compiler has a great dependency analyzer, so all the Windows-specific parts of the library can exist in peace as long as you don’t call them. “Hello world” was calling just one platform-specific bit, the one that outputs to the console. And beyond that, the runtime environment we created needs to report errors, and that uses the console too. There was no code to pull the console in through the dependency analyzer, so I fixed that.

Z2 inherited the “extern” import mechanism from C. It turns out that system is not good enough for our needs and will be removed next version. Instead, I came up on the spot with a new “bind” mechanism, much more powerful, and hacked support for it into the compiler in 30 minutes. Now, this is not how we normally do things. Everything is properly designed, sometimes even to the point where it feels like the design work never stops and there is never enough time to code. So starting Monday, this bind feature will be properly discussed and designed. There are still a lot of unknowns, but I think it is better than “extern”.

Anyway, with the “ieee.posix” package made to use the new “bind” feature and a few more fixes, it was finally time to see it all put together:

Too good to be true! So I checked the binary file. It was named “.exe”, something I quickly fixed. And it was almost 1 MiB large. It turns out the builds were made with debug information and static linking. Good to know that these features work. But without debug information, and linking against the standard Linux .so libraries that you inevitably get even in a small hello-world-like program (nothing Z specific here), the executable was around 17 KiB and working properly.

These results are so good (I did further testing to make sure that all major bugs were squashed) that I don’t want to ruin my mood by running the test suite. Next time… I will probably be saying “1 out of 206 tests has passed successfully”.

So there is a lot of work still ahead. “Hello world” only uses a single Linux-specific function by coincidence, so porting it over was easy. The Z2 standard library tries to be as native as possible, meaning most functions are implemented in pure Z with no platform-specific bindings, but for some things, like console output, file system work, getting the time, etc., this can’t be avoided. All those functions need to be ported over to Linux. Not just ported, but ported using the new “bind” feature. Which first needs to be properly designed.

This port is only a 32-bit one. A 64-bit Linux must be installed and tested thoroughly. And then there is Clang support to add too, not just GCC.

But this was a good start and a weekend well spent!

And who knows? Maybe someday there will be a Mac/iOS port? No promises!

Z2 PT 9.3.1 Available for Download

Z2 Compiler version PT 9.3.1 is now available for download under Windows! Here are some links:
PT 9.3.1
Prepackaged GCC TDM for Windows 32/64 bit
Previous versions
Main GitHub page for standard library

This version brings minor improvements and fixes, but also fixes a massive backend compilation performance/resulting binary size issue. The fix is not that pretty, but it works. Unfortunately, it is only supported on modern backends, so for older ones an even uglier fix must be found. This issue will continue to improve as new versions are released.

But the main thing this version brings is better documentation. Last version we introduced the basic documentation architecture, but that was the first time we attempted anything like this, so it was far from perfect. A new 2.0 documentation format has been introduced, much more readable and powerful. Additionally, more classes have been documented. This is an ongoing process and we’ll make an announcement when all of the API has been documented.

When some of the most common container algorithms were added to the standard library, the focus was on the design of the API and correctness, not on performance. So a lot of the common operations, like insertion and deletion, were created to set the tone of the library, but they are not particularly fast. This version we experimented with a standardized set of optimizations that will be rolled into the object model to help make the API as fast as humanly possible. The experiments show that performance has been multiplied by a double-digit factor in the worst-case scenarios, meaning at least 10 times faster. This applies to any class that can be inserted into containers. We also experimented with a particularly aggressive form of optimization that can only be applied to more special classes, where the performance gain can be more than doubled.

We left the aggressive optimization out of 9.3.1 and only included the normal one in some very small parts as a test. If no issues are found, 9.3.2 will have both optimizations included everywhere.

So what is next for 9.3.2? The Linux version, of course! Except for the optimizations described above and smaller fixes we can include, 9.3.2 is in feature freeze and all the work will go into finally finishing Linux support!

Here is the change-log for 9.3.1:

Z2 PT 9.3.1
– literal constants keep track of base 16 in C++ backend output
– better C++ backend else if handling
– const, mem and return nodes cleanup
– refactored literal CArrays

– new AsciiParser class

– new doc format
– option to regenerate all docs
– empty doc entries in the DB are marked

– massive backend compilation time fix for large CArray literals
– nameless constructors fixed for core types
– calling nested CArray method bug fixed


In the heat of the release announcement, I completely forgot to mention the new automated build scripts. Creating a build is not that easy or fast: every time we need to go through roughly 14 steps, and it can take up to 2 hours. With every build we get faster and faster at it, but you still need to do a fresh compile of 3 binaries, package sources, licenses and other files, run the basic test suite and the full test suite, and do some visual QA, since we have a GUI too (our IDE) and GUIs are hard to test without somebody using their eyes.

So we created an automated script that removes some of those steps and cuts down time by 20-30 minutes. It is not that smart yet, but eventually it will do the whole process from checking out the source code to running tests, leaving just the visual QA as a manual step.

PT 9.3.1 is the first release created by the script, so if it is worse than usual or missing something, blame the scripts! All versions from now on will use the script and it will be improved each version.

Z2 PT 9.3 Available for Download

Z2 PT 9.3 is now available for download:
GitHub Win32 Prebuilt binary

There has been some restructuring and cleanup in the GitHub repository, including project renames and the use of separate isolated branches for releases, but you can find the standard library sources here:
GitHub Z2 Stdlib

It has been quite some time since the last release was announced. And there were minor releases that were not announced at all. We need to do something about lengthy pauses between releases, especially if bugs are fixed in that period. You shouldn’t have to download an old release with bugs that might have been fixed weeks or even months ago.

In order to fix this, we are restructuring releases and what they mean. Each release will now be a platform with a plan. PT 9.3 is a foundation build that will receive multiple updates. Here is what to expect from 9.3 during its lifetime:

Biweekly minor releases

Every two weeks we’ll release a new minor version. If some major bugs are solved and the solutions are stable, there might even be a weekly bugfix release, but as a general rule we’ll stick to bi-weekly. This means that two weeks from now, 9.3.1 will be released.

Language nears completion

9.3 will implement all major language features, except for lambdas and advanced meta-programming features. Delegates and reflection will be added before 9.4, to name the two largest missing features.


Documentation

If you look inside the GitHub repository, you might notice some new “*.api.md” files. These files are documentation. They are readable, but not particularly pretty, Markdown files which you can read directly on GitHub to get some idea of what the standard library API does.

At the same time, these files are read by ZIDE, and using the actual source code, the information inside these *.md files is enhanced and presented as small documentation flash-cards when browsing documented source code. This is only the first incarnation of the documentation system. The *.md files will become more readable, ZIDE will allow you to browse documentation in a separate web-browser-like tab, and the enhanced union of the *.md descriptions and the formatting extracted from the source code will be exportable as standalone HTML and PDF, in order to enable browsing the documentation outside of ZIDE.

This is only the first step in documentation, so only class APIs are documented, and not all of them yet. Each minor release will add more and more documentation, until 100% of the API is documented. Non-API documentation will also be created.

Integrated debugging

This was supposed to be the major ZIDE improvement in 9.3. If you look at the file sizes, zide.exe has grown a bit compared to previous versions. This is the new debugger code. Unfortunately, we were not able to get this working in time. It kind of works, but it is very crash-prone. So this feature is disabled for now.

Future minor versions will re-enable it as soon as it stops crashing. Currently, only an experimental PDB-based debugger has been implemented. A second, GDB-based one is planned.

OSSing the compiler

When Z2 was originally released, only the standard library sources were made available. The compiler and tools were distributed only as pre-built binaries, and only on Windows. We always planned to release all the sources, but we wanted the compiler to be self-hosting and buildable without any hacks first. This is still the plan.

But that will still take some time, probably more than half a year. Until then, we have started releasing C++ sources for the compiler. This is not a simple release where we take the actual compiler sources and dump them on GitHub. Instead, piece by piece, the code base is cleaned up, improved and tested further. So at this time, only some parts of the compiler have been released on GitHub. As the minor 9.3 versions roll out, more and more parts of the source code will be released, up to the point where a full package will be available for everybody to compile their own full-featured Z2 compiler.

Releasing the compiler sources piece by piece is a welcome opportunity to refactor it all. The code base is starting to show its age. With the benefit of hindsight, most of the algorithms and data structures that the compiler currently uses would be replaced with better ones. The compiler is due for a major rewrite with techniques that work better. We can’t put the project on hold and rewrite it optimally, but we can do small incremental changes, refactoring one small system at a time. As the pieces are refactored, they will make their way to GitHub.


Linux support

This was promised long ago, but it is still not available. 9.3 is compatible with Linux and even has some support for it, like detecting GCC, but at the end of the day, the compiler does not work under Linux. With the OSSing effort, it would be silly not to go ahead and fix Linux once and for all. So 9.3 will start providing full Linux support for the compiler.

Take note that this means only the compiler. Once the compiler is ported, you can compile arbitrary Z2 code with it on Linux. This does not mean that the Z2 standard library will be ported too in the same effort; that will be a separate one. Most of the standard library is OS-agnostic, but some bits need to be ported. Things like identifying the CPU, IO, the clock and environment queries all need to be ported before a full Linux SDK will be functional.

CrashCourse – 007 – Vector literal introduction

Today I shall introduce the concept of vector classes. There are several vector classes in the standard library, but I shall only focus on the most common and simple of them, Vector. Vector is a template class very similar to std::vector from C++, so it can be used like any normal template class, through the API defined by its members. You can see the source code for this class in sys.core\container\Vector.z2.

But using functions to manipulate a Vector is a bit boring, and also easy to figure out by looking at the class interface. Instead, I’ll focus on phase two features and talk about easier and more expressive ways to work with vectors through Vector literals. So, let us first see what a vector with 3 Int items, 1, 2, 3, in this order, looks like:

[1, 2, 3]

Using this syntax, there is no mention of Vector or Int. The [] syntax denotes (in this case) a Vector instance, and the first element always dictates the type of all the elements. The first element has a type of Int, giving the whole literal a type of Vector<Int>. All further elements beyond the first must be compatible with the first, meaning that one could assign any element to the first without any losses. So the following example works:

[1, Byte{2}, 7u, 2'147'483'647u, 1l]

Byte{2} is an 8-bit unsigned value, but it fits into a 32-bit signed value, so it is not important that the first element is an Int and the second a Byte. The same goes for 7u, a DWord with value 7, and 2'147'483'647u, another unsigned value that is the maximum DWord value that still fits inside Int. 1l is a signed 64-bit value, but it too fits into a 32-bit signed value. So the final type of the literal is Vector<Int>.

On the other hand, the following example does not compile:

[1, 1.0f, 2'147'483'648u]

The first element gives us a Vector<Int>, but the second element is 1.0f, a Float. Due to how floating point numbers work, a floating point value may not be able to perfectly represent its integer counterpart, so Floats and Doubles are not compatible with Int and you need to convert them manually. So the second element causes the literal to not compile. So does the third: 2'147'483'648u is too big as an unsigned value to fit into Int, being one greater than the maximum value that would fit, 2'147'483'647u.

This short inferred syntax can be handy, especially when writing code that is very obvious in what it does and further syntax sugar is not wanted. But in Z2, inferred classes can still be manually specified, even though most of the time this would be redundant. So:

val a = Foo{};

is 100% identical to:

val a: Foo = Foo{};

and since there is no such thing as an uninitialized instance in Z2, the two snippets above are 100% identical to:

val a: Foo;

One can do the same thing with literal vectors. As I mentioned before, the type of the literal I used as an example is Vector<Int>, so:

val p = [1, 2, 3];

is 100% identical to:

val p: Vector<Int> = [1, 2, 3];

Now that we have a variable called p, we can do some stuff with it, like printing it:

val p = [1, 2, 3];
for (val i = 0p; i < p.Length; i++)
	System.Out << p[i] << ' ';
System.Out << "\n";

1 2 3 

This was a normal traversal using a for loop, a PtrSize index local variable called i, and the Length of the vector. An easier way to traverse the vector is using the foreach loop:

val p = [1, 2, 3];
foreach (v in p)
	System.Out << v << ' ';
System.Out << "\n";

The class can also print itself, and by default it will print the number of elements followed by the elements, so [1, 2, 3] printed will be “3 1 2 3”:

System.Out << p << "\n";

3 1 2 3

This variable is also mutable, so we can change the values of some elements, add elements to the end, insert and delete:

p << 4;
System.Out << p << "\n";
p << 5 << 6;
System.Out << p << "\n";
System.Out << p << "\n";
System.Out << p << "\n";
System.Out << p << "\n";
System.Out << p << "\n";
System.Out << p << "\n";

4 1 2 3 4
6 1 2 3 4 5 6
5 1 2 4 5 6
5 4 4 4 4 4
6 4 4 4 4 4 7000
1 7000

As the output above shows, vectors can also shrink. Vector instances can even have a count of zero elements, either as the result of operations or created from the get-go with zero elements. The latter case needs more attention, since you can’t just write [] to express an empty literal, because the compiler can’t infer the type of the elements. A few paragraphs ago I mentioned how these literals are just normal class instances and the class name is Vector<Int>, so you can instantiate one as a normal class, Vector<Int>{}, in order to get an empty vector instance. There is also a shorter syntax: <Int>[]. It is equivalent to the one before and allows you to create a vector of Ints with zero elements, an empty vector. I won’t explain today how and why this works, to keep the post short. But it is a useful short syntax to remember.

One interesting tidbit to note is that these empty vectors do not interact with the heap, so creating an empty vector is very fast and doesn’t allocate any RAM.

When declaring all the previous literals, the number of elements, the Length of the vector, was determined implicitly by the compiler counting how many values you provided. This number can be explicitly specified by using the syntax [number_of_elements: list_of_elements]. So [3: 1, 2, 3] is identical to [1, 2, 3], but this time we explicitly specified the number of items. The explicit number must be equal to the implicit number, so [2: 1, 2, 3] or [10: 1, 2, 3] will not compile.

What use is it then to be able to provide the item count if it must be the same as the actual number of elements? The secondary reason is safety. Let’s say you have a table that must be introduced verbatim in code; if you know how many elements there are, the compiler can help you find the error of leaving out an item or two.

But the primary reason is the ellipsis syntax, [number_of_elements: list_of_elements, ...]. The ... at the end of the sequence means to repeat the last element as many times as needed for the total element count of the sequence to equal the provided explicit count. This means that while [10: 1, 2, 3] is a compilation error, [10: 1, 2, 3, …] is equal to [1, 2, 3, 3, 3, 3, 3, 3, 3, 3]. You may be inclined to think that it is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], but as said, the syntax means repeating the final element.

But it is not a dumb copy or anything similar. In this context, “repeat” means evaluate the last element a number of times: execute the same code sequence multiple times. This works very well with literals that have a single item and a count (though they can have any number of items) and can be used to achieve a lot of interesting things, even some meta-programming. But now I’m getting ahead of myself. Going back to the repeated code execution/evaluation paradigm: [5: 1, ...] means evaluate 1 five times, [5: foo(), ...] means evaluate foo() five times, and [5: 1, 2, foo(), ...] means evaluate 1 once, 2 once and foo() three times (5 − 2). Here it is in action:

namespace org.z2legacy.ut.misc;

class VectorSample {
	def @main() {
		val a = [5: 1, ...];
		System.Out << a << "\n";
		val b = [5: foo(), ...];
		System.Out << b << "\n";
		val c = [5: 1, 2, foo(), ...];
		System.Out << c << "\n";
	}

	def foo(): Int {
		return dummy++;
	}

	val dummy = 100;
}

5 1 1 1 1 1
5 100 101 102 103 104
5 1 2 105 106 107

The fact that the last item is executed multiple times hints at how this feature can be used for meta-programming, but there is much more to this subject. And since I gave the example that [10: 1, 2, 3, ...] is not [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], one can write val i = 1; [10: i++, ...] to get [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].

One final but very important note: this evaluation, and all literal construction, is executed at compile time whenever possible. Even when it is delegated to run time, it is done as efficiently as possible. Take the [10: i++] example and suppose it is executed at run time. It will not result in an empty vector that grows to accommodate one element, with i++ computed in a temporary and the value copied into the vector, then the vector growing again to accommodate another i++ and so on. A single memory allocation happens, with the correct number of elements (10), and each element is not copy constructed (when possible) but constructed in place, so no extra copy constructors should be called. So if we have a class Foo with a default constructor, a copy constructor and an assignment operator, each with a side effect, and we write val v = [100: Foo{}, ...], this results in one Vector<Foo>{} constructor and 100 Foo{} default constructors. The default constructor’s side effect is executed 100 times, while the side effects of the copy constructor and the assignment operator, as well as their main effect (making copies), are not executed, since these methods are not called at all.

So how does this all function? Vector is a dynamic buffer with a Length and a Capacity, similar to std::vector. The Capacity property gives it an amortized growth rate. All vector types have Length and Capacity, but not all have them mutable. One special case is where neither is mutable, giving a vector with fixed Length and Capacity. Since accessing elements beyond Length is an error, such vectors are considered fixed length. If you pass such a vector as a reference to a function, the function will be able to change items, but not the length: you know how many items there are, but you can’t modify that number. If you pass it as const, the elements are read only and the Length remains immutable. This is the base case, the CArray, a class from the standard library not yet introduced.

At the other extreme is Vector, a class with mutable Length and Capacity. You both know the Length and can change it. Changing the Length might change Capacity too. And changing Capacity directly might allow for faster insertions if you know the number of elements that you are going to add to a vector. Passing such a Vector as a ref parameter will allow you to modify both the items and the Length and Capacity of the Vector.

So when designing an interface that needs a vector, the question to ask is: is the Length mutable or not? The answer determines the correct flavor of Vector. But as a general rule, Vector is the main and most commonly encountered flavor. Unless you have a good reason not to, always use Vector.

Z2C PT9.2 “Kitchen Sink” Released

For the last couple of weeks I’ve been looking at white-on-black console output, checking automated unit-testing results and frantically fixing bugs every time a test failed. But the output of the test suite has been stable and promising these last few days:

1386 out of 1386 tests have passed in a total of 330.129 seconds.

We are golden! Roll the tape with the release music…

So finally, 4 months late, and 7 months after the last release, a new version is finished! PT 9.2 is out! Normally, we like to keep releases smaller, fixing a set of features and working on it until the version is stable enough. But this time we did not manage that, and in the 4 months over the planned time, and even in the 3 months before, a lot of unplanned features managed to receive attention. Some are in the latest release, some are not quite done (still!) and are deactivated for this release, and some were so experimental that they were removed for good. So this is more of a kitchen-sink release, hence the code name.

Let us take a look at the main new features:

Reimplemented package system

The old design for the package system has been updated. Things are still pretty similar, but the new design is more robust and scalable and has more clearly defined roles and rules. In addition to this conceptual and design cleanup, the implementation has been redone pretty much from scratch.

Cache system for building

The new package system design needs to locate resources, and this is all the more important because the entire Z2 build process is controlled by the dependency manager. The dependency manager needs to locate resources that are later used to locate other types of resources for optimal builds. All these lookups are now helped by a cache.

This cache is only one of the needed features for really fast compilation times. The goal is for Z2 to compile faster than C++, even when using the C++ backend. The second pillar of such compilation times is binary packages. Binary packages are not implemented yet, but even the cache on its own helps a lot. I will detail the results in another post.

Redesigned vectors

The existing vector design has proven to be less than ideal and there has been an attempt to replace it for ages now. But vector design is really complicated, maybe one of the most complicated subjects in the language, so replacing it was no easy task.

The new design is much better and more natural to use. It is still not perfect so expect minor updates, but no more compatibility breaking major changes.

Function parameters are const

Z2 has inherited the concept of const-correctness from C++. But it does need some updates. If we can’t improve on it, maybe it is better to remove this concept from the language.

One of the improvements is a solution to the problem of mandatory const keywords in function parameters. As in C++, the de facto standard for passing parameters to functions is by const, and forgetting to add the const is a (minor and often harmless) error. Yet the syntax for this de facto standard is more verbose than the rarely used alternative.

Starting with PT9.2, all function parameters come in two flavors: const by default or reference parameters. Const parameters do not need to be decorated by a keyword and are immutable. Reference parameters are introduced by the ref keyword and they can mutate the passed-in object.

This change is experimental and at the same time quite controversial. But this is the optimal time to test its viability in practice, when the user base of the language is low.

Fancy logs

There are now new fancy HTML5 Bootstrap logs. Really fancy! With color coding, collapsible panels and navigation options.

Going over these logs can be a joy compared to simple non-navigable logs. They are also very slow to generate and, if I’m honest, not that useful, so they have been deactivated for this build. It is better to compile fast than to have Bootstrap-enabled logs.

Bug fixing

While a longer development cycle can introduce more bugs, there is also more time to fix existing ones. A ton of bugs have been fixed and Z2 has never been this stable. Probably another 4 months of bug fixing without adding any new features and I would call the compiler production ready.

Java backend

I’m mentioning this last because the new Java backend does not work. It tries to, but it fails. If it ever works, it might become a new backend. I’m not sure Java is a good fit for the Z2 object model, and that’s putting it politely.

So what are the next steps? Well, there are two sets of plans for what will make it into PT 9.3. I can’t tell exactly which one will make it though. And after 9.3, PT 10 will come out. This should be a really major and mature version.

But until then, even with the long development cycle, PT 9.2 still has some problems:

  • Code completion. This was developed and tested in the isolated context of a code editor, and it works decently there. But in the context of a full IDE it can sometimes be annoying, eating shortcut keys, popping up when not needed and so on.
  • Some const-correctness problems. The new changes to const parameters have introduced some const-correctness problems. These problems are hard to trigger, so they shouldn’t affect potential users, but they are there.
  • Bugs. So many bugs solved, but so many new features. While beyond the two problems mentioned above there are no known bugs in this version (though there are of course many incomplete features we know about), I’m sure this larger release has an above-average number of bugs.

In about two weeks, PT 9.2.1 will come out, addressing these and other problems!

PT 9.2 Preview #1

PT 9.2 is turning out to be a much larger release than anticipated. It includes a major rewrite of the packaging system, making it a lot faster, in hopes of getting the compiler closer to a production-ready state. These packaging changes will still take some time to finish, so I won’t talk about them for now, but other features will also make their way into the release, and I can talk about those!

To reiterate, some of the long term goals are:

  • greatly increase Z2/C++ interoperability
  • increase compilation speed
  • do the final disambiguation and language feature cleanup rounds

With the C++ backend, Z2 compiles down to plain and simple C++. We generally don’t talk about or show this resulting C++, since it is an ever-changing beast. The backend has multiple compilation options, and if you use a well-defined set of them, the compiler gives guarantees on what the C++ result will look like. But if you just use the compiler with the goal of creating executables, the actual form of the resulting C++ is decided on the fly, is implementation defined, and is generally tailored to be smaller and uglier for quicker compilation times. The compiler might even decide to skip all whitespace and output the whole code on 128-character-long lines (configurable). Or it might change all your names to encoded short strings (configurable; Base64 and other options).

This is why we talk about two separate entities. One is ad-hoc C++ code, meant to be quickly processed by a backend compiler; it is implementation defined and can vary randomly between readable and obfuscated. Ad-hoc code serves a single purpose: feeding a backend compiler in a final binary deliverable generation scenario, where you don’t care how the code looks, just what it does, and you need it compiled.

The second one is interoperability code. In this mode, the resulting C++ tries to look as close as possible to both your Z2 code and the equivalent hand written code if it were originally written in C++, not Z2, but with compromises to handle the differences and needs of both languages.

So today I shall show some of the resulting interoperability-mode code to demonstrate the new features. First, let us introduce a very simple test class that has a single field called Name of type Int, and a sample of its use:

class Test {
	val Name = 0;
}

val t = Test{};
t.Name = 7;
t.Name += 1;
t.Name /= 4;
System.Out << t.Name << "\n";

The snippet would of course print 2. In the past, if you had a plain class with a simple member you wanted unrestricted read and write access to, and no design document or other reason to expect this member to ever be read or written in a more complicated way, we would argue for the use of a variable over a property. If things later changed, you could then, and only then, change the variable to a property. And the result was always the same: OOP purists would object to the use of a public variable. They would suggest that you always use a property for public members, even when you are sure you will never have complicated, side-effect-based getters and setters:

class Test {
	property Name: Int {
		return name;
	}
	set (value) {
		name = value;
	}

	private {
		val name = 0;
	}
}

Z2 does not adopt a “one size fits all” approach to things like this. If you feel you should decide case by case whether each public field is a variable or a property, or instead go with a rule that all public fields are properties, Z2 lets you decide. Because, in the end, it might not even matter: the two versions of Test are identical, to the point that both the front-end and back-end compilers will strip away the property, leaving you with just a variable. The public API is the same for both versions. But to say exactly what happens, other questions must be answered first: is the build in debug or release mode, what are the optimization level and inlining settings, is the class intended for dynamically linked libraries, and so on.

Z2 will instead do just two things. First, it gives you the courtesy of keeping your property around when compiling in C++/Z2 interoperation mode, so your API looks nice and clean. Second, Z2 recognizes that while there are complex getters and setters out there, in this simple case, where the property is read-write and only updates a single variable, the current syntax is too verbose. So PT 9.2 introduces this new syntax, identical in meaning to the first:

class Test {
	property Name = name;

	private {
		val name = 0;
	}
}

Using this syntax, the compiler will “provide” you with a getter and a setter that affect the variable to the right of the = sign.

Now it is time to see the resulting C++ code:

class Test {
public:
	int32 name;

	inline Test() {
		memset(this, 0, sizeof(Test));
	}

	inline Test(const Void&) {}

	inline int32 Name() const {
		return name;
	}

	inline void Name(int32 value) {
		name = value;
	}
};

The conversion is convention based. Randomly outputted C++ code from Z2 can always be made to work with other C++ code, but the results might not be pretty: “autogenerated” code has a reputation of being difficult to work with. So a convention system is used to make everything look good and have predictable results. There is also some heavy but standardized compromising always in use, because the object model, calling conventions and other details are subtly different between C++ and Z2.

But overall the class looks nice and clean. I won’t discuss getting this code into header files for now. Instead, let’s focus on the class. It has the same name as in Z2, but you will notice that the name variable is public. This is one of the compromises I mentioned before. In C++, public/private/protected can affect your API/ABI compatibility, so by default Z2 sidesteps these potential troubles by using public. One added benefit is that if you change a field from private to public in Z2, the C++ code does not need to be recompiled. There is an option for turning on protected and private access modifiers, but it is off by default.

Another thing you’ll notice is the second constructor (I’ll talk about the first one later). This is another compromise. In Z2, everything is a class and there is no such thing as an implicitly uninitialized object: all Z2 constructors fully initialize the instance. But C++ can skip initialization, mostly for built-in types. The second constructor is present in every single class and is a NOP: it leaves your instance completely uninitialized. It is not meant for public use from C++ code, but it must be present to satisfy Z2 API requirements. So all classes have a constructor that accepts a const Void reference, and it can always be ignored because it is always guaranteed to do nothing. Nothing means a full NOP: all members, at any depth, remain uninitialized, even virtual tables and other internals. Using an instance resulting from this constructor is a guaranteed error. Don’t use it!

Except for the automatic getters and setters, which by default use the “short getter/setter” naming convention (there is an option for this; with the long convention the methods are called GetFoo and SetFoo), there is the question of the first constructor. It uses memset. In the post "Class constructor performance foibles?" I detailed the problems. Cutting-edge compilers, especially Clang, are great at consolidating multiple small fields that are initialized with 0, using SSE and every trick in the book to create the fastest constructors possible. In these compilers, using memset instead of initializing each field has the same performance and results in the same ASM code, because memset is treated as an intrinsic. Other compilers are not that great at consolidating values, especially 8-bit ones, and are routinely outperformed by memsetting the instance. All supported compilers have been tested, and using memset is always as fast as or faster than setting all fields in order; the same applies to initializer lists. In conclusion, a memset is as fast as the fastest method of setting everything to zero on all supported platforms. So we have gotten around to adding this optimization to the compiler, and by default you will see memsets in constructors whenever possible. But this optimization is intentionally not aggressive: it is only used when all the fields in the class are initialized with 0 bits. Otherwise, fields are initialized as usual. It handles pointers, as we can see with String:

String::String() {
	this->data = nullptr;
	this->length = 0;
	this->capacity = 0;
}

String::String() {
	memset(this, 0, sizeof(String));
}

It is also aware of types it has already optimized, so even if a non-POD class is embedded in another class, the whole thing can be optimized and flattened down to a single memset, instead of the host class calling the child class’ constructor (which memsets) and then doing a separate memset for the rest of the members:

SystemCPU::SystemCPU(): Vendor(_Void) {
	new (&this->Vendor) ::String();
	this->MMX = false;
	this->SSE = false;
	this->SSE2 = false;
	this->SSE3 = false;
	this->SSSE3 = false;
	this->SSE41 = false;
	this->SSE42 = false;
}

SystemCPU::SystemCPU(): Vendor(_Void) {
	memset(this, 0, sizeof(SystemCPU));
}

And finally, it also handles classes with virtual methods correctly:

Stream::Stream() {
	memset(&this->pos, 0, sizeof(Stream) - __Z_MEMBER_OFFSET);
}

For classes with virtual members, the memset starts at the offset of the first field, making sure not to nuke the vtable. The actual memset that is generated varies with the backend compiler and class layout, so don’t take the code as set in stone, only what it does: if a constructor logically ends up writing only 0 bits into the entire instance, barring vtables and whatnot, an appropriate memset optimization kicks in, guaranteeing that constructors on really old compilers are more competitive with the latest Clang.

Next, let us look at one of the samples from the org.z2legacy.ut package, in the access folder:

namespace org.z2legacy.ut.access;

using org.z2legacy.ut.access.Foo;

class FailPrivate01 {
	def @main() {
		val p = Foo{};
	}
}

The contents of the sample are not important here: it just tests that the private constructor of Foo is indeed not accessible. The new minor feature is that you can now write:

namespace org.z2legacy.ut.access;

using Foo;

class FailPrivate01 {
	def @main() {
		val p = Foo{};
	}
}

When the using statement is followed by an unqualified class name, it will always assume that it is in the same namespace as the one specified in the namespace statement, so using org.z2legacy.ut.access.Foo means the same as using Foo. This may lead you to the question: how do you handle classes that are not within a namespace? The short answer is: you can’t.

But fret not! We removed the ability to have classes outside of namespaces! Not for the above-mentioned reason, but because, try as we might, as soon as packages started to grow, public namespace pollution became more and more of an issue. If anybody can add names to the public namespace, it is only a question of time before two different packages define two different classes with the same name. With mandatory namespaces, this issue is greatly lessened.

So starting with Z2 PT 9.2, all your classes must be added to a namespace. This is the first breaking change in the language, but the language is very young, so it should not be a problem. It was either adding a breaking change, or realizing years from now, when it is too late, that namespace pollution is indeed a severe problem affecting actual code.

I’ll go in and change the content on the site to reflect this change.

As a tangentially related curiosity, Z2 does away with declaration order being meaningful where it is not needed, as stated before. This is great for classes and methods: one class can have access to another class that is defined later in the same file. You no longer have to babysit declaration orders; you only care about public or private access rights. But this also affects the namespace statement, so the above example is 100% identical to the case where the namespace statement is not the first line in the file, but placed somewhere more awkward, like:

using Foo;

class FailPrivate01 {
	def @main() {
		val p = Foo{};
	}
}

namespace org.z2legacy.ut.access;

You can have only one namespace statement per source file, so all the classes in a file must be in the same namespace. It makes sense for it to be at the beginning of the file, maybe even the first statement, since it affects the whole file, but you can place it anywhere outside of a class definition.

The error reporting within the compiler has been upgraded. A new component was introduced to centrally handle all error reporting. This will also allow for internationalization of error messages and the assignment of unique error codes, though the list is not agreed upon yet. Additionally, some error messages have been improved, as one can see in this command line screenshot:


To review the contents of this preview, I talked about:

  • new shorthand syntax for properties
  • memset optimization for zeroing constructors
  • new shorthand syntax for the using statement
  • new error reporting component

At least one more preview and maybe even a minor release will be created before PT 9.2 is released.

Plans for 2017

So the Linux update sure is taking longer than planned.

Part of the issue was scheduling and workload, the last couple of months being particularly busy. But the real problem is that we were treading uncharted waters with the Linux release, which caused us to effectively spin our wheels some of the time.

So when an approach is not working, you can keep at it and see where it leads, or you can try something new. We’ll try the latter and hopefully this will help. No longer will we dedicate most of the effort to building a single working Linux version to the detriment of compiler development; instead, the Linux port becomes a background task, to be finished and released as soon as it is done, while we keep a good release schedule for the compiler and standard library in the meantime.

Second, we realized that we will never be able to cover a meaningful subset of Linux distributions with official releases, so most of the Linux development effort was centered on making the compiler open source, so that anybody could compile it for the rest of the platforms if a version was not available. Currently, the compiler is mostly written in C++, but some parts are written in Z2, and the bridge between them is a horrible collection of hacks. So we attempted to port everything to Z2 and release only that, with no hacks. This meant porting not only the compiler but also a whole lot of supporting C++ library code. This was far too substantial a task to finish in any decent time frame.

But this was actually a bit of a self-defeating effort. Z2 is designed with two backends in mind, a C++ one and an LLVM one, and the C++ backend can offer excellent compatibility, allowing Z2 to be called from C++ code and the other way around (with some limits, of course). So we will focus the effort on making this natural compatibility production ready, rather than on porting. After this, new parts of the code will be written only in Z2, and every time a bit of C++ code gets a non-trivial update, we will rewrite that bit in Z2, one function at a time. Thanks to the designed compatibility, this should work without any of the hacks that are currently needed. Slowly, over time, the number of Z2 functions will go up and the number of C++ ones will go down, but in the meantime builds will remain hybrid. This way we don’t need to wait for a full port before we can open-source anything: every future version will ship with the parts that are written in Z2 included in the package.

But having everything open-sourced is only the beginning. It won’t really help potential users, because as a fledgling language there is an almost 0% chance of getting included in any Linux distribution. So on all platforms that are not officially supported, you will have to compile the compiler yourself, and it won’t be that simple, since bootstrapping is needed. But Linux users willing to try a new programming language should have few problems with the process.

With an ever-increasing projected amount of Z2 code in the compiler, we will soon have to develop, fix bugs and debug Z2 code in a real project: the compiler itself. Doing this in C++ is easy because the tools are there and work well. Doing it in Z2 is harder, so ZIDE and the rest of the tools must become top notch. In particular, the current SCU output method, where methods are reorganized into a single file in an order that is best for the compiler but not easy for a human to use, won’t do. We’ll keep this method around, but a new, more natural and traditional unit-based compilation process will be introduced.

This will be a challenge, since the goal is for Z2 to compile faster than C++.

In January we want to release PT 9.2, a version that will support this new compilation method. It will also have a higher percentage of Z2 code in the compiler, this time open-sourced. The library will get some additions. We also noticed room for improvement in one of the core tenets of the language: in general, there should be one way/syntax of doing a single thing, and if there is more than one, this should be particularly well explained and documented. The language is still not as lean and mean as we would like, so some parts are being removed. This should have pretty much zero impact on the library and the capabilities of the language; it will just become less redundant and have a more focused syntax. Part of these updates will be included in 9.2, including the new, much improved and less ambiguous way of using vector literals.

Once this process is fully finished, we will release further PT 9.x versions. These will bring stronger meta-programming capabilities and some other features. PT 10 should bring full Linux support, and before 11, all source code, including the compiler, should finally be open-sourced.

Over the holidays I’ll try to write a new Crash Course. I am also experimenting with a larger written piece, titled “Language specs by minigun”, where I’ll try to present the language using short phrases and examples, fired at breakneck speed, summarizing pages from the spec into a couple of lines at a time. And hopefully once that piece is done, it will go through the hands of an editor, so we can improve on the generally low level of written English on this blog.