Z2 PT 9.0 Available for Download

Here it is, our first pre-alpha release, right on time!

Well, one day later: when setting the release date, nobody checked the calendar, and the 15th of August is a holiday. No problem, the 16th is just as fine. And this isn’t even the only mistake related to this release: it turns out that WordPress.com doesn’t allow you to host ZIP files and make them available for public download. So the release package will have a temporary home until a more permanent location is found:
Download z2-pt-9.0.zip for Windows 32.

As detailed in a previous post, this first release is Windows only. Future releases will support Linux too. Sorry!

So what is the status of the release? We estimate it is 3-4 weeks behind where it was planned to be, but the delay doesn’t affect it that much. It is pretty stable, all things considered. And do note, this is a pre-alpha, so don’t expect the compiler or ZIDE to be production ready. A bug-fixing patch is planned two weeks from now.

The delay affected the compiler and there are two known bugs in this release. For starters, aliases are so bugged that we disabled them. Aliases are a very useful feature, but one more widely used in larger projects, and our small-scale testing didn’t reveal the bugs until it was too late. With the current implementation, there is no easy fix. What we need to do is replace this implementation with the class lookup mechanics we use everywhere else in the compiler. The reason this hasn’t been done yet is that this class lookup code is scheduled for an update in PT 9.1, so working on it now would have been in vain. The second bug is related to function overloading. The last time we refactored function overloading, we believed it was sufficient and handled all cases. It turns out there are still some issues. This will take some time to fix, so expect it in either PT 9.1 or 9.2.

The standard library came out pretty much as planned, but the two bugs above prevented us from including the Color classes: they exhibit the overloading bug and also use aliases. There also wasn’t enough time to include three more container classes, including Vector, the most commonly used of them all. These are scheduled for 9.1.

ZIDE was barely affected by the delay. Some minor polish could have been performed, especially related to directory renaming, but it is fine for now. PT 9.1 will introduce a very early code-completion feature.

That’s about it for this post. A new post, or even better, a static page, detailing what the package contains and how to use ZIDE is in the works. That page will then be used as reference for all releases from now on so we don’t have to repeat basic instructions.


Developer Preview on August 15th

With this project, our goal is not just to create a programming language and a standard library for it, but also to do a lot of experimenting, to figure out what works best and why. This is why we followed up on a very interesting and aggressive optimization technique for containers. It couldn’t be applied in all cases, but when it could, it was pretty much the fastest possible way of getting a new instance into a container. We were pretty sure we were onto something quite valuable here, but… in the end it turned out not to work at all well with exceptions. Now, the technique is not dead and buried. It can still be used when we detect that the code can’t throw, but this pushes the feature from a valuable top-tier optimization to a niche optimization you may or may not do when implementing the last features of an otherwise mature compiler.

The moral of the story is that at this stage we can’t afford to spend time on such features. We are two months behind schedule and there are three draft posts just waiting for their final edits before they can be published. Additionally, because of our perfectionism, we haven’t managed to commit anything to Git that is not related to UT. So we need to refocus and get something out there.

We will release a pre-alpha package labeled a Developer Preview on August 15th. A set date is needed in order to focus the development process on high-yield areas, not niche optimizations. But the date is so close that this Developer Preview will not be as ambitious as it could be. Future previews will be more substantial; this one will have an intentionally reduced feature set and its goal is to be a working, downloadable and testable prototype. I will now detail all the corners that will be cut in order to make this deadline.

 

The focus will be on the language

We could try to release the language as complete and stable as possible at the expense of the standard library. Or we could focus on the standard library, since the language is in pretty good shape right now. Or maybe land somewhere in the middle.

It was decided that for the first preview, the focus will be on the language. During the next month, the language and the compiler will receive most of the attention and the library will only get small fixes and maybe a couple of new methods here and there.

 

Only the C++ backend will be supported

As you may know, the compiler is designed with flexible backends and the plan is for it to eventually ship with three: LLVM, C++ and C. The LLVM backend is planned to be developed only after the C and C++ ones, so it was never going to be included in the first release, but we decided to also exclude the C one. It is significantly easier to polish a single backend per release and it will probably end up being of higher quality this way. We are not 100% sure yet, but the C backend will probably be pushed back until after the LLVM one is complete.

We’ve been polishing the C++ backend for two weeks now and the work has greatly paid off. The old backend was designed around several profiles, each balancing the language features used, performance of the generated code, compilation speed and code aesthetics. One profile generated fast but extremely ugly code; another generated extremely beautiful code meant for C++ programs that call the Z2 standard library. The new backend has done away with this: all generated code is now decent looking and carefully selected to compile and behave the same on all supported compilers, there is a minimum level of aesthetics, and the code looks a lot more natural, using a more C++-like style. An additional library mode goes the extra mile.

 

First version will only support a single OS

As with the multiple backends, focusing on just one OS will allow us to give that version extra polish. So the first version can be either Linux or Windows. Unfortunately, here we don’t get to choose: most of the standard library is platform agnostic, but some system calls are made when printing to the console or using synchronization, and these are currently implemented using the Windows API. So the first version will target Windows only, and only 32-bit builds. Developer preview #2 will support 32 and 64 bit builds and the Linux version.

Just to be clear, Z2 is not tied in any way to Windows. It even works today on Linux. ZIDE is built using a cross-platform open-source GUI toolkit, so there are zero issues with it. The compiler is likewise portable and does compile 100% of the test cases under Linux. Unfortunately, it will fail to link the resulting binaries under Linux because of the WinAPI calls that need to be replaced. Were the release date two months from now, the first version would support Linux and Windows, but since it is just one month away, only Windows will be included. In 2 to 4 weeks after developer preview #1 is out, both Windows/Linux support and 32/64 bit support will be finalized and stable. After that we shall be targeting Mac too.

 

Support for compilers out of the box will be limited to MSC and GCC

The C++ backend of course requires a C++ compiler installed on your system. We believe that it is vital that Z2 works out of the box, so you will not be left to your own devices in invoking this compiler. Z2 will autodetect the following compilers:

  • TDM GCC
  • Visual Studio 2010/MSC10
  • Visual Studio 2012/MSC11
  • Visual Studio 2013/MSC12
  • Visual Studio 2015/MSC14

For GCC, TDM GCC will be included in the package so that you can compile out of the box; this is the default build method. For Visual Studio versions, the auto-detection should pick up both the commercial and the Express/Community editions. You do need to have the Platform SDK installed: if you can compile WinAPI programs with your installed compiler, you should be fine. It will not auto-detect Visual Studio “15”/MSC15, Microsoft’s own Visual Studio developer preview, because it is not out yet. It will be supported when it is out, provided it still supports traditional Windows executables. There are no plans to support Windows 10 styled universal apps (UWP).

The auto-detected methods are stored in the “buildMethods.xml” configuration file, the one file used by all the tools inside the Z2 package that need a compiler. So if your compiler is not auto-detected, or you want to use the same version of GCC/MinGW/TDM that you have installed and normally use, you can go in and edit that XML.

 

Backends will only be used in SCU mode

Well, this one is actually not true; a bit of a white lie. First, let me explain what SCU is. It stands for Single Compilation Unit. Manual SCU is pretty difficult to use, but automatic SCU can have a lot of benefits. Initially, Z2 only supported automatic SCU. Since then we have transitioned to multiple compilation units that are still automatic, which has even more advantages. So what we mean by SCU mode is that it is automatic and you have no control over your compilation units. Perhaps SCU should retroactively be made to stand for “Smart Compilation Units”.

Now, in a more mature Z2 compiler you will optionally have full control of your compilation units and be able to deactivate or fine-tune the automatic system. This feature will not be supported in dev preview #1, so you will have to make do with SCU.

 

There will be bugs!

In an ideal world, the package you get on the 15th will have zero bugs. But that is highly unlikely. We’ll fix as many bugs as we can, but some will get through. Also, some language features might not make it in their final form. There will probably be very few breaking changes in the future, but this is not a guarantee. Developer preview #1 is pre-alpha after all!

CrashCourse – 006 – Templates and relationals

Last time I introduced the Intrinsic class together with some handy functionality related to the relational operators. But since Intrinsic is now the home of operations like Clamp and Max, and these operations are defined by templates without filtering the template parameters they can receive, anything can be passed to these functions. Even things that are not comparable!

So let’s see what problems this can cause and how to solve them. Let us consider a simple Version class that holds the major, minor and revision number of some product. A very simple class, not meant to be functional, just a simple example:

class Version {
	val major = 0;
	val minor = 0;
	val revision = 0;
	
	this(maj: Int, min: Int, rev: Int) {
		major = maj;
		minor = min;
		revision = rev;
	}
}

To break up the monotony of large blocks of text on the blog, I shall show a screenshot of the typical ZIDE workflow that is used when developing the samples for this blog and anything else Z2 related:

[Screenshot 006less01: the sample open in ZIDE]

A quick tour of the sample: on line 3 we define our simple Version class and on line 15 we define a class with a @main slot method to test the Version class. On lines 17 and 18 we instantiate two Version variables.

On line 20, we test the equality of these two variables. As described in CrashCourse – 003 – What you get for free, Z2 will figure out common tasks for you from a small pool of such tasks. Simple, straightforward comparison of value types is one of them. The compiler takes one look at Version and has no problem testing the equality of two instances. You as a programmer shouldn’t have to write such easy, boring code that only compares 3 Ints. The same goes for line 21, where we test for inequality. These two lines work out of the box because aggregate value type equality is a well defined, unambiguous operation. Such methods, provided automatically by the compiler, are called automatic methods. They are always provided, but you can suppress this for a class/method combination if not needed.

And this is the reason why line 22 fails to compile. On this line, we don’t use operator == or !=, but operator <. The “less” operator can’t be defined unambiguously for aggregate types, so the compiler can’t resolve v1 < v2. This is what the error says. Maybe the error message could be improved though.
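For readers who can’t make out the screenshot, the test code looks roughly like this; it is reconstructed from the description above and the later samples, not copied from the actual source (the // comments are just annotations for this post):

class Test {
	def @main() {
		val v1 = Version{1, 2, 7000};
		val v2 = Version{2, 0, 1};
		
		System.Out << (v1 == v2) << "\n";   // automatic @eq: compiles and prints false
		System.Out << (v1 != v2) << "\n";   // automatic @neq: compiles and prints true
		System.Out << (v1 < v2) << "\n";    // no @less defined yet: this line fails to compile
	}
}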

The solution for this compilation error is to provide a < operator, since the compiler can’t provide one for us. In Z2, method names that start with @ are called slots; they are just regular methods, but in some contexts the compiler will call them implicitly, like @main. The calling mechanism of slots makes them perfect candidates for defining operators in classes. This was deemed a better solution than using a dedicated operator keyword plus the literal operator symbol, as other programming languages do. It is easier to read, type and call manually, and it solves some additional problems, like the pre and post ++ operators.

The < operator can be defined using the @less slot:

class Version {
	val major = 0;
	val minor = 0;
	val revision = 0;
	
	this(maj: Int, min: Int, rev: Int) {
		major = maj;
		minor = min;
		revision = rev;
	}
	
	def @less(const v: Version): Bool; const {
		System.Out << "call " << class << '.' << def.Name << " of class " << @neq.class << "\n";
		if (major < v.major)
			return true;
		else if (major > v.major)
			return false;
		else  {
			if (minor < v.minor)
				return true;
			else if (minor > v.minor)
				return false;
			else
				return revision < v.revision;
		}
	}
}

This updated Version class defines the @less slot. Now, when we do v1 < v2, the @less method will be called on the v1 instance and v2 will be passed in as a parameter. This is of course just syntactic sugar, since @less is just a normal method with a name that starts with @, so it can be called normally: v1 < v2 and v1.@less(v2) are equivalent and result in the same machine code. The actual implementation is not important: I used a simple implementation off the top of my head and it may not be the optimal way to compare versions in production code. It is just a sample. This method could be implemented so that it does not behave like a relational less operator, even though it represents that slot; it is a very good idea not to do this, to avoid major confusion.

One strange thing about this method is line 13, the System.Out statement. It is there only so this sample can verify that the method actually gets called; normally you wouldn’t add such tests to real code. It could have been a simple System.Out << "Hey, @less has been called";, but I opted for this more complicated statement to demonstrate some reflection. Z2 has both compile time and run time reflection, and these features combined with templates can be quite powerful, but in this sample we only use reflection for basic debugging. First we print out class. This is equivalent to this.class and returns the compile time class information for this, a reference to the current instance. def is similar to this, but it does not represent the current instance; it represents the current method. Like class.Name, def.Name returns the name of the current method, and similarly to this.class, def.class returns the class information for the current method. In Z2 everything is an instance of a class, even methods; the class of all methods is Def. But instead of printing out def.class, I printed out the class of another method: @neq.class. It is the same class, and I used it in the sample both to demonstrate that the classes are the same and that the automatic methods are there, even if it is not obvious that they are: @eq is the == operator and @neq is the != operator. So the output of this program after the fix will be:

false
true
call Version.@less of class Def
true

With @less working, we can now try a much more complicated sample, where we use all relational operators, Min, Max, Clamp and so on:

class Test {
	def @main() {
		val v1 = Version{1, 2, 7000};
		val v2 = Version{2, 0, 1};
		
		System.Out << (v1 < v2) << "\n";
		System.Out << (v1 > v2) << "\n";
		System.Out << (v1 <= v2) << "\n";
		System.Out << (v1 >= v2) << "\n";
		
		System.Out << "Min: " << Intrinsic.Min(v1, v2) << "\n";
		System.Out << "Max: " << Intrinsic.Max(v1, v2) << "\n";
		
		val v3 = Version{1, 0, 0};
		Intrinsic.Clamp(v3, v1, v2);
		System.Out << "Clamp: " << v3 << "\n";
		
		val v4 = Intrinsic.Clamped(Version{7, 5, 6}, v1, v2);
		System.Out << "Clamped: " << v4 << "\n";
	}
}

call Version.@less of class Def
true
call Version.@less of class Def
false
call Version.@less of class Def
true
call Version.@less of class Def
false
call Version.@less of class Def
Min: 1 2 7000
call Version.@less of class Def
Max: 2 0 1
call Version.@less of class Def
Clamp: 1 2 7000
call Version.@less of class Def
call Version.@less of class Def
Clamped: 2 0 1

The interesting part of this sample starts on lines 6-9. We only defined operator <, but we can use >, <= and >= as well. Think of it as the compiler making up for the fact that it couldn’t provide a free < operator. If you have @less defined, but not @more (the > operator), the compiler can still handle > by swapping the two operands: v1 > v2 is compiled as v2 < v1. And since we provided @less, and @eq, the equality operator, is an automatic method, the compiler can use them both to provide the <= and >= operators, called @lesseq and @moreeq. Once all these operators are defined, manually or automatically, you can use all the stuff from Intrinsic. The same applies when you have @more defined but not @less.

This part of the language design may be a bit confusing at first, so let me reiterate the rules:

  • @eq (==) and @neq (!=) are automatic. You get them for free, but you can of course override them and do something different. For POD types you rarely need to, but for non-POD types and types that embed pointers, you may need to provide a better implementation, since the automatic one might not do what you need.
  • @less (<) and @more (>) are not automatic. You need to define at least one of them! If you define both, each is used as appropriate. If you define only one, the other is resolved by swapping the operands around.
  • @lesseq (<=) and @moreeq (>=) are automatic only if you define at least one of @less and @more. Once you define at least one of those two, you get all four. You are free to override @lesseq and @moreeq like the others, and sometimes it is worth it from a performance point of view: to do <=, the compiler may need to do both < and ==, and it is sometimes possible to implement <= in a more efficient way directly (see the sketch after this list).

So to reiterate, in order to get full relational operator coverage, you need to define either @less, @more or both!
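To illustrate that last point about overriding, here is a sketch of a hand-written @lesseq for Version, written in the same slot style as @less above; it is not part of the actual sample and only shows the shape such an override would take:

class Version {
	// fields, constructor and @less stay exactly as defined earlier
	
	def @lesseq(const v: Version): Bool; const {
		if (major < v.major)
			return true;
		else if (major > v.major)
			return false;
		else if (minor < v.minor)
			return true;
		else if (minor > v.minor)
			return false;
		else
			return revision <= v.revision;
	}
}

With this in place, v1 <= v2 would call @lesseq directly instead of being synthesized from @less and @eq.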

In the Git repository you can find these samples and more, as part of the daily unit testing.

With this, the very basics of the core numerical types are covered. Next time I’ll use this as a jumping-off point for introducing vectors!

CrashCourse – 005 – Int and Intrinsic

Last time I wrote about the basics of the Z2 library using an older and shorter version of the Int class. It is an archetypal value type and behaves similarly in a lot of languages, so it is easy to understand. I described how it handles conversions and operators using intrinsic functionality, how one can use constants to allow a class to offer some basic information about its value range and showed a few methods and properties.

The design looks viable, but it has a few problems. It is easy to see them once you try to expand the library by adding a few more basic types. Just adding a single class, like Double, the only dependency of Int in this sample, would see us repeat the same code with minor changes. Defining the constants each time makes sense, since they have different values. But how about some methods, like GetMin and GetMax? Sure, they are short, and having them copied over into each class, including third-party classes, is not a big issue, but surely there must be a better way.

This is where intrinsics come in! Last time we talked about two types of intrinsics: conversion constructors and operators, both in the context of numerical types. These represent the highest level of intrinsic functionality: they just exist and are part of their respective classes without any formal element to hint at their existence. But there are more traditional ways to access intrinsic functionality, the main one being the Intrinsic class. This is a class with only static methods, offering a wide set of common functionality. And since this functionality is accessed using normal methods in a normal class, it becomes easier to gain awareness of what is available.

Determining minimum and maximum values is an example of such functionality. Intrinsic.Min will return the minimum of the provided parameters. Instead of using:

5.GetMin(9);

…you now use the much more natural syntax of:

Intrinsic.Min(5, 9);

This approach has multiple advantages beyond the already mentioned more natural syntax. It solves the problem of having to repeat the body of GetMin in each class: Intrinsic.Min is a template method, so it only needs to be defined once and works with all types. Additionally, while some methods inside the Intrinsic class don’t have a visible implementation, Min does, and it can be useful to see what it does. And finally, this method and its counterpart, Max, are designed to work not on individual values, but on value providers, so you will be able to pass them any combination of containers.
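As a small sketch of how this looks in a full program, using the same Test/@main pattern as the other samples in this series:

class Test {
	def @main() {
		System.Out << Intrinsic.Min(5, 9) << "\n";   // prints 5
		System.Out << Intrinsic.Max(5, 9) << "\n";   // prints 9
	}
}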

With this first change, we eliminated two methods not only from Int, but from all comparable value types in the library. What about Clamp? First, let us ignore Intrinsic and focus on naming conventions. During the development of the library we introduced a convention related to actions that can be applied to instances: these actions are implemented using verbs. A verb in its base form describes a mutating action, one that modifies the instance. A few examples: Add, Insert, Delete, Clamp, Sort and so on. Naturally, these methods can’t be called on const instances. Verbs in the past tense do the same thing as the base form, but do not modify the instance, instead returning a new instance and leaving the original unchanged. The same examples: Added, Inserted, Deleted, Clamped, Sorted and so on. This is just a convention and there is no obligation for third parties to respect it. So using this convention, our Int class should have two methods. If the variable a is an Int with value 5, a.Clamp(10, 100); would modify a, clamping it to the range 10-100, in this case making it 10, while a.Clamped(10, 100); would leave a as 5, but return 10. Additionally, a.Clamp(10, 100); and a = a.Clamped(10, 100); are equivalent. This holds as a general rule, with foo.Bar(); being equivalent to foo = foo.Bared();, but the former may or may not be more efficient, depending on what operator = does and the quality of the compiler’s optimizer.

So using this convention, our second version of Int would have two methods instead of one: Clamp and Clamped. Which leaves us with the same problem: two methods that are almost always identical, having to be copied over to a bunch of classes. Intrinsic solves this again, by providing both a Clamp and a Clamped method:

class TestClamp {
	def @main() {
		val a = 5;
		Intrinsic.Clamp(a, 10, 100);
		
		System.Out << a << " " << Intrinsic.Clamped(-5, -100, -10) << "\n";
	}
}

10 -10

This solves the problem, but there is more to it. 0.GetMax(-1) wasn’t the most natural syntax, but a.Clamp(min, max) is. In some cases we want a class to have a “clamp” method independently of the Intrinsic class. We could just add the method to such classes, ignoring the code repetition. But there is a better way: method aliasing! In Z2, a method can be an alias for another method. Their parameters must be compatible and there are a few other requirements too, which I won’t describe right now. Luckily for us, parameter compatibility includes the case where a non-static method of a class Foo is an alias of a static method from another class with N + 1 parameters, where the first parameter is of class Foo. Using method aliasing, we can add only the signature of the method to classes and let the compiler forward the call to another method, with zero performance overhead. Using this, we can add the following two methods to Int:

	def Clamp(min: Int, max: Int); Intrinsic.Clamp;
	
	def Clamped(min: Int, max: Int): Int; const Intrinsic.Clamped;

Int.Clamp(Int, Int) is now an alias for Intrinsic.Clamp(ref Int, Int, Int).

Int.Clamped(Int, Int) is now an alias for Intrinsic.Clamped(const Int, Int, Int).

I shall talk more about parameters in a future post, including how ref works, but for now it is important to understand that these are just aliases. The parameters match up and are compatible, and when you call Int.Clamp, the compiler actually generates code for a call to Intrinsic.Clamp. An alias is just a formal way to say “hey, I’d like to add a new method to an interface for some purpose, while leaving the heavy lifting to someone else”. The method names do not need to be identical; they are identical here because it makes sense, but the alias name can be anything.
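Assuming the two aliases above are part of Int, calling them looks just like calling regular methods; a short sketch, not production code:

class Test {
	def @main() {
		val a = 5;
		a.Clamp(10, 100);        // compiled as a call to Intrinsic.Clamp(a, 10, 100)
		
		System.Out << a << " " << 255.Clamped(0, 100) << "\n";
	}
}

The first call clamps a in place to 10; the second leaves the literal untouched and prints the clamped value, 100.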

Now it is time to see our second version of the Int class:

namespace sys.core.lang;

class Int {
	const Zero: Int = 0;
	const One: Int = 1;
	const Default: Int = Zero;

	const Min: Int = -2'147'483'648;
	const Max: Int = 2'147'483'647;

	const IsSigned = true;
	const IsInteger = true;

	const MaxDigitsLow = 9;
	const MaxDigitsHigh = 10;

	property Abs: Int {
		return this > 0 ? this : -this;
	}

	property Sqr: Int {
		return this * this;
	}

	property Sqrt: Int {
		return Int{Double{this}.Sqrt};
	}

	property Floor: Int {
		return this;
	}

	property Ceil: Int {
		return this;
	}

	property Round: Int {
		return this;
	}

	def Clamp(min: Int, max: Int); Intrinsic.Clamp;
	
	def Clamped(min: Int, max: Int): Int; const Intrinsic.Clamped;
	
#region Saturation

	this Saturated(value: Int) {
		this = value;
	}

	this Saturated(value: DWord) {
		this = value > DWord{Max} ? Max : Int{value};
	}

	this Saturated(value: Long) {
		if (value > Max)
			this = Max;
		else if (value < Min)
			this = Min;
		else
			this = Int{value};
	}

	this Saturated(value: QWord) {
		this = value > QWord{Max} ? Max : Int{value};
	}

	this Saturated(value: Double) {
		if (value > Max)
			this = Max;
		else if (value < Min)
			this = Min;
		else
			this = Int{value};
	}

#endregion
}

We can see the changes from version 1: GetMin and GetMax are gone, replaced with calls to Intrinsic when needed, Clamp is now an alias to Intrinsic.Clamp and we added Clamped. Additionally, a new section has been added to the class to handle saturation. Z2, as a systems programming language, is designed to have rich and performant numerical processing capabilities. Things like clamping and saturation are considered common tasks and as such receive full support. Saturation is a lengthy section that will get repeated in multiple classes, but here we consider this not to be a problem, since third-party value types will generally not offer generic saturation support, and covering the basic numerical types is sufficient. This section is surrounded by the #region/#endregion tags, a purely syntactical construct that allows you to create logically related blocks in code, as a tool to facilitate organization.

And finally, this version 2 has one additional change from version 1, remarkable not for adding something, but for omitting something that was planned to be added and ultimately wasn’t. Z2 supports bit rotation, not just bit shifting. This is supported through the Intrinsic class, and some time ago we had two aliases for it in Int:

	def GetRol(bits: DWord): Int; const Intrinsic.Rol32;

	def GetRor(bits: DWord): Int; const Intrinsic.Ror32;

During the design process it was decided that bit rotations are useful enough to be fully supported, but not common enough to have aliases for them in Int, so these two aliases were eliminated from all core numerical types. If you need bit rotation, you can use Rol8/Rol16/Rol32/Rol64 and Ror8/Ror16/Ror32/Ror64 directly from Intrinsic.
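A quick sketch of what calling them directly might look like; note that the parameter order (value first, then bit count, as hinted at by the removed aliases) is my assumption and not something spelled out here:

class Test {
	def @main() {
		// assumed shape: Intrinsic.Rol32(value, bits) rotates a 32-bit value left by bits
		System.Out << Intrinsic.Rol32(1, 4u) << "\n";
		System.Out << Intrinsic.Ror32(16, 4u) << "\n";
	}
}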

This second version of Int, together with Double and Intrinsic have been committed to a branch in GitHub. The main branch also has some associated UT.

Next time we’ll investigate how this generic solution for clamping and other operations works with third party classes.

GitHub repository is up

The official GitHub repository is up and can be found at: https://github.com/MasterZean/z2c.

For the first phase, this repository will hold the standard library source code and some other related code, like unit testing code, documentation and benchmarks. The license is “Apache License Version 2.0”. To be honest, I have personally studied licenses in my off-time for two weeks and have reached the conclusion that not only do you need to be a lawyer to truly discern 100% of the real-life implications of open-source licenses, you also need to consult with other lawyers. So what I’m saying is that while we do like the general principles behind open source and we want to open-source the code, we are not married to, nor do we feel strongly about, any individual license offering, including Apache License Version 2.0, which may be transitory. Additionally, choosing the absolute best license at this point is beyond our means.

A few first commits were made to the repository, but for now only UT code has been added. And not the interesting UT code, but the boring kind. If you wanted to show the language to somebody through code, UT code would normally be a good place to start, since it is a bunch of relatively short snippets, each showing off and testing some language feature, sometimes in isolation, sometimes testing how features flow together. So UT code can be interesting, just not the code we committed. With the amount of refactoring in PT7, things broke often and in non-obvious ways, especially when it comes to function overloading. So we added about 50 new tests, all for single parameter overloading, creating very complete coverage of the numerical types, so that we have some measure of security that overloading never breaks again. Probably about 10 tests would have been sufficient, but maybe 50 is safer.

We’ll continue to add tests to the “master” branch of the repository, but probably not 50 in one go. It is best to add enough tests at once to reasonably cover one small feature or API element at a time.

A branch has been created with the code from “CrashCourse – 004 – Building an Int“. The next two posts in this series will evolve a few classes closer to their final form while explaining design decisions, and once this is done, the branch will be merged into “master”. After this, the real standard library classes will start to be added to the repository, one by one, as they are documented. The documentation infrastructure still needs some work. We have documentation in the source files using comments and XML, but we would like to evolve this tried and true formula to work with the exact same XML tags externally to the source code as well, so that you have the choice of documenting code on the spot, with the trade-off of making the code harder to navigate, or having the documentation fully or partially in an outside file.

Now that a few pieces of code are in the repository, and soon more will come, we are forced to release a super-alpha version of ZIDE. As mentioned before, the goal of ZIDE is to offer a minimum golden standard of features out of the box for editing and building software using Z2, so you don’t have to resort to ad-hoc solutions and the command line. So a version of ZIDE must exist as long as there is Z2 code, and now there is.

At the beginning of April, a super-early alpha version of ZIDE will be made available, meant for developers. It won’t have many features and it will be buggy, but hopefully this early release will help us make it better based on feedback. Unfortunately, we don’t have the time or resources for multi-platform releases right now, so this first version will only be available on Windows. Starting with the second or third release, we will have a release for Linux too.

CrashCourse – 004 – Building an Int

With PT8 development starting in the next few days, several parts of the project will get slowly released in different states of completion, the standard library source code being one of them. So it is the right time to describe a few parts of the standard library and how it evolved since its inception.

The numeric types are a good place to start, since they have a lot in common: understand one and you understand them all. As a standard library, one part of it may freely use other parts to accomplish some tasks. But let us suppose for a moment that all classes inside the library are independent and only serve to offer an API to clients of the library, without referencing one another. Then what is the minimal Int class?

namespace sys.core.lang;

class Int {
}

That’s it! If nobody expects Int to have a specific API, Z2 as a language does not impose any structure upon it. It is just a normal value class. But the combination of namespace plus name, the class sys.core.lang.Int, is still special. It is a core class (not to be confused with sys.core; the two “core” terms have separate meanings and maybe we should fix this conflict of terms), meaning the compiler, and ultimately the CPU, has a special understanding of it. Additionally, it is an arithmetic class. While all classes are value types, some, like Int, are arithmetic implicitly, without having an explicit API to make them behave like arithmetic types. Other third-party classes do need to provide such an API to conform to the arithmetic requirements. And this special treatment does not apply to other classes named Int from other namespaces.

As implicitly arithmetic, even though the Int class is empty, it still behaves as if it had several methods defined inside, like the ones commonly defined through operator overloading. All the commonly used operators from C-like languages work on Int instances: +, -, *, /, <<, >>, ==, !=, <, <=, >, >=, ++, --, &, |, ^ and ~. They all behave as expected and you are not allowed to override them and change their meaning. Using these operators one can write complex expressions, and with a few exceptions, expressions involving Ints could be copied over from C or Java into Z2.

Another thing one does with numerical values is converting from one type to another, a task commonly done with casting. As a historical note, early versions of the Z2 design had casts, but it was found that they greatly overlapped with constructors and they were eliminated. Today, Z2 has no casts and all conversions are handled through constructors. You do not cast a type to another, you construct a new instance of the appropriate type, based on another instance. This is mostly a theoretical and style-based distinction, because the end result and the generated machine code are the same. As a normal class, Int has a default constructor, Int{}. Conversion constructors usually have one parameter, the input value that needs to be converted. If we have a Float variable called floaty or a literal Float constant, -7.4f, we can “cast” them to Int with Int{floaty} and Int{-7.4f} respectively. And this works for all built-in numeric types, even with Bool values, like Int{true}.
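Putting the conversions mentioned above into a small program looks like this; only forms already described are used:

class Test {
	def @main() {
		val floaty = -7.4f;
		
		System.Out << Int{floaty} << "\n";   // Float variable to Int via a conversion constructor
		System.Out << Int{-7.4f} << "\n";    // the same, from a literal Float constant
		System.Out << Int{true} << "\n";     // Bool converts too
	}
}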

As mentioned in a previous post, Z2 does not like to force you to write code that it can figure out itself or that is just boilerplate. The standard Int class could have had some 20 operators overloaded, each with all the parameter combinations, totaling hundreds of methods, plus all the conversion constructors. Instead, we chose to have this core functionality be available implicitly. Thus, the class is perfectly functional while empty.

And things could be left as is. The standard library could have just a bunch of numerical classes with empty bodies, offering a few expected built-in operations. But Z2 chooses to add a bit of extra functionality to such classes. Not a huge amount; we don’t want these classes to become bloated, especially since third parties can reopen these classes and add any extra functionality they might need. Today I will show a little blast from the past: the Int class as it was a few months back. Today it is almost identical, but small changes and tweaks have been made. This simpler Int class will serve as a fine introduction to how to add value to such types, and in the next posts I’ll detail how the evolution of the language has led to some changes to this class.

namespace sys.core.lang;

class Int {
	const Zero: Int = 0;
	const One: Int = 1;
	const Default: Int = Zero;

	const Min: Int = -2'147'483'648;
	const Max: Int = 2'147'483'647;

	const IsSigned = true;
	const IsInteger = true;

	const MaxDigitsLow = 9;
	const MaxDigitsHigh = 10;

	property Abs: Int {
		return this > 0 ? this : -this;
	}

	property Sqr: Int {
		return this * this;
	}

	property Sqrt: Int {
		return Int{Double{this}.Sqrt};
	}

	property Floor: Int {
		return this;
	}

	property Ceil: Int {
		return this;
	}

	property Round: Int {
		return this;
	}

	def GetMin(min: Int): Int; const {
		return this >= min ? min : this;
	}

	def GetMax(max: Int): Int; const {
		return this <= max ? max : this;
	}

	def Clamp(min: Int, max: Int): Int; const {
		if (this <= min)
			return min;
		else if (this >= max)
			return max;
		else
			return this;
	}
}

This is a rather bare-bones Int class, but it still offers a lot more functionality than an empty class and also serves to show our approach to library design: using this style, the difference between language features and library features is blurred. The absolute value of -7 can be obtained with -7.Abs and it looks a bit like a language feature, but the implementation is actually part of the library. Additionally, all the numeric types are extremely similar and share a similar API, giving you the necessary feature parity in some situations, like when working with templates.

But let’s go slower. On lines 4-6, we have a few simple constants that do not seem that useful, giving you the 0, 1 and default values for the class. They are mostly here for feature parity with more complex numeric types, like multi-dimensional points.

On lines 8 and 9 we have two extremely important constants: Min and Max, giving us the minimum and maximum Int values. Adding these two constants to the class solves an old problem quite nicely: where to put these values? In C/C++, you need to include a header to access INT_MIN and INT_MAX, and the recommended header changes depending on whether you are using C or C++. These constants could be a #define, thus sharing the myriad of well documented problems of the pre-processor. If you are using C++ and doing things the C++ way, you need std::numeric_limits<int>::min() and std::numeric_limits<int>::max(). Or, starting with C++11, besides min there is also lowest. Why are there two? What is the difference between them? The answer is not self-evident and you need to google it. This approach is better than using #defines, and Z2 could easily go this route, but it was decided that such a simple task should not be handled by templates. Does your type have a minimum value? If yes, just add a constant to it! You can use Int.Min to get the minimum value for Int and Foo.Max to get the maximum value for Foo, if it has one. Or you can use existing instances, even literal constants, so the following are examples of perfectly legal expressions:

A + C * (C.Max / C.Max.Min);
A + C * (Int.Max / Int.Max.Min);
Int{Bool{Bool{Int{Bool{A}.Min.Max}}.Max}};
(true <= 6).Min <= (1 < 5).Max;

Please don’t write code like this!

On line 17 we have the Abs property, which returns the absolute value of the instance. On line 21 we find the very simple property that returns the square of the value. This is useful as a shorthand when having to square some complex expression: using Sqr, you don’t need to type it twice with a * in between, mind the side effects of the expression, or use a temporary variable and multiply it with itself. We find it useful and it is implemented easily inside Int, so why not have it? On line 25, we have the Sqrt property, which returns the square root of the value. This already shows the interconnection of classes within the standard library: the easiest implementation of square roots on integer values is casting them to Double, getting the square root and casting the result back to an integer. On lines 29, 33 and 37 we have properties that return the floor, ceiling and rounded values. For floating point values these make sense, but for integer values they don’t really, and by definition the floor of an integer is the value itself. They are included, again, for feature parity. As an example, you may have a template vector and run a summing lambda on it that adds together the floors of the values in the vector. This will run fine on a vector of Double, but would fail to compile on a vector of Int. Because we added these feature parity APIs, the types are interchangeable and it is easier to write generic algorithms.
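A short sketch of these properties in use, relying on the fact shown earlier that they can be called directly on literals:

class Test {
	def @main() {
		System.Out << -7.Abs << " " << 3.Sqr << " " << 16.Sqrt << "\n";
		System.Out << 5.Floor << " " << 5.Ceil << " " << 5.Round << "\n";
	}
}

The first line should print 7 9 4 and the second 5 5 5, since for Int the last three properties simply return the value itself.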

The methods in Int are also logically grouped: we have one “block” doing one kind of task, followed by other blocks. The final block is the comparison one. Having two or more values, we often need to find their minimum and maximum, or clamp one to a range. This is why most types in Z2, when applicable, have methods like GetMax, GetMin and Clamp. Or had, to be more precise. This is where we found that having these methods, which are almost always implemented identically, added to each class contradicts the principle of Z2 not making you write boilerplate code, and this was changed. As explained earlier, this is how the numerical types looked a few months back.

Next time we’ll see how we fixed this and evolve the Int class closer to its current form.

Z2 Compiler PT7 in feature freeze

There has been a lack of new information on the blog in 2016. Sorry, I didn’t have time to write posts since I was busy with PT7. For PT7, we wanted to simplify a lot of the complexity that can be found deep within the heart of the implementation of some of the standard classes. This turned out to be a far lengthier task than expected. But now is the right time to iron out the last few remaining kinks, even if this means breaking compatibility.

Module system fine-tuning

First, we changed the module system. The using keyword was used to make a source file refer to another source file. After the using clause, a sequence formatted like foo.bar.(...).baz was interpreted as a reference to an existing module/source file on disk and imported as such. Like a lot of things in Z2, this is an evolution of a system from C++, in this case a more advanced form of #include, coupled with a powerful module system. And it worked very well. But we did find a small problem that in practice may have been a moot point, but was still worth fixing. Top-level source files were referring to lower-level modules and so on until the bottom level ones were reached. Using this hierarchy, a net, semi-unavoidable increase in the number of imported symbols was noticed. The module system gave you the power to choose what to make available and what to keep private, but a small increase was still leaking from layer to layer.

So we changed the sequence after the using to refer to a fully qualified entity and decoupled it from its location on disk. I shall explain this in detail one day on the blog, but it is a variant of the C# system. In short, a source file can refer to one or more fully qualified classes and other entities, and it is the job of the compiler to supply their on-disk location. You can still organize source code in any way you see fit and there is no compounding public symbol pollution. And since we made sure that the standard library’s fully qualified class names were in the right place on disk, this change caused zero compatibility breaks. Compilation has gotten slightly slower because of this change, but we’ll fix that in the next versions.

Greatly simplified parameter type system

Z2 is all about combining good, best-practice inspired designs with powerful compilers to reduce the complexity and fussiness of problems. This is why we coined the term “dependency/declaration order baby-sitting”, declared it a “bad thing” and went ahead and eliminated it. Another thing we wanted to eliminate was ambiguity, especially when it came to calling functions. Like most things in life and programming, ambiguity is not binary, but a spectrum. Things that have low to medium ambiguity are often resolved in programming languages by conventions and rules. In Z2 we took this to its limit and created a language that can resolve any level of ambiguity, if it is possible to resolve, of course. In consequence, the rules were extremely complicated when dealing with the types of formal parameters. We set out to create a language more powerful than C++, but with better and more sane rules, and for the most part we think we came very close to achieving this. But for formal parameters and eliminating all ambiguity by rules, we failed: we created a rule set that is easy to learn, but almost impossible to master. The exact opposite of our goal.

We tried a solution in 2015 and now, in January 2016, we tried another one. Things were better but still too complicated. So we reevaluated the problem and the value proposition of having all ambiguities resolved and came to the conclusion that… it is not worth it! We rewrote the system and now common, expected and useful ambiguities are resolved by a set of rules that are easy to learn and master; the rest we do not attempt to resolve at all. We give an ambiguity-related compilation error! This brings Z2 in line with other languages when it comes to the effort of learning, and overall we feel this is a far better place to be in with regard to complexity.

Low level array clean up

Z2 has a wealth of high-level containers you should use, like Vector. It also has a couple of very low-level static vectors, RBuffer and SBuffer. Unless you are writing bindings for an existing C library, using some existing OS API or declaring static look-up tables, there is no good reason to use these low-level buffers. Still, they are in use in the heart of the standard library when calling OS features, and there was a small problem with them.

RBuffer (short for RawBuffer) is the new name starting with PT7; before, it was called Raw. It is a raw, static, fixed size low-level array, like standard arrays in C. Unlike C, RBuffer has an immutable Length and Capacity, but they are not stored in RAM: when needed, they are supplied based on compile time information. So if you have a RBuffer of Byte with a Capacity of 4, it will occupy exactly 4 bytes of RAM, not 4 plus the size of the bookkeeping fields. SBuffer (short for SizeBuffer), previously called Fixed, is a low-level, static, fixed capacity array that has a mutable Length stored in RAM and an immutable Capacity that is not stored in RAM. So a SBuffer of Byte with a Capacity of 4 will occupy RAM like this: a PtrSize sized chunk to store the Length, plus exactly 4 bytes. The Capacity is supplied based on compile time information, without taking up memory. So the difference between SBuffer and RBuffer is that SBuffer has an extra field to store Length.

So far so good. The small library design problem came from the way we were using these two types. We noticed that in most cases a RBuffer was passed as an input, but when using RBuffer as an output, we always had an extra reference parameter to store the length. So we refactored the standard library and now, in 99% of cases, a const RBuffer is used as an input and a ref SBuffer is used as an output parameter. Additionally, the low-level parts of the library no longer use pointers when those pointers are meant to be C arrays, but use RBuffer instead. This creates a cleaner standard library.

Function overloading code refactored

All these welcome simplifications worked together and allowed us to refactor and greatly simplify the function overloading code. Simple rules give simple code, and now the old, super complicated implementation is gone, replaced with a far shorter one that is faster than ever!

Conclusion and future versions

This chunk of simplifications turned out very well. The language is in a far better place right now. Easier to learn, easier to master and cleaner API overall.

On the downside, implementing all this took longer than expected. Starting with PT8, we want to do shorter release cycles. This means that the target final PT version moves up from about PT15 to about PT20. To compensate, we won’t wait until we have a very stable and feature complete compiler before we make it available for testing; instead we will release a super early pre-alpha version of the compiler and ZIDE for adventurous people.

PT7 is in feature freeze and we need a couple more weeks to fix some bugs, but starting with PT8 the standard library code will begin to be uploaded to GitHub. An account was created and I’ll write a couple of explanatory posts related to the standard library, and then, one by one, classes will be tested, documented and uploaded.

CrashCourse – 003 – What you get for free

Happy New Year!

The winter holidays are done and it is time to get back to work! In December things worked out as planned. Z2 PT6 was finished, but we did not make any announcements, since there is no reason to announce compiler versions that are not publicly available. PT7 development has started and it will have most of the planned features, but we are diverging a bit from the plan for this release. We consider Z2 to be syntactically a relatively clean language, considering it aims to have a feature set comparable to C++, but we did get feedback that deep inside the implementation of some of the system classes, especially in containers and OS interaction, the language is not necessarily cluttered, but too complex. So we will try to address this in PT7, without breaking compatibility with the rest of the language of course.

But back to CrashCourse. Last time we talked about the object model, how literal constants are still instances of classes, and about constants in general. Today we shall talk about instances and the so-called “values”.

Z2 is a value based language, like C++. It is not a reference based language, like Java. In most languages, core numerical types are value types, and in Z2, since everything is a class and everything tries to follow one set of rules, everything is a value: all class instances are values. This does not mean that you can’t use references in Z2: you can, and they behave as expected. The distinction is made by the ref keyword, which introduces references; in its absence, entities are values. I shall use the following short C snippet to illustrate what it means to use values, since C was, at least at some point, so ubiquitous:

int a = 10;
int b = a;
a = 0;

If you are even slightly familiar with programming, that code should be pretty self explanatory, as should “int” and the way these two variables, these two values, behave. On line 1, we declare the variable “a” and assign 10 to it. On line 2 we declare “b” and assign to it the value of “a”. We have two separate, forever independent entities here, “a” and “b”, which are stored in two different memory locations, and at both memory locations you can find 10. On line 3, we assign 0 to “a”, but since “a” and “b” are independent, this does not affect “b”. This is the core principle of value types. When dealing with references, two references may refer to the same memory location and changing one variable might “change” the other too (it is not really a change, since there is just one entity accessed under two names), but this is impossible with values. In Z2, every instance of a class is a value, thus no matter how simple or complicated the class is, it behaves like “int” in the sample above.
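The Z2 equivalent of that C snippet uses the val keyword, which is covered a bit later in this post, but behaves identically: “b” keeps the value 10 after the last line.

val a = 10;
val b = a;
a = 0;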

For simple classes, this value semantic is natural and comes for free. For more complicated classes, classes that manage some resources, you need to write code in order to impose this value semantic. Without additional code, some classes might do a “shallow” copy when copied, and you can wind up in the situation of two separate instances not being logically independent. As an example, think about implementing a very simple string class that has two members: a pointer to the bytes in the string and its length. Without code to handle the copying of the string, a shallow copy will leave two different string instances pointing to the same buffer. There are of course cases where you want a shallow copy, but for now we’ll consider that we want all classes to respect value semantics.

Which leads us to the distinction between classes that behave like values by definition and classes where you need to write code to assure this behavior. The first case is called “plain old data” (POD for short). All core types are POD; static vectors of POD classes and classes in which all members are POD are POD as well. The primary goal of a POD class is to store data in memory. The other classes are called “manager classes”: these classes often own or manage some resources and the act of managing these resources is more important and often more complex than just storing things into memory. So the primary goal of a manager class is its side effect. If at least one member of a class is not POD, the entire class is considered not POD. Still, this distinction is mostly unimportant for now, and even once we hit more advanced topics, it comes down to one rule: manager classes have a destructor, a copy constructor and an assignment operator. If you add at least one of these to a class, it is automatically considered non-POD and you must add all 3. But otherwise there are no distinctions and you generally don’t care about POD or not POD. Containers like Vector care, since they can do special optimizations for POD, but as a client of such containers you do not. The introduction of POD here was probably premature, but I included it for completeness’ sake.

Now it’s time for some practical examples in which I will be using a POD class. To keep things simple, I won’t be using pointers inside the POD class, even though they are valid inside one. Since POD values are so simple and natural, maybe the compiler can take care of a lot of things for you? Since Z2 is also a research project, we are interested in seeing how much the compiler can give you for free while still being useful and general. Values are so straightforward that in most cases what you do to copy one, verify equality or serialize it to disk is self-evident. Why should the programmer have to write this code? How about having to write code only when the general solution is not good enough? So let’s see the class we shall be using:

class Point {
	val x = 0;
	val y = 0;
	val z = 0;
}

This is an incredibly simple 3D Point class. You should never have to write such a class in real programming situations, since the standard Z2 library comes with geometric types, but as a didactic example it will do just fine. For numerical types, we know we can get access to instances using literal constants, but how do we create a new instance of Point?

class Test {
	def @main() {
		Point{};
	}
}

By using the “Foo{}” syntax. This creates a new instance of Foo, Point in our case. The “{}” syntax was selected so as not to conflict with the function calling syntax of “()”: when you see Foo{} you immediately know it is a constructor, and when you see Foo() you immediately know it is a function call. A “box” is created somewhere, probably in memory, and a constructor is called using this syntax. In this case, a memory location large enough to hold a Point instance is reserved on the stack and the Point constructor is called upon it. The execution of a constructor is the only supported way of getting a new instance of a class. For numerical classes one can logically assume that each literal constant is the result of a call to a constructor, but this is just a logical abstraction. You can always call the constructor of core numerical types, so Int{} is absolutely identical to the literal constant 0, and DWord{7} is identical to 7u.

The next question: where did the constructor come from? Well, this is one of the first things the compiler offers for free: default constructors. In Z2 there is no such thing as an implicitly uninitialized variable/instance/value. Everything is initialized and every new instance is the result of a constructor. Z2 is a systems programming language, so you can explicitly have a non-initialized instance using a special syntax, but that is an advanced topic that is rarely needed in practice. So everything is initialized by a default constructor and that constructor is provided by the compiler. You can of course write your own constructor, but Z2 discourages writing constructors that merely repeat the default initialization logic: if the compiler-provided constructor and your own do the same thing, why write one? You can also write constructors that take parameters, and Z2 supports named constructors. When writing these constructors, the compiler will again help you with initialization, so they should only contain code that differs from the default constructor. And you can disable the default constructor for a class if you want it to be constructible only with parameters or through a named constructor.

After the execution of the Point constructor, the instance is valid and usable, so things like Point{}.x are readable. But how long is the instance valid? Until the destructor is called. The destructor is again generated by the compiler for you. The destructor will be called in most cases at the end of the statement, but the compiler might delay the execution a bit. Still, it is guaranteed to be called before the end of the block. So in most cases, by the time execution passes the “;” at the end of line 3 of the Test snippet above, the destructor has been called. This is why I wanted to introduce the concept of POD: for POD types the destructor is guaranteed to be a “non-operation” (a NOP). Logically we still consider that the destructor was executed, but the compiler generates zero instructions for the destructor of a POD type. It does nothing. Still, the instance is no longer accessible. If we want to make the instance available after the end of the statement, we need to bind it to a name using the “val” keyword:

class Test {
	def @main() {
		val p = Point{};
	}
}

This new snippet is almost identical to the previous one: a “box” is still reserved for a new Point instance and the constructor is called. But this time, the name “p” is bound to this instance and the execution of the destructor is delayed to the end of the block. Thus we have created a local variable called “p” that can be used to read or write our instance and is scope-bound, meaning it will be valid from the point of its declaration to the end of the block. The keyword is “val”, not the “var” encountered in many other languages, though it is functionally identical. “val” is short for “value”, contrasting with the other keyword that allows you to bind a name, “ref”, short for “reference”, which is used for references.

The same “val” keyword is used when declaring the members of the Point class. The variables x, y and z are scope-bound. Since they are inside the body of the class, the class itself is their scope. This means that the three variables are constructed when a Point is constructed and destructed when a Point is destructed. I mentioned before that Int{} and 0 are identical, so “val x = 0;” is identical to “val x = Int{};”. I prefer the first version since it is shorter and more natural to people coming from other programming languages.
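In other words, the Point class from earlier could just as well be written with the constructor calls spelled out explicitly. This is a sketch of the equivalent, more verbose form; the shorter literal form shown before is preferred:

class Point {
	val x = Int{};
	val y = Int{};
	val z = Int{};
}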

But free constructors and destructors are not such a big deal. C++ is doing this right now! Let’s see what else we get for free by looking at the full sample and its output:

class Point {
	val x = 0;
	val y = 0;
	val z = 0;
}

class FreeStuffTest {
	def @main() {
		val first = Point{};
		
		val second = Point{};
		second.x = 1;
		second.y = 10;
		second.z = 100;
		
		val third = Point{} {
			x = 1;
			y = 10;
			z = 100;
		};
		
		if (first == second)
			System.Out << "first is equal to second\n";
		else
			System.Out << "first is NOT equal to second\n";
		
		if (second != third)
			System.Out << "second is NOT equal to third\n";
		else
			System.Out << "second is equal to third\n";
		
		System.Out << "first: " << first << "\n";
		System.Out << "second: " << second << "\n";
		System.Out << "third: " << third << "\n";
	}
}

first is NOT equal to second
second is equal to third
first: 0 0 0
second: 1 10 100
third: 1 10 100

On line 9 we declare first, a default constructed Point. All its members will be 0. On lines 11-14 we create a second variable, called “second”. Not happy with its default values, we initialize its members to 1, 10 and 100, in order. Don’t worry about the double initialization, first by the constructor, then by the assignments: the back-end compiler should take care of it. This is not a good place to use the constructor bypassing method I mentioned before. But this initialization is a bit verbose, so on lines 16-20 we initialize a third variable, called “third”, with the same values, but using a shorter syntax, available only immediately after a constructor.

Next, on lines 22-30, we get a taste of some other compiler provided features: default equality checks. The compiler will automatically take care of == and != checks, using member-wise == combined with logical AND for ==, and member-wise != combined with logical OR for !=. Their purpose is to model value equality. This implementation covers most cases, and when the default is not good enough, all you need to do is provide your own implementation. If you only implement ==, you get != for free as its negation, and the other way around. And you can implement both if you think you can write a more optimized logical expression. Default equality checks combined with standard library implementations mean that you have a wide set of classes that can be tested for equality: integers, strings, colors, hashmaps, hashmaps of hashmaps and so on. Other comparisons like < and > are not provided by default by the compiler, but the standard library covers them where appropriate.

Finally, on lines 32-34 we see another big compiler capability: marshaling! The members of class Point can be written to a stream without you having to implement anything. This is not a case of the compiler generating a call to some toString() method and printing that string. General “toString” support is available if needed, and yes, the compiler will generate that for you too, but this is a case of the compiler generating marshaling code for Point instances. The default implementation uses a member-wise approach, marshaling each member to the stream. If this default implementation is not good enough, again, you can write your own, in which you can do just about anything. This marshaling solution is provided by a combination of compiler features and library support. Out of the box you have support for text streams and binary serialization, while the “sys.xml” and “sys.json” packages, when added to your compilation, provide automatic support for “xmlizing” or “jsonizing” most user classes. You can basically take any combination of classes and marshal them to a valid destination using statements about as complex as the lines above.

When you have some technical specification where the binary layout of the serialization is a requirement, you’ll want to implement your own compliant methods. But when you only want to get the data to disk, the default marshaling solution is designed to be sufficient.
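To get a feeling for how these compiler provided defaults compose, here is a small sketch relying only on features described above (the // comments are assumed C-style and the exact output format is a guess): a class whose members are themselves Points should get member-wise equality and member-wise marshaling for free, all the way down.

class Point {
	val x = 0;
	val y = 0;
	val z = 0;
}

class Segment {
	val a = Point{};
	val b = Point{};
}

class ComposeTest {
	def @main() {
		// both Segments are default constructed, so all their Points are 0
		val s = Segment{};
		val t = Segment{};
		
		// member-wise equality should recurse into the Point members
		if (s == t)
			System.Out << "s is equal to t\n";
		
		// member-wise marshaling should print all six members of s
		System.Out << "s: " << s << "\n";
	}
}

If the defaults compose as described, this should take the equality branch and then print the six zeros of s.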

CrashCourse – 002 – The object model

Today I am going to talk about the Z2 object model. I’m afraid for the second post in a row, I will be forced to move fairly quickly and not have time to fully explain all the concepts. Hopefully, starting with post 3 in this series, I can slow down a bit.

Last time I showed the “hello world” snippet and introduced the concept of pure OOP languages: everything you manipulate is an object, a.k.a. an instance of a class. In that sample we printed a literal string constant to the STDOUT, so it too must be an object. Objects have members, so we might try and use some of them. The standard Z2 library uses the convention that if something can be directly counted in a straight-forward manner, it will have two members, called Length and Capacity. At a minimum these two members are read-only, but some countable classes, like vectors, expose them as mutable.

I shall attach a cropped screenshot now of a sample that uses these two members, together with the compilation and execution result from ZIDE, but for the rest of the post I’ll use inline source code (to allow selection and preserve space):

[Screenshot: a sample using Length and Capacity, together with its compilation and execution output in ZIDE.]

As expected, the “Hello world!\n” string literal is a vector-like object with countable elements, having a Length of 13. It also has a Capacity of 13. The difference between Capacity and Length signals how much extra storage beyond Length is reserved as a buffer for future growth, and Capacity is always greater than or equal to Length. In this case it is equal, but do not be surprised when running this sample to find it greater, probably rounded up to 16. The class in use here is String and it is UTF8.
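Since the screenshot itself is not reproduced here, the following is a rough inline reconstruction of such a sample, based on the Hello class pattern used later in this post (the exact code in the screenshot may differ; comments assume the C-style // convention):

class Hello {
	def @main() {
		// Length counts the characters of the literal: 13 for "Hello world!\n"
		System.Out << "Hello world!\n".Length << "\n";
		// Capacity is at least Length, possibly rounded up (for example to 16)
		System.Out << "Hello world!\n".Capacity << "\n";
	}
}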

Note
In Z2 strings are not meaningfully “null terminated”. The String class and friends make sure that there is a ‘\0’ (null) character at the end of the string, at index = Length, outside the valid index range, but the String class and the entire library does not care about that character. Strings have proper lengths and a String made out of 50 ‘\0’ characters is a legitimate String with Length 50. Reading or writing to the ‘\0’ character is a run-time error when done in user code. The only reason the ‘\0’ character is appended automatically is to make possible the passing of Strings to different APIs, which more often than not have traditionally expected null terminated strings. If you create a String for the purpose of passing it to such APIs, it might be a good idea to not store ‘\0’ characters in the middle of it.

The design choice of still adding a token ‘\0’ character to the end of a string does have consequences though. It makes having String slices extremely difficult, if not impossible.

Hold on a minute! The string is an object, but so is everything! Does that mean that String.Length is an object too?

class Hello {
	def @main() {
		System.Out << "The class of Length is: " << "Hello world!\n".Length.class << "\n";
	}
}

The class of Length is: PtrSize

Yes! The class of Length is PtrSize. In Z2 there are classes for signed and unsigned integers, like Int and DWord. Here the systems programming nature of Z2 is exposed a bit. Why isn’t Length an Int or a DWord, but a PtrSize instead? When counting random stuff, you should use the appropriate type required by the problem you are trying to solve. When counting eggs in a basket, you may use a DWord to store unsigned values. Or you may use Int out of convenience, or maybe because you want to use negative values to store stolen eggs. Or maybe even floating point numbers if those eggs are fancy. But when counting, offsetting or indexing into the heap, and memory in general, you must use PtrSize. It is an unsigned integer large enough to address the heap on your platform. PtrSize is almost always associated with traversing containers, so we can safely ignore it for now and focus on the bread and butter integer classes.

Like Int! Int is always signed and currently, on all supported platforms, it is a 32 bit value. Literal constants like 0, 1, -1, -55566847, 0xFF, 0b101, -2'147'483'648 and 2'147'483'647 are all instances of the Int class. The syntax of an integer literal constant in Z2 is an optional sign prefix (+ or -), an optional base prefix (0x, 0o or 0b), at least one digit fitting said base and an optional suffix. The suffix generally tells you the actual class of the constant. No suffix always means Int. The 'u' suffix always means DWord, the 32 bit unsigned integer. So 0 is an Int, 0u is a DWord. The ' character is used to optionally separate thousands, but it can be placed anywhere (except before the first digit), so 10000, 10'000 and 1000'0 are all the same constant. Using ' is pure syntactic sugar.
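As a quick sketch of these rules in action (the comments are explanatory and assume the C-style // convention; the values follow directly from the definitions above):

class LiteralTest {
	def @main() {
		// no suffix means Int; 0x is the hexadecimal prefix, so 0xFF is 255
		System.Out << 0xFF << "\n";
		// the 'u' suffix means DWord, the 32 bit unsigned integer
		System.Out << 7u << "\n";
		// ' is pure syntactic sugar: 10'000 is the same constant as 10000
		System.Out << 10'000 << "\n";
	}
}

Presumably this prints 255, 7 and 10000, each on its own line.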

So far so good. But performance purists might frown upon core numerical types being classes. The Int class is a normal class with source code of several KiB found in the standard library. It has static and non-static members. Isn’t this slow? Especially for a system programming language?

No! Int is not a class that boxes or unboxes some hidden, more fundamental type (and there is no automatic boxing in Z2). It may have a lot of members and it looks and behaves like a normal user class, but this is just syntactic sugar and compiler technology. Behind the scenes, all Int instances are “plain ints”, using the C/C++ definition of “int”. There is native hardware support for manipulating them. Int instances can be loaded into CPU registers and manipulated in assembly code directly. It is a strict requirement for all compliant Z2 compilers to deliver the same performance with all core numerical classes as equivalent optimized C code. There are even benchmarks in place making sure of that.

This design assures that there is zero overhead to having Int be a class rather than some intrinsic, keyword-introduced type, but the strict performance requirements do mean that the Int class, together with all other core numeric classes, has some limitations that other user classes do not have. For starters, these classes can’t have fields. An Int is an Int, with a fixed, hardware-imposed structure; if you stick something inside it, like another Int, it will no longer be an Int. Static fields are permitted. Another limitation is that these classes can’t have virtual methods. There is a workaround for this in some specific situations, but that is a fairly advanced topic. But everything else goes. You can add as many symbolic constants, properties or methods to Int as you like (static or not, both are allowed). These classes (and all classes in general) can be reopened, so third party libraries might add extra functionality to them.

I described Int and DWord as core classes. Core classes are a bit special because they have those hardware and performance related limitations. There is just a small number of core classes that I won’t list for now, but we have already seen Int, DWord and PtrSize. They all map directly to some native hardware resource. String is not a core class, since current CPUs do not have some atomic, intrinsic understanding of strings. String is a non-core class, able to benefit from all the features of the language. But it is still a “system” class. System classes are normal classes that are part of the system package, meaning they are available on all supported platforms. “Hello”, the class used in the snippets above, is not a system class, since we wrote it from scratch and it was not available before that.

Classes are introduced by the “class” keyword. The following snippet introduces 3 new classes into the default namespace:

class Foo {
}

class Bar {
}

class Baz {
}

This is the minimal syntax required to define a valid class. Class members use a “block model”, meaning that members must be placed inside blocks delimited by { and }. All classes must have at least one block, but can have an arbitrary number of blocks. The one required block is called the default block. A block also imposes access rights upon the members declared inside it. The default block is public, so anything declared inside it will be fully visible to everyone. Standard OOP access rights apply to Z2, so when designing classes, one would generally use a mix of public, private and protected blocks.

So let’s add some members to the classes. I have only just introduced the concept of literal numerical constants, and last post I introduced the @main method, so that is all I shall be using today to give a final, but more complicated example:

class Foo {
	const AA = 7;
	
	def @main() {
		System.Out << AA << "\n";
		System.Out << Foo.AA << "\n";
		
		System.Out << Bar.AA << "\n";
		System.Out << Bar.BB << "\n";
		
		System.Out << Baz.AA << "\n";
	}
}

class Bar {
	const AA = Baz.AA + 100;
	const BB = 1000;
}

class Baz {
	const AA = 99;
}

Here is the output of the program:

7
7
199
1000
99

Literal constants are useful and convenient, but sometimes you want to define a symbolic constant. Line 2 does just that: 7 is our literal constant and we could use it directly, but instead, using the “const” keyword, we bind the name “AA” to 7, thus creating a symbolic constant with a class of Int. On line 5, we print the constant using its name, rather than the literal. The constant we defined inside the class Foo can be accessed directly, since the @main method is in the same class. But if we wish, we can fully qualify the constant using the class.member syntax, like we did on line 6. AA and Foo.AA are identical and refer to the same symbolic constant.

Things change a little when, on line 8, we try to print a constant from the class Bar. @main is in Foo, so we can’t refer to Bar’s AA without fully qualifying it. This line also shows that Foo.AA and Bar.AA are two different entities, even though they have the same name. So is Baz.AA. Names only need to be unique within a class.

One thing you may find strange when coming from C++ is that on lines 8 and 9 we refer to entities declared on lines that come later. Not only does Foo refer to Bar.AA, but Bar.AA depends on Baz.AA, which in turn is declared even later in the source code. C++ uses a rather archaic declaration model. One can learn to deal with its limitations, but juggling hundreds of include files in a large project is never as easy as it should be. In Z2 you have no such concerns. The compiler is just a bunch of algorithms running on hardware that is powerful enough relative to the tasks it is trying to accomplish. It has no problems with declaration orders and dependencies. The compiler sees all and is all-knowing. If you let it! What we do is artificially and willingly limit its ability to see. When dealing with classes in the same module, we use access rights like private and protected to hide members from other classes. In multi-module situations, we do the same to hide entire classes.

And while the compiler is a machine, we as programmers are not. We certainly benefit from having a sensible and maintainable project structure, so while the compiler allows you to structure modules in the most difficult and counter-intuitive way possible if you really wish to, it is good design to use natural and intuitive structures and declaration orders. The sample above becomes slightly more readable if we change the declaration order to Baz, Bar and Foo. It also becomes more readable if we give more meaningful names to the constants.

Compiler versioning system

The Z2 compiler, called z2c, has almost reached its next internal stable version, PT6. The last few known bugs are being fixed, additional testing is underway, and it should be done before Christmas. Since the detailed change-log is lost (see the first post on the blog) and the change-log is meaningless anyway to people who are not already using the compiler, I thought this would be a great moment to detail the versioning scheme, release structure and schedule of the compiler.

The compiler is developed in two stages. The first stage will have the full set of core features and a full standard library, and will officially be labeled “1.0”. Stage two will enhance the language with some optional meta-programming related features and add a few others inspired by some scripting languages. Stage two can be safely ignored for now: while we do have an exact plan for the features we want to add in stage two, development on them won’t begin until stage one is done.

For stage one, we are labeling each internal compiler version with PT##, where ## is a number. PT1 through PT5 are done and PT6 is just about finished. PT comes from “pre-tier”, signaling the alpha, unreleased quality of the software.

Depending on the progress we actually achieve, either PT8 or PT9 will be the first version available for public preview, so right now it is not possible to download a working compiler from anywhere. Since these milestones are already labeled with “pre”, there is no need to create special preview releases: you’ll be able to download the exact compiler package we use for testing. Starting with PT10, the software should reach beta level quality, and by PT15 we’d like to reach 1.0.

Each compiler package comes with at least the following components:

  • z2c: the command line compiler and full build tool. The compiler is also a build tool capable of building arbitrarily complex projects out of the box with a single execution of the tool.
  • zide: a GUI cross platform IDE designed to offer a minimal golden standard of features to new adopters of the language who wish to edit code using an IDE.
  • zsyntax: a command line tool that can output syntax highlighted HTML code using various options for single files or full projects.
  • zut: a command line unit-test execution tool, serving as a sanity check for the compiler and standard library.
  • a full copy of the standard library as source code

Most of these tools are self explanatory, but I’d like to focus a bit on ZIDE, the Z2 IDE. During the past few years, several programming languages have cropped up, some broad scoped, some niche, some popular, some not. But in general, when a new language pops up, especially if it is not a scripting language, the tool support at release is very poor. It can take a lot of time before even a few syntax highlighters crop up for the popular IDEs and it can take years before any decent tools get created. It is very common for somebody picking up such a language to have to edit code in some random editor that does not understand the syntax of the language, to have to compile on the command line because said editor can’t launch the appropriate compilation command, and to be forced to use “printf” debugging, since a real debugger with breakpoints and watches (as a bare minimum) is years away.

It is our goal to eliminate this problem by having the official package come with an IDE, called ZIDE. Since PT5 the package includes this GUI editor and with each PT it is getting improved as part of the official development schedule. As far as development efforts go, the command line compiler is married to ZIDE, so even if the compiler and library are 100% ready for the next release, if ZIDE has not been updated, we will delay the release until some new features have been added to it. ZIDE is supported on all platforms where the compiler is officially supported and requires an X server or Windows.

ZIDE is really not meant to be the best IDE in the world. Hopefully, other, better tools will be created by third parties. ZIDE is there to offer a few much needed features and conveniences from day one! A few major features like syntax highlighting, code browsing, project creation and navigation, auto-complete, compiling and debugging are considered by us a worthwhile development effort. An early adopter will not be forced to use whatever tools they can muster, because ZIDE will always be there if needed. To make sure that ZIDE is pleasant to use and has all the necessary features, the Z2 standard library is developed using ZIDE and ZIDE only. Additionally, non-automated testing is also done mostly in ZIDE.

Now that I have presented what is in the compiler package, I shall finish by describing how it compiles.

Z2 uses the common IR (intermediate representation) paradigm. The compiler has a front-end which, when compiling source code, creates an IR tree. With this, the job of the front-end is done. A back-end is needed to output meaningful machine code based on the IR.

The compiler does not have a back-end capable of outputting machine code directly, and this is intentional. It wouldn’t take us more than a couple of months to create such a back-end and it would function properly, but it would produce terribly unoptimized machine code and would also be completely tied to one single machine architecture. Even with a team 20 times larger and working for years, it would still be highly improbable for us to produce a better optimizing machine code generator than some of the solutions that are available today. GCC has been in development for 28 years and LLVM for 12, just to name a couple. So we want the back-ends to use existing solutions for machine code generation: solutions supporting every major combination of architecture and operating system, while still being capable of outputting highly optimized code.

In order to achieve this, we are developing two back-ends: one that is looking back and one that is looking forward.

The one that is looking back is the C/C++ back-end. This back-end converts IR to C/C++. This is not a binary switch between C and C++: there is a whole spectrum of options related to what kind of C/C++ constructs to use and which parts of the code base to output. As an example, one of the configurations outputs C++ code with maximal formatting, meant to be as readable as hand written code and includes all the code available in a package. This option is meant to make Z2 code available as a library for C++ projects. This way, the Z2 standard library can be used from C++ without having to maintain two libraries for two programming languages. Using ghost-classes and this back-end, it is also possible for Z2 code to call C++ code, for maximum interoperability. Another example is a set of options that produces very short and ugly code, without any meaningful formatting and using short mangled names. This option is used when you only care about the resulting binary and want to potentially speed up compilation. And pretty much everything in between these two extremes is supported.

Since this back-end outputs C/C++ code, but Z2 tries to offer convenience features like ZIDE, it won’t leave you to your own devices to compile the resulting C/C++ code. The command line compiler will detect installed GCC versions on Linux and use them to compile the resulting back-end output. Under Windows, a package optionally bundled with MinGW will be available for out of the box binary generation. Alternatively, Visual Studio versions 7.1, 8, 9, 10, 11 and 12 are auto-detected and can be selected for compilation. These build methods can be edited, or new ones added after detection, in order to support custom paths to other compilers.

The second back-end, the one looking forward, is the LLVM back-end. In consequence, it supports the features, code generation models and general interoperability capabilities of LLVM. Currently, this back-end is in the early stages of prototyping: we will first finish the C/C++ back-end 100% and only then focus fully on the LLVM back-end.