Overview of the New UI System



Module Manager in 4.5

One feature that rolled out with Unity 4.5 is the module manager, a new system that lets us deploy updates to specific parts of Unity without making a complete Unity release.

How does it work?

Let's say that Google releases an amazing new phone, but it requires a small change to Unity's Android support in order for Unity to properly support it. With the previous release model, we would need to gather changes for a bugfix release, perform a full automated and manual quality assurance pass over Unity and all platforms, potentially publish some release candidates, and then publish a new version of Unity, with installer packages larger than 1 GB each, for everyone to install and upgrade their projects.

With the module manager system, we can quickly make a single change, test only the Android support module for regressions, and publish a new 15MB Android support module for download on demand.

What parts of Unity will be supported?

In Unity 4.5, we’re beginning by supporting updates to Android, BlackBerry, iOS, and Windows Phone 8 as modules.

How will we receive updates?

We’re still fine-tuning the module manager system, so there aren’t any automatic update notifications yet in Unity 4.5. When we publish a module update, we’ll announce it via our usual communication methods: forums, social media, potentially a blog post. At that point, the module manager window will show an available update for the module in question. Click the “Download” button, restart Unity once the download finishes, and kapow! — your updated module is installed and loaded in Unity.

Module manager: available vs. installed

What’s coming in the future?

In upcoming versions of Unity, we’ll continue developing and extending the module manager by adding modular update support for more platforms (the goal is to eventually support updating all our platforms this way), as well as support for updating other Unity subsystems, for example the upcoming Unity GUI system. Additionally, we plan to begin stripping these things out of the base Unity installer, in order to provide you with a smaller Unity download and a faster Unity installation, along with the ability to download and install support for the platforms and subsystems you care about. Other planned module manager features include: automatic update notifications, ability to switch between multiple installed module versions, support for pausing/resuming/restarting module downloads, and more.

Mecanim Humanoids

This post explains the technology behind Mecanim Humanoids: how it works, its strengths and limitations, why some choices were made, and hopefully some hints on how to get the best out of it. Please refer to the Unity Documentation for general setup and instructions.

Humanoid Rig and Muscle Space

Mecanim's Humanoid Rig and Muscle Space are an alternative to standard skeleton node hierarchies and geometric transforms for representing humanoid bodies and animations.

The Humanoid Rig is a description on top of a skeleton node hierarchy. It identifies a set of human bones and creates a Muscle Referential for each of those. A Muscle Referential is essentially a pre and post rotation with a range and a sign for each axis.

A Muscle is a normalized value in [-1,1] that moves a bone along one axis within a range [min,max]. Note that the normalized Muscle value can go below or above [-1,1] to overshoot the range. The range is not a hard limit; instead it defines the normal motion span for a Muscle. A specific Humanoid Rig can augment or reduce the range of a Muscle Referential to augment or reduce its motion span.

The Muscle Space is the set of all Muscle normalized values for the Humanoid Rig. It is a Normalized Humanoid pose. A range of zero (min = max) for a bone axis means that there is no Muscle for it.

For example, the Elbow does not have a Muscle for its Y axis, as it only stretches in and out (Z axis) and rolls in and out (X axis). In the end, the Muscle Space is composed of at most 47 Muscle values that completely describe a Humanoid body pose.
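To make the mapping concrete, here is a minimal C# sketch of one plausible way a normalized Muscle value could map to an angle given its [min,max] range. It is purely illustrative and not Mecanim's internal formula; the function name and the linear mapping are assumptions.

public static class MuscleMathSketch
{
    // Illustrative only: map a normalized Muscle value to an angle, given the
    // Muscle Referential's [minDeg, maxDeg] range. 0 maps to the reference
    // pose, +1 to maxDeg, -1 to minDeg; values outside [-1, 1] overshoot.
    public static float MuscleToAngle(float muscle, float minDeg, float maxDeg)
    {
        return muscle >= 0f ? muscle * maxDeg : -muscle * minDeg;
    }
}

For example, a Muscle value of 0.5 with a range of [-60, 90] would give 45 degrees.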


One beautiful thing about Muscle Space is that it is completely abstracted from its original skeleton rig, or indeed any skeleton rig. It can be directly applied to any Humanoid Rig and it always creates a believable pose. Another beautiful thing is how well Muscle Space interpolates. Compared to a standard skeleton pose, Muscle Space always interpolates naturally between animation keyframes, during state machine transitions, or when mixed in a blend tree.

Computation-wise it also performs well, as the Muscle Space can be treated as a vector of scalars that you can linearly interpolate, as opposed to quaternions or Euler angles.
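As a rough sketch of why this is cheap, blending two Muscle Space poses stored as plain float arrays is just a per-component lerp. The array layout below is an assumption for illustration, not Unity's actual data structure.

using UnityEngine;

public static class MusclePoseBlendSketch
{
    // Hypothetical illustration: blend two Muscle Space poses (one float per
    // Muscle) with a simple component-wise linear interpolation.
    public static float[] Blend(float[] poseA, float[] poseB, float t)
    {
        var result = new float[poseA.Length];
        for (int i = 0; i < poseA.Length; i++)
            result[i] = Mathf.Lerp(poseA[i], poseB[i], t);
        return result;
    }
}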

An approximation of human body and human motion

Every new skeleton rig built for a humanoid character, and every animation captured, will be an approximation of the human body and human motion. No matter how many bones you have or how good your MOCAP hardware is, the result will be an approximation of the real thing.

Riggers, game companies, schools and software vendors will each propose their own version of what they think best represents the human body and motion, and what best fits their production needs.

Designing the Mecanim Humanoid Rig and Muscle Space confronted us with some hard choices. We had to find a compromise between fast runtime performance and animation quality, and between openness and a standard definition.

2 spine bones

This is a tough one. Why 2? Not 3? Or an arbitrary number of spine bones? Let's discard the last option; this is not about biomedical research. (Note that you can always use a Generic Rig if you absolutely need this level of precision.) One spine bone is clearly under-defined.

Adding a second one brings you most of the way. A third or even a fourth one will only make a small contribution to the final human pose. Why is this? When looking at how a human spine bends, you will notice that the part of the spine on the rib cage is almost rigid. What remains is a main flexion point at the base of the spine and another at the base of the rib cage. So there are two main flexion points. Looking at contortionists, even in extreme poses, clearly shows this. For all of these reasons we decided to have 2 spine bones in the Humanoid Rig.

1 neck bone

This one is easier than the spine. Note that many game skeleton rigs don't even have a neck bone and manage to do the job with only a head bone.

Rotation DoF

As in most skeleton rigs (and even more often in games), the Mecanim Humanoid Rig only supports rotation animation. The bones are not allowed to change their local translation relative to their parent. Some 3D packages induce a certain amount of translation on bones to simulate elasticity of articulations or squash and stretch animation. We are currently looking at adding translation DoF, as it is a relatively cheap way, in terms of computation performance, to compensate for animation quality on less detailed skeleton rigs. It would also allow creating retargetable squash and stretch animations.


No twist bones

Twist bones are often added to skeleton rigs to prevent skin deformation problems on arms and legs when they are in extreme twist configuration.

Twist bone helps to distribute the deformation induced by twist from start to end of the limb.

In the Muscle Space, the amount of twist is represented by a Muscle and it is always associated with the parent bone of a limb. Ex: The twist on the forearm happens at the elbow and not on the wrist.

Humanoid Rigs don't support twist bones, but the Mecanim solver lets you specify a percentage of twist to be taken out of the parent and put onto the child of the limb. It defaults to 50% and greatly helps to prevent skin deformation problems.

Humanoid Root and Center of mass

Now, what would be the best way to represent the position and orientation of a human body in world space?

The topmost bone in the hierarchy (usually the hips, pelvis or whatever it is called) is where the world space position and orientation curves live in a standard skeleton rig. While this works fine for a specific character, it becomes inappropriate when doing retargeting, since from one skeleton rig to another the topmost bone usually has a different position and rotation relative to the rest of the skeleton.

The Muscle Space uses the humanoid center of mass to represent its position in world space. The center of mass is approximated using an average human body part mass distribution. We make the assumption that, after scale adjustments, the center of mass for a humanoid pose is the same for any humanoid character. It is a big assumption, but it has been shown to work very well for a wide set of animations and humanoid characters.

It is true that for standing or walking animations the center of mass lies around the hips, but for more dynamic motion like a back flip you can see how the body moves away from the center of mass, and how the center of mass feels like the most stable point over the animation.
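As a purely illustrative sketch of the idea, a center of mass could be approximated from a humanoid Animator by weighting a few key bones. The mass weights below are assumptions for illustration, not Unity's actual body mass distribution.

using UnityEngine;

public static class CenterOfMassSketch
{
    // Illustrative only: approximate a center of mass by weighting a few key
    // humanoid bones. The weights are rough assumptions, not Unity's values.
    public static Vector3 Approximate(Animator animator)
    {
        HumanBodyBones[] bones =
        {
            HumanBodyBones.Hips, HumanBodyBones.Chest, HumanBodyBones.Head,
            HumanBodyBones.LeftUpperLeg, HumanBodyBones.RightUpperLeg,
            HumanBodyBones.LeftUpperArm, HumanBodyBones.RightUpperArm
        };
        float[] weights = { 0.35f, 0.25f, 0.08f, 0.10f, 0.10f, 0.06f, 0.06f };

        Vector3 sum = Vector3.zero;
        float total = 0f;
        for (int i = 0; i < bones.Length; i++)
        {
            Transform t = animator.GetBoneTransform(bones[i]);
            if (t == null) continue; // optional bones may not be mapped
            sum += t.position * weights[i];
            total += weights[i];
        }
        return total > 0f ? sum / total : Vector3.zero;
    }
}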

Body orientation

Similar to what the center of mass does for the Muscle Space world space position, we use an average body orientation for the world space orientation. The average body orientation up vector is computed from the hips and shoulders middle points. The front vector is then the cross product of the up vector and the averaged left/right hips/shoulders vectors. It is also assumed that this average body orientation for a humanoid pose is the same for all humanoid rigs. As for the center of mass, the average body orientation tends to be a stable referential, as lower and upper body orientation naturally compensate each other when walking, running, etc.
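The construction above can be sketched in a few lines. This is a simplified illustration that assumes you already have the relevant bone Transforms; it is not Unity's exact implementation.

using UnityEngine;

public static class BodyOrientationSketch
{
    // Simplified illustration of the average body orientation described above.
    public static Quaternion Average(Transform leftHip, Transform rightHip,
                                     Transform leftShoulder, Transform rightShoulder)
    {
        Vector3 hipsMid = (leftHip.position + rightHip.position) * 0.5f;
        Vector3 shouldersMid = (leftShoulder.position + rightShoulder.position) * 0.5f;

        // Up vector from the hips middle point to the shoulders middle point.
        Vector3 up = (shouldersMid - hipsMid).normalized;

        // Average left-to-right vector across hips and shoulders.
        Vector3 right = ((rightHip.position - leftHip.position) +
                         (rightShoulder.position - leftShoulder.position)).normalized;

        // In Unity's left-handed space, right x up points forward.
        Vector3 front = Vector3.Cross(right, up).normalized;
        return Quaternion.LookRotation(front, up);
    }
}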

Root Motion

A more in-depth paper about root motion will follow, but as an introduction: the projection of the center of mass and average body orientation is used to automatically create root motion. The fact that the center of mass and average body orientation are stable properties of humanoid animation leads to a stable root motion that can be used for navigation or motion prediction.

The scale

One thing is still missing for the Muscle Space to be a completely normalized humanoid pose… its overall size. Again we are looking for a way to describe the size of a humanoid that does not rely on a specific point like the head bone position, since it is not consistent from rig to rig. The center of mass height of a humanoid character in T-Stance is directly used as its scale. The center of mass position of the Muscle Space is divided by this scale to produce the final normalized humanoid pose. Said another way, the Muscle Space is normalized for a humanoid that has a center of mass height of 1 when in T-Stance. All the positions in the Muscle Space are said to be in normalized meters.
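In code terms the normalization is just a division. A minimal sketch, assuming the T-Stance center of mass height has already been measured once:

using UnityEngine;

public static class HumanoidScaleSketch
{
    // Minimal sketch: express a world space center of mass position in
    // "normalized meters" by dividing by the T-Stance center of mass height.
    public static Vector3 Normalize(Vector3 comWorldPosition, float tStanceComHeight)
    {
        return comWorldPosition / tStanceComHeight;
    }
}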

Original hands and feet position

When applying a Muscle Space to a Humanoid Rig, hands and feet may end up in a different position and orientation than in the original animation, due to differences in the proportions of Humanoid Rigs. This may result in feet sliding or hands not reaching properly. This is why the Muscle Space optionally contains the original position and orientation of hands and feet. The hands and feet position and orientation are normalized relative to the Humanoid Root (center of mass, average body rotation and humanoid scale) in the Muscle Space. Those original positions and orientations can be used to fix the retargeted skeleton pose to match the original world space positions using an IK pass.

IK Solver

The main goal of the IK Solver on arms and legs is to reach the original hands and feet position and orientation optionally found in the Muscle Space. This is what happens under the hood for feet when the "Foot IK" toggle is enabled on a Mecanim Controller State.

In these cases, the retargeted skeleton pose is never very far from the original IK goals. The IK error to fix is small, since it is only induced by differences in the proportions of humanoid rigs. The IK solver will only modify the retargeted skeleton pose slightly to produce the final pose that matches the original positions and orientations.

Since the IK only slightly modifies the retargeted skeleton pose, it will rarely induce animation artefacts like knee or elbow popping. Even then, there is a Squash and Stretch solver, part of the IK solver, that is there to prevent popping when arms or legs come close to maximum extension. By default the amount of squash and stretch allowed is limited to 5% of the total length of the arm or leg. Elbow or knee popping is more noticeable (and ugly) than a 5% or less stretch on an arm or leg. Note that the squash and stretch solver can be turned off by setting it to 0%.

A more in-depth paper about IK rigs will follow. It will explain how to handle props, use multiple IK passes, and handle interaction with the environment or between humanoid characters, etc.
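In the meantime, the user-facing IK callback already gives a feel for the idea of pulling a goal towards a world space target. This is a minimal sketch, not the internal retargeting solver; the target transform is an illustrative assumption, and "IK Pass" must be enabled on the Animator layer.

using UnityEngine;

public class FootIKSketch : MonoBehaviour
{
    public Transform leftFootTarget; // illustrative target transform

    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void OnAnimatorIK(int layerIndex)
    {
        if (leftFootTarget == null)
            return;

        // Pull the left foot goal fully towards the target position/rotation.
        animator.SetIKPositionWeight(AvatarIKGoal.LeftFoot, 1f);
        animator.SetIKRotationWeight(AvatarIKGoal.LeftFoot, 1f);
        animator.SetIKPosition(AvatarIKGoal.LeftFoot, leftFootTarget.position);
        animator.SetIKRotation(AvatarIKGoal.LeftFoot, leftFootTarget.rotation);
    }
}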

Optional Bones

The Humanoid Rig has some bones that are optional. This is the case for Chest, Neck, Left Shoulder, Right Shoulder, Left Toes and Right Toes. Many existing skeleton rigs don't have some of these optional bones, but we still wanted to be able to create a valid humanoid from them.

The Humanoid Rig also supports optional LeftEye and RightEye bones. Eye bones have two Muscles each: one that moves the eye up and down and one that moves it in and out. The Eye bones also work with the Humanoid Rig LookAt solver, which can distribute look-at adjustments over the Spine, Chest, Neck, Head and Eyes. There will be more about the LookAt solver in the upcoming Humanoid IK rig paper.
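For reference, the user-facing LookAt API already exposes this kind of distribution over body, head and eyes. A minimal sketch, assuming a target transform of your own and "IK Pass" enabled on the Animator layer:

using UnityEngine;

public class LookAtSketch : MonoBehaviour
{
    public Transform lookTarget; // illustrative target transform

    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void OnAnimatorIK(int layerIndex)
    {
        if (lookTarget == null)
            return;

        // Arguments: overall weight, body weight, head weight, eyes weight, clamp weight.
        animator.SetLookAtWeight(1f, 0.3f, 0.6f, 1f, 0.5f);
        animator.SetLookAtPosition(lookTarget.position);
    }
}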


Finally, the Humanoid Rig supports fingers. Each finger may have 0 to 3 digits; 0 digits simply means that the finger is not defined. There are two Muscles (Stretch and Spread) for the first digit and one Muscle (Stretch) each for the 2nd and last digits. Note that there is no solver overhead for fingers when no fingers are defined for a hand.

Skeleton rig requirements

in-between bones

In many cases, skeleton rigs will have more bones than the ones defined by the Humanoid Rig. In-between bones are bones that sit between humanoid-defined bones. For example, a 3rd spine bone in a 3DSMAX Biped will be treated as an in-between bone. Those are supported by the Humanoid Rig, but keep in mind that in-between bones won't get animated. They will stay at the default position and orientation relative to their parent defined in the Humanoid Rig.

Standard Hierarchy

The skeleton rig must respect a standard hierarchy to be compatible with the Humanoid Rig. The skeleton may have any number of in-between bones between humanoid bones, but it must respect the following pattern:

Hips – Upper Leg – Lower Leg – Foot – Toes

Hips – Spine – Chest – Neck – Head

Chest – Shoulder – Arm – Forearm – Hand

Hand – Proximal – Intermediate – Distal
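Once an Avatar has been configured, the mapped bones can be read back from an Animator. A small sketch that walks the leg chain listed above; optional bones such as Toes may come back as null.

using UnityEngine;

public class PrintLegChain : MonoBehaviour
{
    void Start()
    {
        var animator = GetComponent<Animator>();
        HumanBodyBones[] chain =
        {
            HumanBodyBones.Hips,
            HumanBodyBones.LeftUpperLeg,
            HumanBodyBones.LeftLowerLeg,
            HumanBodyBones.LeftFoot,
            HumanBodyBones.LeftToes
        };

        foreach (var bone in chain)
        {
            Transform t = animator.GetBoneTransform(bone);
            Debug.Log(bone + " -> " + (t != null ? t.name : "not mapped"));
        }
    }
}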

The T-Stance

The T-Stance is the most important step of Humanoid Rig creation, since the muscle setup is based on it. The T-Stance pose was chosen as the reference pose since it is easy to conceptualize and there is not much room for interpretation of what it should be:

– Standing straight facing z axis

– Head and eyes facing z axis

– Feet on the ground parallel to z axis

– Arms open parallel to the ground along x axis

– Hands flat, palm down parallel to the ground along x axis

– Fingers straight parallel to the ground along x axis

– Thumbs straight parallel to the ground half way (45 degrees) between x and z axis

When we say "straight", it does not mean the bones necessarily need to be perfectly aligned. It depends on how the skin attaches to the skeleton. Some rigs may have skin that looks straight while the underlying skeleton is not. So it is important that the T-Stance be set for the final skinned character. In case you are creating a Humanoid Rig to retarget MOCAP data, it is good practice to capture at least a few frames of a T-Stance performed by the actor in the MOCAP suit.

Muscle range adjustments

By default, muscle ranges are set to values that best represent human muscle ranges. Most of the time, they should not be modified. For a more cartoony character you may want to reduce a range to prevent arms from entering the body, or augment it to exaggerate leg motion. If you are creating a Humanoid Rig to retarget MOCAP data, you should not modify the ranges, since the produced animation clip will not respect the defaults.

Retargeting and Animation Clip

Mecanim retargeting is split into two phases. The first phase consists of converting a standard skeleton transform animation into a normalized humanoid animation clip (or Muscle Clip). This phase happens in the editor when the animation file is imported. It is internally called "RetargetFrom". The second phase happens in play mode, when the Muscle Clip is evaluated and applied to the skeleton bones of a Humanoid Rig.

It is internally called "RetargetTo". There are two big advantages to splitting retargeting into two phases. The first one is solving speed: half of the retargeting process is done offline, and only the other half is done at runtime. The other advantage is scene complexity and memory usage: since the Muscle Clip is completely abstracted from its original skeleton, the source skeleton does not need to be included at runtime to perform the retargeting.

The second phase is straightforward. Once you have a valid Humanoid Rig, you simply apply the Muscle Clip to it with the RetargetTo solver. This is done automatically under the hood.

The first phase, converting a skeleton animation to a Muscle Clip, may be a bit trickier. The skeleton animation clip is sampled at a fixed rate. For each sample, the skeleton pose is converted to a Muscle Space pose and a key is added to the Muscle Clip. Not every skeleton rig will fit; there are many different ways a skeleton rig can be built and animated. Some skeleton rigs will produce a valid output, but with a possible loss of information. We will now review what is needed to create a lossless normalized humanoid animation… the Muscle Clip.

Note: by lossless we mean that retargeting from a skeleton rig to a Muscle Clip and then retargeting back to the same skeleton rig will preserve the animation intact. In fact, it will be almost intact: the original twist on arms and legs will be lost and replaced by what the Twist solver computes. As explained earlier in this document, there is no representation of twist repartition in the Muscle Space.

  • The local position of bones must be the same in the Humanoid Rig and in the animation file. It happens that the skeleton used to create the Humanoid Rig differs from the one in the animation file. Be sure to use the exact same skeleton. Warnings will be sent to the console at import if this is not the case.
  • In-between bones must have no animation. This often happens with 3DSMAX skeletons, where the 3rd spine bone has both translation and rotation animation on it. It also happens when Bip001 is used as Hips and Pelvis has some animation on it. Warnings will be sent to the console at import if this is not the case.
  • The local orientation of in-between bones must be the same in the Humanoid Rig and in the animation file. This may happen when using Humanoid Auto Configure, which relies on the Skin Bind Pose to create the T-Stance. Make sure that the Skin Bind Pose rotation for in-between bones is the same as the one found in the animation file. Warnings will be sent to the console at import if this is not the case.
  • Except for Hips, translation animation is not supported on bones. 3DSMAX Biped sometimes puts translation animation on spine bones. Warnings will be sent to the console at import if this is not the case.

The 3DSMAX Biped is singled out as a problematic rig here, probably because of its popularity and the fact that we had to support many cases of it being used with Mecanim. Note that if you are going to create new animations to be used with the Mecanim Humanoid Rig, you should follow the rules stated above from the start. If you want to use existing animations that break some of the rules, it is still possible: the Mecanim retarget solver is robust and will produce valid output, but the lossless conversion can't be guaranteed.

Easy game development: a peek at the GameAnalytics PlayMaker integration

The integration of the GameAnalytics Unity SDK with PlayMaker has just undergone its latest update. Simon Millard sat down with Magnus I. Møller from Tumblehead to find out how these two star plug-ins on the Asset Store helped them in their transition from an award winning animation studio to game development. 

Simon Millard is responsible for designing and implementing the GameAnalytics SDK for Unity. He’s also a passionate independent game developer. 

Tumblehead is an award winning animation studio from Viborg, Denmark. After having worked on successful games such as The Walking Dead and The Wolf Among Us, they decided to start making games of their own. Now they are on the brink of releasing their first title.

How did you make the transition from animation to game development?

Well, as we're all graphical artists we needed to start from scratch in terms of programming. So, we found ourselves faced with having to learn Unity… At this point we found the PlayMaker plugin, which allows you to visually script game logic using nodes and transition events, and gives you access to almost the entire Unity API in a finite state machine. Having worked with 3D animation, it came very naturally to me to connect the nodes in PlayMaker. It's very similar to Maya. Also, Jean Fabre from PlayMaker helped me a lot, so over the course of six months I got to learn the plugin really well.

Moving into making our own games was a challenge, especially when it came to getting started with development and structuring the process. But Unity and PlayMaker together provided a solution for that problem. It’s all there, if you want to start making a game now you can, the barrier has lowered considerably, nowadays.

What is your upcoming iOS and Android puzzle game Bottleneck about?

It's a physics-based puzzle game, along the lines of Cut the Rope, in which you're helping Buddy get all his diamonds back into his bottle. For that you need to solve some puzzles, and collect stars in the process.


Where are you now in your development process?

We're very close to finishing a vertical slice, as we're going to the Nordic Game Conference, looking for publishers. It'll be the first time doing that – it's a big deal, and the pressure is showing. The vertical slice needs to be pitch perfect. It's very important for us right now to be able to track how players are experiencing the different levels. A few weeks ago we reached the stage where we felt confident to bring in more testers. As we needed to collect gameplay data from all the devices it was going to be tested on, we enabled GameAnalytics in PlayMaker.

Why did you choose GameAnalytics?

We knew it had a slick and straightforward interface, plus since it's integrated in PlayMaker, it was extremely easy to set up, and the two plugins work great together. And if that weren't enough, it's free.

What metrics are you tracking?

At this point it's all about balancing the experience, and getting to the point where it's challenging, but not so frustrating that players drop off. So, we're tracking if and where players get stuck, how many times they hit retry, how much time they spend on each level, how many stars they get on the second and third attempts, etc.

What is the most surprising thing you discovered when using GameAnalytics to track playtesting?

I designed the levels myself and it was surprising to see that the ones I thought were going to be hard are by far not the most difficult, whereas the ones that were supposed to be really easy are in fact the ones that are hard and problematic. And we got that by looking at the graphs – the conclusion was strikingly obvious then.

What would you say is most useful about analytics to a game designer?

At least at the stage where I’m at now, I feel I can’t trust myself anymore. I’m in too deep, since I’ve been working on this for so long. Seeing all the people playing as a clear graph, or a bar, helps to have a good overview on the whole game without getting stressed out about keeping track of everything, because it’s all there. You can always look it up. So, at least at this stage in production, that’s what I find most valuable.

Do you have any advice for developers who are new to game analytics?

A common pitfall when playtesting is to track too many metrics. You should try thinking about what are the most important ones for what you want to achieve with that data. Also, I find it very useful to cross-reference two metrics to see if there’s consistency between them.

What are the advantages of using PlayMaker and GameAnalytics together?

PlayMaker allows anyone to make a game. GameAnalytics provides a way of having a great overview of it, both in terms of game design and, later on, monetisation. For me at least, a selling point is the fact that they are both so easy to set up, use and debug. It allows me to spend time on tweaking the things that matter, like creating an engaging experience for the player.

What are your plans for the future? Will you continue making games?

Of course! We love it! We’re gonna look for a publisher for Bottleneck, and we’ve also set up a pitch session for a board of different investors. We hope to get funding to finalise production and polish the game. Otherwise, we have two other games lined up – so yes, we’re hooked.

There you have it: the killer combo that allows anyone to create games and concentrate on tweaking the important things that make up the gaming experience. Unity and the Asset Store have been amazing in facilitating development. This drives innovation, and plug-ins like PlayMaker give a new meaning to lowering entry barriers and giving way to all sorts of creative minds to participate in the industry. While data cannot replace creativity, it can provide the perspective, the overview one needs to keep the balance in check and figure out what doesn’t work. GameAnalytics closes the loop, providing developers with all the metrics they need to improve gameplay and ease game design, profitably tweak monetisation and increase retention and engagement rate.

You can find out all about what is new (and old, for that matter) with the PlayMaker GameAnalytics integration on the GameAnalytics support page. Make sure you check out the Breakout game example the newest GameAnalytics package comes with, which is also available as a complete PlayMaker solution exemplifying both the GameAnalytics and Playmaker integration. You can see for yourself how easy it is, on the Playmaker website.


Twitter: @GameAnalytics @HutongGames @TumbleheadAnim


Unit testing part 1 – Unit tests by the book

If you are a developer, I assume you have heard about unit tests. Most of you have probably even written one in your life. But how many of you have ever considered what makes a unit test a good unit test? Unit test frameworks are just (fairly) simple runners that invoke a list of methods, one after another. But the code that is actually going to be executed in those methods is none of the framework's concern. How nice would it be to have a list of your favorite pizzas with an option of home delivery just by double-clicking one. Why not use a unit test framework to list all the options and save the time of writing a GUI! Does that mean you have a test suite for making pizza orders?

A unit test framework is just a tool for writing tests, but not every test written in this framework will be a (proper) unit test. Let's paraphrase the pizza order example and imagine a pizza-ordering system you want to test. The test case is supposed to validate that the order call is made when I press the "Make the order" button. The most straightforward solution would be to imitate all the steps in the test, but you wouldn't really want to receive a pizza every time you run a test, would you?

So what’s wrong with that test?

Let's start with the scope of the test. A unit test, as the name suggests, should test a unit of work. Some people define a unit of work as a method, but this definition is quite limiting and it's better to see a unit of work as a single logical concept. The test shouldn't actually communicate with the pizzeria and make the order. Instead, you would test whether a proper order message goes out from the system once the button is pressed.

How do I verify that then?

Let's assume the pizzeria has an online system for taking orders and our application has to send an HTTP request to order pizzas. As we don't want to actually order that pizza, we could create our own server (a test double) that would simulate the behaviour of the original server, but without sending the real pizza. Such an approach wouldn't be that bad, but it makes our unit tests depend on external resources and network communication. Additionally, how do we know that our mock server works like the original one? Or works at all? Shouldn't we test the mock server? No! Instead of using a mock server, you should make the verification at a lower level. You should validate that the object responsible for network communication would make the call to the server, without actually making that call. By the call, I mean the HTTP request. The call in your code should happen, and it's just the response that is faked (basically, the code in the tested module should not be different from the code used in production). We want to run the test in memory, without using any external resources. To achieve that we will need to mock some of the objects, using mocking frameworks or by creating special implementations for testing purposes.

To decouple your modules use interfaces!

Pizza order example
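A minimal sketch of what such a decoupled design could look like. The names IOrderGateway, HttpOrderGateway and PizzaOrderService are illustrative assumptions, not from an actual SDK; the point is that the ordering logic depends only on an interface, so a test double can stand in for the real HTTP client.

public interface IOrderGateway
{
    void Send(string orderMessage);
}

public class HttpOrderGateway : IOrderGateway
{
    public void Send(string orderMessage)
    {
        // The production implementation would issue the HTTP request to the
        // pizzeria here.
    }
}

public class PizzaOrderService
{
    private readonly IOrderGateway gateway;

    public PizzaOrderService(IOrderGateway gateway)
    {
        this.gateway = gateway;
    }

    public void MakeOrder(string pizza)
    {
        // The unit of work under test: compose and send the order message.
        gateway.Send("ORDER:" + pizza);
    }
}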

What if the payment is done by 3rd party system?

A payment system that connects us to a 3rd party website and expects us to type in our credit card number is a true budget killer. We definitely want to skip that step, not only because we could end up with huge credit card debt, but also because we want to save our own time by not typing in the credit card number every time the test is executed. We could of course 'hire' a student worker to do the job for us (and let them gain the invaluable experience), or we could be smart about it. A true unit test needs to be fully automated; no user interaction should be required. We can achieve what we want by using mock objects again. Mocking lets us override the behaviour of certain parts of the system, which allows us to simplify and skip some steps in the ordering process. In this case we mock our Payment module and tell it to confirm our payment instantly, without redirecting us anywhere. Additionally, it gives us full control over the test. Imagine if the 3rd party servers stopped responding or became super slow, and all our test suites failed because of that.

A simplified example of a test for making pizza order:

Pizza order test
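A hedged sketch of what such a test could look like with NUnit and a hand-rolled test double, building on the illustrative types above:

using NUnit.Framework;

public class FakeOrderGateway : IOrderGateway
{
    public string LastMessage;

    public void Send(string orderMessage)
    {
        // Record the message instead of touching the network.
        LastMessage = orderMessage;
    }
}

[TestFixture]
public class PizzaOrderServiceTests
{
    [Test]
    public void MakeOrder_SendsOrderMessage()
    {
        var gateway = new FakeOrderGateway();
        var service = new PizzaOrderService(gateway);

        service.MakeOrder("Margherita");

        Assert.AreEqual("ORDER:Margherita", gateway.LastMessage);
    }
}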

Great… anything else?

There are a few more things that will make your tests better that are not directly related to unit testing. Readability and maintainability will make it easy for the person who takes over the tests after you to jump in and make changes. Readable tests can also serve as internal documentation for your feature. Less time spent on writing documentation gives you more time to write tests!

Never make your tests dependent on each other. The order of execution should never matter! Dependencies make the tests hard to debug and maintain. If a test fails right after a previous one did, simply because it was dependent on it, the true state of your code is obscured by the misleading results. Share the setup between tests, but always keep the tests independent of each other.
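One common way to share setup while keeping tests independent is a per-test setup method. An NUnit-style sketch, reusing the illustrative types from the earlier examples:

using NUnit.Framework;

[TestFixture]
public class IndependentPizzaOrderTests
{
    private FakeOrderGateway gateway;
    private PizzaOrderService service;

    [SetUp]
    public void CreateFreshService()
    {
        // Runs before every test: each test gets its own gateway and service,
        // so no state can leak from one test into the next.
        gateway = new FakeOrderGateway();
        service = new PizzaOrderService(gateway);
    }

    [Test]
    public void Order_IsSent()
    {
        service.MakeOrder("Margherita");
        Assert.IsNotNull(gateway.LastMessage);
    }

    [Test]
    public void Order_ContainsPizzaName()
    {
        service.MakeOrder("Quattro Formaggi");
        StringAssert.Contains("Quattro Formaggi", gateway.LastMessage);
    }
}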

Last but not least, you should take execution time into consideration. Some people like to see the execution of unit tests as a part of the compilation process. You don't want compilation to take too long, so you should make your unit tests run fast.




Unit tests are an important part of your general test suite and should be the foundation of all your types of tests. To visualize the idea, take a look at the testing pyramid (the inverse of the ice-cream cone anti-pattern), which depicts a healthy distribution of different types of tests in your test project.

So the original pizza ordering wasn’t that bad in the end?

Not necessarily. My point was to show what unit tests are and what they are not. An end-to-end scenario is also an option for an automated test, but you shouldn't base your test suites on them. The higher you go in the pyramid, the harder the tests are to maintain and debug, and the slower they get. Integration tests and UI tests (thunder strikes) are also important, but it's all about balance.

It's all nice in theory, but you are probably wondering how it applies to Unity. Unity, to favour performance, has some limitations that work against testability. Lack of interface serialization is one of them. But not all of your code is required to be serialized! There are workarounds for those limitations that I will write about in the future.

The next blogpost will be about designing your MonoBehaviour with testability in mind. Stay tuned!

The future of scripting in Unity

Recently we talked about Unity and WebGL. In that post we briefly spoke about how scripting works in WebGL, using a new technology called "IL2CPP". However, IL2CPP represents a lot more than just a scripting solution for WebGL; it's our own high performance .NET Runtime, to be rolled out on more platforms.

But before we delve into the future, let’s talk about the present.

Scripting in Unity today

We leverage Mono (and WinRT on Windows Store Apps and Windows Phone) to bring the ease of use of C#, access to 3rd party libraries, and near native performance to Unity. However, there are some challenges:

  • C# runtime performance still lags behind C/C++
  • Latest and greatest .NET language and runtime features are not supported in Unity’s current version of Mono.
  • With around 23 platforms and architecture permutations, a large amount of effort is required for porting, maintaining, and offering feature and quality parity.
  • Garbage collection can cause pauses while running

These issues have remained in the front of our minds over the past few years as we sought to address them. Concurrently, investigations into supporting scripting for WebGL were occurring. As each progressed forward, these two paths converged into a single approach.

With the problem scope clear, we experimented with different ways to solve it. Some were promising; others were not. But ultimately we found an innovative solution, and it proved to be the right way forward.

That way forward is IL2CPP.

IL2CPP: the quick and dirty intro

IL2CPP consists of two pieces: an Ahead of Time (AOT) compiler and a Virtual Machine (VM).

These two parts represent our own implementation of the Common Language Infrastructure, similar to .NET or Mono. It is compatible with the current scripting implementation in Unity.

Fundamentally, it differs from the current implementation in that the IL2CPP compiler converts assemblies into C++ source code. It then leverages the standard platform C++ compilers to produce native binaries.

At runtime this code is executed with additional services (like a GC, metadata, platform specific resources) that are provided by the IL2CPP VM.

The benefits of IL2CPP

Let’s talk about each one of the previously mentioned issues and how IL2CPP addresses each of them.


IL2CPP seeks to provide the ease of use and productivity of C# with the performance of C++.

It allows the current, productive scripting workflow to remain the same while giving an immediate performance boost. We’ve seen 2x-3x performance improvements in some of our script-heavy benchmarks. This performance boost is due to a few reasons.

  • C++ compilers and linkers provide a vast array of advanced optimisations previously unavailable.
  • Static analysis is performed on your code for optimisation of both size and speed.
  • Unity-focused optimisations to the scripting runtime.

While IL2CPP is definitely still a work in progress, these early performance gains are indicative of great things to come.

.NET Upgrade

A very frequent request we get is to provide an upgraded runtime. While .NET has advanced over the past years, Unity currently supports .NET 2.0/3.5 era functionality for both the C# compiler and the class libraries. Many users have requested access to newer features, both for their code as well as 3rd party libraries.

To complement IL2CPP, as it matures, we will also be upgrading to recent versions of the Mono C# compiler, base class libraries, and runtime for use in the editor (the editor will not switch to IL2CPP, to allow for fast iteration during development). These two things combined will bring a modern version of .NET to Unity.

It’s also important to note that we are collaborating with Microsoft to bring current and future .NET functionality to Unity, ensuring compatibility and quality.

Portability and Maintenance

While this area may sound like an internal issue for Unity to deal with, it also affects you. The Mono virtual machine has extensive amounts of platform and architecture specific code. When we bring Unity to a new platform, a large amount of our effort goes into porting and maintaining the Mono VM for that platform. Features (and bugs) may exist on some platforms but not others. This affects the value which Unity strives to provide to you: easy deployment of the same content to different platforms.

IL2CPP addresses these issues in a number of ways:

  • All code generation is done to C++ rather than architecture specific machine code. The cost of porting and maintenance of architecture specific code generation is now more amortised.
  • Feature development and bug fixing proceed much faster. For us, days of mucking about in architecture specific files are replaced by minutes of changing C++. Features and bug fixes are immediately available for all platforms. In its current state, IL2CPP support is being ported to new platforms in a short amount of time.

Additionally, platform or architecture specific compilers can be expected to optimise much better than a singular code generator. This allows us to reuse all the effort that has gone into the C++ compilers, rather than reinventing it ourselves.

Garbage Collection

IL2CPP is not tied to any one specific garbage collector, instead interacting with a pluggable API. In its current iteration IL2CPP uses an upgraded version of libgc, even as we look at multiple options. Aside from just the GC itself, we are investigating reducing GC pressure by analysis done in the IL2CPP compiler.

While we don't have a lot more to share at the moment, research is ongoing. We know this is important to many of you; we'll continue working on it and keep you informed in future blog posts. Unrelated to IL2CPP, but worth mentioning in the context of garbage collection: Unity 5 will see more and more allocation-free APIs.
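As an aside, the general principle behind reducing GC pressure applies to user scripts today as well: avoid per-frame allocations by reusing buffers. A small illustrative sketch:

using System.Collections.Generic;
using UnityEngine;

public class NoGarbagePerFrame : MonoBehaviour
{
    // Allocated once and reused, so Update does not create a new list every
    // frame; per-frame garbage is what eventually triggers GC pauses.
    private readonly List<Vector3> scratch = new List<Vector3>(64);

    void Update()
    {
        scratch.Clear();
        for (int i = 0; i < 64; i++)
            scratch.Add(transform.position + Vector3.up * i);

        // ... consume the scratch list here, without allocating ...
    }
}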

What IL2CPP is not

IL2CPP is not recreating the whole .NET or Mono toolchain. We will continue to use the Mono C# compiler (and perhaps later, Roslyn). We will continue to use the Mono class libraries. All currently supported features and 3rd party libraries which work with Mono AOT should continue to work with IL2CPP. We are only seeking to provide a replacement for the Mono VM and AOT compiler, and will keep on leveraging the wonderful Mono Project.

When can I try IL2CPP?

By now we hope you are just as excited as we are to use IL2CPP and wondering when you can get your hands on it! An early version of IL2CPP will be available as part of WebGL publishing in Unity 5.

Beyond WebGL, we are continuing development of IL2CPP for other platforms. In fact, we already have working implementations on a number of our supported platforms. We expect to be rolling out at least one additional platform later this year. Our current plan is to have iOS be the next platform shipping with IL2CPP support.

The planned upgrades of our Mono toolchain will follow after IL2CPP is available on more platforms and has matured.

One platform that will never be supported by IL2CPP is the WebPlayer; this is due to security implications. And as noted earlier, the editor will continue to use Mono.

Additionally, you can see the IL2CPP runtime in action today. As mentioned, the two WebGL demos we posted are IL2CPP-powered.

What’s next?

We are still hard at work on IL2CPP: implementing new features, optimising code generation, fixing bugs, and supporting more platforms. We’ll keep posting more in-depth blogs as we make progress and talk about it with you on the forums.

The Scripting Team.

Teleporter demo

We'd like to share with you a project that was built during the R&D period of the Physically Based Shader and Reflection Probes.

This benchmark project is one among several which helped us identify what functional improvements were necessary from an artist's production perspective.

We compared offline and realtime rendering methods and output, aiming to achieve both an increase in visual quality and a more streamlined, smoother production workflow for artists, which will open up playful possibilities for extending graphics beyond realism towards stylization.

The demo uses the Standard PBR shader and displays a range of shiny and rough metallic, plastic and ceramic materials, which naturally use the new native cubemap reflections (or HDR reflection probes). The material output in the movie is at a prototype stage and the shader is still evolving.

The textures changed constantly throughout the process as the shader evolved. In total, the scene is composed of around 30 texture sets, both manually authored and procedurally generated. At this point, scanned textures were not used at all. Typically, a texture set consists of albedo, specular, gloss, occlusion and a normal map, with sizes ranging from 256px to 4k. Background surfaces demanded less surface detail and fewer textures. In some cases, we casually created materials by pushing sliders to adjust color and float values until they matched the references. The secondary (detail-map) slots add a layer of dust, cracks and crevices to the surfaces, which can be spotted in the close-up camera shots.

The heated-up revolving core is achieved simply by animating emissive values and combining the results with HDR bloom to give a glowing-hot impression.
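A hedged sketch of the technique: drive an emissive color above 1 over time so HDR bloom picks it up. The "_EmissionColor" property name matches the shipping Standard shader; the demo used a prototype shader, so treat the property name and values as assumptions.

using UnityEngine;

public class GlowingCoreSketch : MonoBehaviour
{
    public Color hotColor = new Color(1f, 0.45f, 0.1f);
    public float maxIntensity = 4f; // HDR value above 1 so bloom catches it

    private Material material;

    void Start()
    {
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Ramp the heat up and down and write it into the emissive color.
        float heat = Mathf.PingPong(Time.time * 0.25f, 1f);
        material.SetColor("_EmissionColor", hotColor * heat * maxIntensity);
    }
}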

The cave is a large-scale environment, and the 100-meter-tall machine itself was intentionally used to challenge performance and to serve as a lighting benchmark. This called for a variety of convolved HDR reflection probes/cubemaps to be placed along its body, probes that could adapt to the changes of light that gradually diminishes towards the bottom of the cave and when the heated core lights up. Certain elements use real-time reflections while many are kept to static reflections. The HDR reflection probes remain true to Unity's ideology of keeping workflows simple, and are nearly effortless to apply and use.

The background scene uses directional lightmaps, while the machine is composed partly of skinned and partly of dynamic meshes that are hooked up to light probes and use Image-Based Lighting and a variety of light sources.

To be able to see the output of the shader during production, it is crucial to have HDR rendering represented in the Scene view.

We are most excited to share this short film with you and are impatient to see what our talented community can produce with the new set of tools that is coming. We are looking forward to seeing artists amaze us with their limitless creativity.

Custom == operator, should we keep it?

When you do this in Unity:

if (myGameObject == null) {}

Unity does something special with the == operator. Instead of what most people would expect, we have a special implementation of the == operator.

This serves two purposes:

1) When a MonoBehaviour has fields, in the editor only[1], we do not set those fields to “real null”, but to a “fake null” object. Our custom == operator is able to check if something is one of these fake null objects, and behaves accordingly. While this is an exotic setup, it allows us to store information in the fake null object that gives you more contextual information when you invoke a method on it, or when you ask the object for a property. Without this trick, you would only get a NullReferenceException, a stack trace, but you would have no idea which GameObject had the MonoBehaviour that had the field that was null. With this trick, we can highlight the GameObject in the inspector, and can also give you more direction: “looks like you are accessing a non initialised field in this MonoBehaviour over here, use the inspector to make the field point to something”.

Purpose two is a little bit more complicated.

2) When you get a c# object of type "GameObject"[2], it contains almost nothing. This is because Unity is a C/C++ engine. All the actual information about this GameObject (its name, the list of components it has, its HideFlags, etc) lives in the c++ side. The only thing that the c# object has is a pointer to the native object. We call these c# objects "wrapper objects". The lifetime of these c++ objects, like GameObject and everything else that derives from UnityEngine.Object, is explicitly managed. These objects get destroyed when you load a new scene, or when you call Object.Destroy(myObject); on them. The lifetime of c# objects gets managed the c# way, with a garbage collector. This means that it's possible to have a c# wrapper object that still exists, that wraps a c++ object that has already been destroyed. If you compare this object to null, our custom == operator will return "true" in this case, even though the actual c# variable is in reality not really null.

While these two use cases are pretty reasonable, the custom null check also comes with a bunch of downsides.

  • It is counterintuitive.
  • Comparing two UnityEngine.Objects to each other or to null is slower than you'd expect.
  • The custom == operator is not thread safe, so you cannot compare objects off the main thread. (This one we could fix.)
  • It behaves inconsistently with the ?? operator, which also does a null check, but that one does a pure c# null check and cannot be redirected to our custom null check (see the sketch below).
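Here is a small sketch of that last inconsistency: it destroys a referenced GameObject and then compares it in three different ways. The field name is illustrative.

using UnityEngine;

public class NullCheckSketch : MonoBehaviour
{
    public GameObject target; // assign any other GameObject in the inspector

    void Start()
    {
        Destroy(target); // the native object is gone at the end of this frame
    }

    void Update()
    {
        // Custom == operator: treats the destroyed object as null.
        Debug.Log(target == null);          // true after destruction

        // Pure c# reference check: the wrapper object still exists.
        Debug.Log((object)target == null);  // false

        // ?? uses the pure c# check, so it hands back the destroyed wrapper
        // instead of falling through to the fallback value.
        GameObject fallback = target ?? gameObject;
        Debug.Log(fallback == gameObject);  // false: we got the destroyed wrapper
    }
}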

Going over all these upsides and downsides, if we were building our API from scratch, we would have chosen not to do a custom null check, but instead have a myObject.destroyed property you can use to check if the object is dead or not, and just live with the fact that we can no longer give better error messages in case you do invoke a function on a field that is null.

What we're considering is whether or not we should change this. It is a step in our never-ending quest to find the right balance between "fix and clean up old things" and "do not break old projects". In this case we're wondering what you think. For Unity 5 we have been working on the ability for Unity to automatically update your scripts (more on this in a subsequent blogpost). Unfortunately, we would be unable to automatically upgrade your scripts for this case, because we cannot distinguish between "this is an old script that actually wants the old behaviour" and "this is a new script that actually wants the new behaviour".

We're leaning towards "remove the custom == operator", but are hesitant, because it would change the meaning of all the null checks your projects currently do. For cases where the object is not "really null" but a destroyed object, a null check used to return true, and if we change this it will return false. If you wanted to check whether your variable was pointing to a destroyed object, you'd need to change the code to check "if (myObject.destroyed) {}" instead. We're a bit nervous about that: if you haven't read this blogpost, and most likely even if you have, it's very easy to not realise this changed behaviour, especially since most people do not realise that this custom null check exists at all.[3]

If we change it, we should do it for Unity 5 though, as the threshold for how much upgrade pain we're willing to have users deal with is even lower for non-major releases.

What would you prefer us to do? Give you a cleaner experience, at the expense of having to change null checks in your project, or keep things the way they are?

Bye, Lucas (@lucasmeijer)

[1] We do this in the editor only. This is why, when you call GetComponent() to query for a component that doesn't exist, you see a C# memory allocation happening: we are generating this custom warning string inside the newly allocated fake null object. This memory allocation does not happen in built games. This is a very good example of why, if you are profiling your game, you should always profile the actual standalone or mobile player and not the editor, since we do a lot of extra security / safety / usage checks in the editor to make your life easier, at the expense of some performance. When profiling for performance and memory allocations, never profile the editor, always profile the built game.

[2] This is true not only for GameObject, but everything that derives from UnityEngine.Object

[3] Fun story: I ran into this while optimising GetComponent<T>() performance; while implementing some caching for the transform component I wasn't seeing any performance benefits. Then @jonasechterhoff looked at the problem, and came to the same conclusion. The caching code looks like this:

private Transform m_CachedTransform;
public Transform transform
{
    get
    {
        if (m_CachedTransform == null)
            m_CachedTransform = InternalGetTransform();
        return m_CachedTransform;
    }
}

It turns out two of our own engineers missed that the null check was more expensive than expected, and that it was the cause of not seeing any speed benefit from the caching. This led to "well, if even we missed it, how many of our users will miss it?", which resulted in this blogpost.

Sustained Engineering: Patch builds

We're delighted to announce a change to the way we will release builds. Previously, all bug fixes have been rolled into publicly released updates of the editor, shipped and published by the R&D developer team. As the complexity of the product and the organization grows, it gets increasingly difficult to maintain all of this in one place. This has meant that bug fixes have taken longer and longer to get into the hands of our customers. We don't like that.

In January we merged the QA and Support teams into one organization with me (QA Director) heading them. The main purpose of this merge was to enable us to do sustained engineering on our growing number of versions, such that we can better handle the issues our customers face in current released versions. By "sustained engineering" we mean working hard to improve the reliability of a currently shipped version of Unity, rather than only making fixes in future versions, which can take time to complete, pass through alpha and beta cycles, be fully QA'd, and ship. Effectively it is an expansion of the responsibility of the support team to also be able to fix bugs, backport fixes and release them directly to customers as patches.

Since the support team is not of endless size and the risk of making code changes on a released version is high, we are going to be very careful about which bugs we will fix. Pieces of the puzzle include the severity of the issue for customers, the number of customers affected, how long it will take to fix, and a slew of other factors. The bottom line is that we will not be able to, or even want to, fix everything everyone asks us to, but we will be able to do more than we do today and do it more often. Customers affected by annoying bugs should see these problems resolved much more quickly.

To be able to handle this, we have recently hired a release manager, Jawa, to run the release train for these patches. He will manage the internal communication between all the teams at Unity, as well as the release procedure for each of the patches, and update the forums on releases.

When we say patches, each one is actually a full release of the entire editor, with all runtimes; the complexity of Unity forces us to do that. However, if you are hit by the issues we fix, it will be available on our forums for you to download, just like any other version of Unity you use. The main difference is that we will distribute the patches through the forums and we will NOT enable the editor update check on them. A patch will have a version number ending in "pX"; for example, a patch for Unity 4.3.8f1 will be called 4.3.8p1. This patch will have a small number of bugs fixed, and these bugs will be listed in the release notes for that version. Then a new patch will be generated, which will include these fixes plus newly fixed bugs. This will be 4.3.8p2, and so on. Once we have a set of patches ready, we will roll them up into a new "fX" (f for Final) release, e.g. 4.3.9f1, which will undergo full regression tests and be enabled through the editor update checks for everyone. Note that we expect only customers affected by the reported fixed bugs will want to migrate to patched versions of Unity.

To really kickstart this journey, we have gathered support engineers, field engineers, QA, build engineers, infrastructure engineers and release managers in Brighton this week. A total of 28 people have joined in a week of learning the processes, doing the actual fixes and shipping them. I have to caveat that the focus here is on learning a very hard process of getting the code done right; it is NOT about having a large quantity of fixes, so the first few patches will be very limited.

It has been fantastic to see everyone working on a common goal for a week and it is fantastic to be able to present you with the very first results. Join us on the next journey and check the first patch here: http://forum.unity3d.com/threads/246198-Unity-Patch-Releases


Unity Awards 2014 Open Submissions Begin

It's that time of year again when we open up submissions for the Unity Awards! Submissions will be open from now until June 30, 2014.

If you’ve created something awesome with Unity in the past year, whether it’s a game or some other interactive experience, we’d love to hear about it. All you have to do is head to the submission portal and click the link at the bottom that will start the process.

Submit your project for the Unity Awards now!

For those unfamiliar, the Unity Awards are held each year during the Unite conference to recognize the most impressive creations made using Unity. This year, the conference is taking place on August 20-22 in Seattle and the Awards ceremony itself will take place on August 21 at McCaw Hall. Read more about the conference and grab tickets at our Unite site.

This year, we're changing the voting process slightly. While the nomination committee here at Unity will still look through the hundreds of projects submitted and narrow them down to six finalists in each category, we're going to open up voting to the community for all categories. Community votes will account for 50% of the total vote, with Unity employees accounting for the other 50%. This will be the same for all categories except the Community Choice, for which the community will account for 100% of the votes. General voting will begin in July 2014.

The categories this year include:

Best 3D Visual Experience – Submissions for this category will be judged based on artistic merit including thematic and stylistic cohesion, creativity, and/or technical skill.

Best 2D Visual Experience – Submissions for this category will be judged based on artistic merit including thematic and stylistic cohesion, creativity, and/or technical skill.

Best Gameplay – Intuitive control, innovation, creativity, complexity, and fun are what make games enjoyable and entertaining–we’re looking for games that excel in one or all of these areas.

Best VizSim Project – Unity projects come in all shapes and sizes; this year we’re looking for projects that have some real world grounded applications for visualization, simulation, and training.

Best Non-game Project – Unity-authored products that fall outside of games or VizSim, including projects such as art, advertising, interactive books and comics, digital toys, interactive physical installations, and informational programs, will want to submit for this award.

Best Student Project – This award is for projects (games or otherwise) worked on by students and completed as part of the curriculum of an educational institution. Projects will be judged based on creativity, technical merit, and overall artistic cohesion among graphics, sound, and presentation.

Technical Achievement – Any project that provides an excellent example of technical excellence in Unity including but not limited to graphics, scripting, UI, and/or sound.

Community Choice – This category will be voted on by the community of game developers and represents the favorites of the community across the board.

Golden Cube (best overall) – This award is for the best overall project made with Unity in the last year. Everything from technical achievement and visual styling to sound production and level of fun will be taken into account to choose an overall winner.


Of course, there are some rules for submission that you’ll need to know, so here they are:

  • Only Unity-authored projects are eligible for nomination.
  • Projects must have been released from July 1, 2013 to June 30, 2014 to be eligible with the exception of student project submissions which must have been part of the coursework in the 2013-2014 school year.
  • Any projects nominated for previous years of the Unity Awards are ineligible for the 2014 Unity Awards with the exception of projects that were previously student work and have since turned into finished commercial projects.
  • Games currently in early access programs that are not considered "final" products by June 30, 2014 will not be accepted to the 2014 Unity Awards.
  • Individuals or teams are welcome to enter multiple projects so long as they adhere to all other rules.

So submit those projects, tell your friends who released games this last year to submit their projects, and keep your eyes open in July for another announcement that community voting has begun. We're really looking forward to seeing all of your submissions!