Monitoring Unity Performance, Part I

Hello! My name is Sakari Pitkänen. I work as a developer on the Toolsmiths Team here at Unity. In this blog post I will tell you about how we do automated performance monitoring of the Unity development branches.

With an ever-increasing number of test configurations (platforms, operating systems, versions) it gets increasingly difficult to keep track of everything that is going on. We need visibility, and to get this we need data. Our main reason for gathering performance data from Unity is to prevent performance regressions.

Performance data

Finding performance regressions

As we do day-to-day development, we are not likely to notice performance gradually degrading over time, which is a big problem. We want to always try to make the next version of Unity perform better than the current one – and we definitely don’t want anything to be slower without us noticing it.

Most of our current performance tests measure time over some specific functionality of Unity. For example, we can measure the frame time over some number of frames when utilizing a specific rendering functionality. The tests are designed to look for regressions, so the measurements are implemented in a way that whenever a measurement increases significantly we have a performance regression. Each time we run a test, we run it many times and use the median value of the samples, so that a single bad sample won’t show up as a regression. Besides time, we can measure other things, like memory usage.
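As a hypothetical sketch of the approach described above (the function names and the 10% threshold are illustrative, not our actual implementation), the median-of-samples scheme might look like this:

```python
# Sketch of median-based regression detection: run a measurement several
# times, keep the median so a single bad sample can't masquerade as a
# regression, then compare against a baseline with a relative threshold.
from statistics import median

def measure(samples):
    """Collapse repeated runs of one test into a single data point."""
    return median(samples)

def is_regression(baseline_ms, candidate_ms, threshold=0.10):
    """Flag a regression when the candidate median is significantly
    slower than the baseline (lower frame time is better)."""
    return candidate_ms > baseline_ms * (1 + threshold)

# One noisy outlier among the candidate samples does not trip the check:
baseline = measure([16.6, 16.7, 16.5, 16.8, 16.6])   # 16.6 ms
candidate = measure([16.7, 16.6, 40.0, 16.8, 16.7])  # 16.7 ms
print(is_regression(baseline, candidate))            # False
```

The median, unlike the mean, is unaffected by the one 40 ms outlier above, which is exactly why we use it.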

Before we dig into the details of how we do this, let’s look at a concrete example: How we found a performance regression and used our data points to verify that it got fixed.

Performance regression and fix

In Unity version 4.3.0 we had a performance regression that affected a specific platform, Windows Standalone. Below is a table that has results for a limited set of tests run on four different platforms and two versions of Unity, 4.2 and 4.3. For all of these tests, the values are median values of measured frame times in milliseconds. The table is not showing performance per se; instead it lists the sample values. This means that an increase in a measurement value can be considered a performance regression (red) and a decrease can be considered an improvement (green).

Performance test results for Unity 4.3.0

The results show that the Windows Standalone platform has a significant performance regression that affects most of the selected tests. From the test names one can already assume that the cause of the regression is probably graphics related. Unfortunately, we were still working on the test rig when 4.3.0 was released, so we didn’t get the data to catch this before shipping. That will surely not happen next time: as we widen the coverage with more configurations, we expect to significantly reduce the risk of shipping with performance regressions.

We did find the cause of this particular regression and promptly fixed it for Unity version 4.3.1. Then we ran the tests again, now comparing the last three released versions of Unity: 4.2, 4.3.0 and 4.3.1.

Performance test results for Unity 4.3.1

We could verify that the fix was effective and that these tests show no significant changes in performance between Unity versions 4.2 and 4.3.1.

In Part II of this post I’ll tell you about the rig we have built for performance testing.

Occlusion Culling in Unity 4.3: Troubleshooting

The following blog post was written by Jasin Bushnaief of Umbra Software to explain the updates to occlusion culling in Unity Pro 4.3.

This is the last post in a three-post series. In the first one, I described the new occlusion culling system in Unity 4.3 and went through the basic usage and parameters. In the second one, I gave a list of best practices and general recommendations for getting the most out of Umbra. This last post deals with troubleshooting some common issues people tend to encounter when using Umbra.


Unity offers a couple of helpers for figuring out what’s going on in occlusion culling. These visualizations may help you figure out why occlusion culling isn’t behaving quite as you’d expect. The visualizations can be found by enabling the Visualization pane in the Occlusion window and selecting the camera.


The individual visualizations can then be enabled and disabled in the Scene view, in the Occlusion Culling dialog.


Let’s take a look at what the different visualizations do.

Camera Volumes

The Camera Volumes visualization simply shows you, as a grey box, in which cell the camera is located. For more information on what the cells are, take a look at the first post. This is one way of figuring out how the value of smallest occluder changes the output resolution of the data, for instance. Also, if it looks like the cell bounds don’t make sense, for example when the cell incorrectly extends to the other side of what should be an occluding wall, something may be amiss.


Visibility Lines

The purpose of the Visibility Lines visualization is to show you the line of sight that Umbra sees. The way it works is that Umbra will project its depth buffer back into the scene and draw lines to the furthermost non-occluded points in the camera’s view. This may help you figure out, for instance, which holes or gaps cause “leaks” in occlusion, ultimately causing some objects to become visible. This may also reveal dubious situations where an object that clearly should be a good occluder doesn’t occlude anything because, say, the static occluder flag was never enabled for it.



Portals

The Portals visualization will draw all the traversed portals as semi-transparent axis-aligned quads. Not only will this help you get an idea of how many portals Umbra traverses and thus help you deal with occlusion culling performance tweaking, but it also provides another way of looking at what’s in Umbra’s line of sight. So you can see if there are some spots in the scene that don’t really cause occlusion, and how the portals get placed into the scene in general.



While occlusion culling should just work in Unity, sometimes things don’t go quite as you’d expect. I’ll go over the most common issues people tend to run into, and how to solve those issues in order to make your game run smoothly.

Hidden objects aren’t being culled!

Sometimes people wonder why some objects are reported visible by Umbra when in reality they seem to be occluded. There can be many reasons for this. The most important thing to understand is that Umbra is always conservative. This means that it always opts for objects being visible rather than invisible whenever there’s any uncertainty in the air. This applies to all tie-breaking situations as well.

Another thing to note is that the occlusion data represents a simplified version of the scene’s occluders. More specifically, it represents a conservatively simplified version, meaning some of the occlusion erodes and loses detail.

The level of detail that gets retained in the data is controlled by smallest occluder. Decreasing the value will produce higher-resolution data that should be less conservative, but at the same time, culling will lose some speed and the data will get larger.

Visible objects are being culled!

Probably the most puzzling problematic scenario is when something gets reported by Umbra as occluded even though it shouldn’t be. After all the promises of always being conservative and never returning false negatives, how can this happen?

Well, there can be a couple of things going on. The first and by far the most common case is that you’re looking at something through a hole, gap or crack which gets solidified by Umbra’s voxelization. So typically the first thing you should try is to reduce the value of smallest hole and see if that fixes the issue. You can try temporarily tuning it down even quite a bit just to test if that’s the issue.

There are situations where this may not be completely obvious. For instance, if you have a book shelf in your scene where individual books are marked as occluders, too large a smallest hole may cause some of the books to be occluded either by the shelf or by the other books. So again, just decreasing the value of smallest hole is probably the first thing you should try.

Another case where objects may disappear is when your backface limit has been set to something less than 100 and your camera is in the vicinity of back-facing triangles. Note that the camera doesn’t have to actually be looking at the triangles nor do the triangles have to be facing away from the camera at that particular spot. It is enough that there is a topologically connected place (i.e. not behind a wall or anything) close to the camera from which some back-facing triangles can be seen.

The first thing to do to remedy this is obviously to try with a backface limit of 100 and see if that fixes the issue. If it does, it may make sense to modify the geometry either by re-modeling some of the assets so that they’re two-sided or solid, or by just removing the static occluder flag from the problematic objects. Or, if you don’t care about the occlusion data size or don’t get a huge benefit out of the backface optimization, disabling the backface test by setting the value to 100 is of course also an option.

Culling gets weird very close or inside an occluder!

Culling may behave strangely if your camera goes inside an occluder, or infinitesimally close to one. Typically this may occur in a game with a 3rd person camera. Because Umbra considers occluders as solid objects, culling from inside one will typically mean that most of the stuff in your scene will get culled. On the other hand, if the backface test has been enabled, many of the locations inside occluders will have been removed from the data altogether, yielding undefined results. So you should not let the camera go inside occluders!

To be more specific, in general Umbra will be able to guarantee correct culling when the camera is further away from an occluder than the value of smallest hole. In most cases, going even closer will still work, but because of the limitations the voxel resolution imposes on the accuracy of the occlusion data, going super close to an occluder may sometimes result in the camera being incorrectly assigned to a location inside an occluder. Hint: use the “camera volume” visualization to see in which cell the camera is located and what it looks like.

Generally, when the backface test is enabled (i.e. when backface threshold is smaller than 100), Umbra will do a better job near occluders. Because it is able to detect the insides of occluders, it can correspondingly dilate all valid locations slightly towards them, so that you’ll get correct results even if you go arbitrarily close to an occluder. So if you cannot prevent your camera from going very close to (or even slightly inside) an occluder, the first thing you may wish to try is to set backface threshold to something smaller than 100. This will help with dilation and may fix the issue.

If tweaking backface threshold does not help, or if your camera goes very deep inside an occluder, the only thing left to do is to simply remove the occluder flag from the object.

Culling is too slow!

The reason for slow culling is typically very simple. Umbra traverses too many portals, and thus the visibility query takes a long time. The parameter that controls the portal resolution in the occlusion data is smallest occluder. A larger value will produce a lower-resolution portal graph, which is generally faster to traverse, up to a point. There are some situations, however, where this is not the case. Specifically, when having to simplify the occluder data conservatively, sometimes the increased conservativity of a lower-resolution graph may cause the view distances to increase, and the total amount of traversed portals to increase with it as well. But this is not the most typical of situations. In general, a large smallest occluder value will produce data that is faster to process in the runtime, at the cost of reduced accuracy of the occlusion.

Another, but obviously a bit more arduous way of making sure that the number of traversed portals doesn’t get out of hand is to modify the geometry of the scene so that the view distances don’t get too long in the problematic areas. Manually inserting occlusion into open areas will of course cause the traverse to terminate sooner, reducing the amount of processed portals and thus making occlusion culling faster.

Baking is too slow!

The speed of baking largely depends on one thing: the number of voxels that need to be processed. In turn, the number of processed voxels is defined by two factors: the dimensions of the scene and the voxel size. Assuming you can’t do much about the former, the latter you can easily control with the smallest hole parameter. A larger value will of course speed up baking. So, it may make sense to start with a relatively large value and then tune it down if your objects are incorrectly disappearing because of too aggressive occluder generation. A microscopic smallest hole may cause baking to take forever and/or to consume ridiculous amounts of memory.
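To see why a microscopic smallest hole hurts so much, a back-of-envelope calculation helps: the number of voxels grows with the cube of (scene size / voxel size), so halving smallest hole multiplies the voxel count by eight. The scene size below is made up purely for illustration:

```python
# Illustration of cubic voxel growth: halving the voxel size
# (i.e. the "smallest hole" value) multiplies the bake work by 8.
def voxel_count(scene_size_m, voxel_size_m):
    per_axis = scene_size_m / voxel_size_m
    return per_axis ** 3

scene = 500.0  # hypothetical 500 m cube of scene bounds
print(voxel_count(scene, 0.5))   # 1e9 voxels
print(voxel_count(scene, 0.25))  # 8e9 voxels
```

The same cubic relationship is behind the “Failure in split phase” error discussed further below: vast scenes combined with tiny parameter values simply produce too much work.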

Occlusion Data is too large!

If baking your scene produces too much occlusion data, there are a couple of things you can try. First, changing the value of backface limit to something smaller than 100, for instance 99, 50 or even 30 may be a good start. If you do this, make sure that culling works correctly in all areas your camera may be in. See the previous post for more information.

If changing backface limit is not an option, produces unpredictable results or doesn’t reduce the data size enough, you can try increasing the value of smallest occluder which determines the resolution of the occlusion data and thus has a very significant impact on the size. Note that increasing smallest occluder also increases the conservativity of the results.

Finally, it’s worth noting that huge scenes will naturally generate more occlusion data than small ones. The size of the occlusion data is displayed at the bottom of the Occlusion window.


“Failure in Split Phase”

In some rare cases, where the scene is vast in size and the smallest occluder parameter has been set to a very small value, baking may fail with the error “Failure in split phase”. This occurs because the initial step of the bake tries to subdivide the scene into computation tiles. The subdivision is based on the smallest occluder parameter, and when the scene is humongous in size (like, dozens of kilometers in each direction) too many computation tiles may be created, resulting in an out-of-memory error. This, in turn, manifests as “Failure in split phase” to the user. Increasing the value of smallest occluder and/or splitting up the scene into smaller chunks will get rid of this error.

That’s it!

This concludes our three-post series of occlusion culling in Unity 4.3. For more information about Umbra, visit


Unity and Kii Cloud Team Up for the Love of the Game

(This guest blog post comes from our online service partner, Kii)


With Kii, game developers get a fast and scalable backend, powerful analytics and game distribution services, so they can focus on the stuff that matters — the game experience. Back in September, when we released our Unity SDK, we briefly explained what Unity is and how your Unity games can benefit from using Kii Cloud, which lets you focus on what matters to your players rather than on developing the game backend.

We’ve learned a lot about game development since then, including that practically all successful games have some common components. They start with a great idea, have an awesome user experience, and have a scalable backend to support any spikes in growth and ongoing performance needs. The best games also get insights into player behaviors and usage, and create a strong user acquisition machine.

While Unity is the gaming engine that transforms an idea into a beautiful experience, Kii provides the complete game backend — including tools, insights and a distribution package to help developers aggressively distribute their games.

So, we’re excited to announce a partnership with Unity that makes Kii Cloud available via Unity’s Asset Store. Game developers can get their hands on Kii early in the development process so they can add a solid, scalable gaming backend. That means powerful analytics, user management, flexible data storage and retrieval, geolocation and much more. This also means you will have access to more targeted developer communities.

Using Kii Cloud in your game
If you are a game developer, chances are you’re already using Unity or have at least heard of it — it’s one of the best gaming engines out there. But it might be harder to understand why you need a robust, carrier-grade backend built for games. So we’ll provide examples of how to leverage Kii in your games and also discuss in more detail how we’re taking our Unity commitment to the next level.

Why you need a game backend
As you build a game, you need to determine where you will store data. You can keep everything on the devices themselves, but that makes syncing data across devices and platforms challenging. You could move all data to a server and code all the logic to manage that data, but is that where you want to spend your time and resources? Finally, will it be fast enough to handle that data when your game scales to millions of users?

Building on a backend platform is the best choice for complex games. Our Unity-native SDK lets you easily manage data without sacrificing speed, security or scalability.
A carrier-grade backend also plays a huge role in your frontend user experience, which, as you know, is a determining factor in the success of your game. Response time is a huge part of this, and it depends greatly on your choice of backend.

Middleware that makes your games successful
Making your game competitive—and easier to deploy and manage—depends on the “middleware” pieces you deploy. Components like geolocation, data management, user management, push services and analytics are standard—but not necessarily pieces you should develop yourself. Kii provides these building blocks out of the box.

Track usage behaviors and iterate
You’ll need to understand game metrics like retention, engagement and player behavior to improve game design and monetization. Most popular analytics SDKs only offer “event-based” analytics — meaning when something you want to measure happens in the game, you have to fire an event that gets logged in the backend. This is difficult to maintain, since every time you want to measure a new metric you have to modify your game, redeploy and send new updates to your players. Kii Analytics supports event-based analytics but also leverages user-generated data that’s already being stored in the backend, so you can get deeper insights on the fly without ever touching the deployed game!

In addition to standard analytics, you can create advanced metrics about player-to-player social interactions, user demographics, usage progress, dropoff points and more—so you can optimize for better design and increased game usage.

Increase the distribution of your game
You’ve built an awesome game, but how do you get people to try it? Through Kii to China, you can distribute your Unity-based games to the world’s largest smartphone market. And through Kii’s handset and carrier partnerships in Japan, you can also distribute your Unity-based games to the world’s best-monetized mobile gaming market.

Unity demos, code and tutorials for Kii Cloud 
Since our addition of asynchronous call support to our Unity SDK (a feature that allows you to use our backend without your players ever noticing it) we have been working on a bunch of Unity demos:

  • KiiUnitySDKSamples is a generic demo that systematically shows all Kii Cloud API calls via the Unity SDK.  It’s not attached to a game, but it’s a Unity project that runs without modification, exposing a Unity-based GUI for interaction.
  • UnityAngryBotsKii takes the official Unity 3D 4.3 AngryBots demo game and makes use of Kii Cloud via the Unity SDK. The demo is under development but it already showcases several Kii features.
  • HelloKii-Unity is a skeleton project that shows basic user management and data management (including queries) in the context of a simple breakout game.  It’s included with the Unity SDK package.

Getting started with Kii Cloud for Unity
Want to get started quickly? Check our Unity Quick Start for both Kii Cloud and Kii Analytics SDKs to get up and running in a snap! Alternatively, you can download the Unity Skeleton Project (an empty project with our SDKs already in place) when you create a Unity project on  If you’re looking for more advanced examples, check out our demo section above.

We will keep working closely with the Unity community to bring you the leanest and fastest cloud backend for your games, and we hope you enjoy all the resources now at your disposal that will let you build better games and have more fun doing it. You can focus more resources on the things that matter, like design and playability, when you get rid of backend coding and maintenance.

Don’t hesitate to contact us on the Kii Developer Community to let us know about your Kii-powered games. We’ll be happy to showcase them through our channels.


Monitoring Unity Performance, Part II

This is the second part of my post about how we monitor Unity performance. In Part I, I explained how we monitor for and fix performance regressions. In Part II I’ll go into some details about how we do the monitoring.

Building the rig

We have built a performance test rig that can monitor selected development and release branches and report selected data points to a database.

The performance test rig is continuously running, looking for new builds to test from our build automation system. As soon as new builds are ready, it runs tests on all platforms we have set up for performance testing. The results get reported into a database along with all the related information, like the Unity version used, the platform, and information about the hardware and operating system. We also have a reporting solution that continuously monitors the data in the database and shows us if we have a significant performance regression. It can tell us in which release branch the regression occurs, and which platforms and tests are affected by it.

The figure below shows the components of the performance test framework. For running the tests we have dedicated machines running tests on different platforms. We use a .Net web service to report test results to the database. And we have a reporting solution that presents results in nicely formatted reports.

Performance test framework components

When analyzing the test results we use a fixed set of hardware and software configurations and we look for changes over different versions of Unity. For example, when testing the Windows Standalone platform we use the same hardware and Windows version for all the runs. Only the Unity version varies.

As all of the data is saved in the database, we can make reports from the data for different uses. For each Unity version we are about to release, we make a chart that shows data points vs previously released versions. We can also see the almost-realtime status of performance tests for release branches we are developing in parallel. When analyzing a regression we can even see the individual measurements, not only the median value.
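A hypothetical sketch of the kind of report described above: group raw samples by (test, platform, Unity version), reduce each group to a median, and compare versions. The record layout and the test names below are illustrative assumptions, not our actual database schema:

```python
# Group raw samples per (test, platform, version), reduce each group
# to its median, then compare two versions of one configuration.
from collections import defaultdict
from statistics import median

def build_report(samples):
    """samples: iterable of (test, platform, version, frame_time_ms)."""
    groups = defaultdict(list)
    for test, platform, version, ms in samples:
        groups[(test, platform, version)].append(ms)
    return {key: median(values) for key, values in groups.items()}

data = [
    ("DynamicAABB", "WinStandalone", "4.2",   7.1),
    ("DynamicAABB", "WinStandalone", "4.2",   7.2),
    ("DynamicAABB", "WinStandalone", "4.3.0", 11.8),
    ("DynamicAABB", "WinStandalone", "4.3.0", 11.9),
]
report = build_report(data)
old = report[("DynamicAABB", "WinStandalone", "4.2")]
new = report[("DynamicAABB", "WinStandalone", "4.3.0")]
print(f"{(new - old) / old:+.0%}")  # a clear regression on this platform
```

Keeping the raw samples in the database (rather than only the medians) is what makes it possible to drill down to individual measurements when a regression needs investigating.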

Currently we have three dedicated performance test machines for handling the test running. We have one Mac Mini (OSX 10.8.5) that is used to run tests on the Mac platforms (Editor, Standalone). We have two Windows machines (Windows 7, Windows 8.0) that are used to run tests on the Windows platforms (Editor, Standalone). And we have a Nexus 10 device to run tests on Android. We intend to extend this to more platforms, hardware and software configurations, but more on that later. And before you ask, yes, OSX 10.9 Mavericks is coming. We just didn’t get to it yet.

Extending existing framework

We wrote about one of our test frameworks, the Runtime Test Framework, in an earlier blog post. For performance testing we have extended our existing test frameworks instead of creating a new one. This means we can tap into the existing test suites and use them for performance testing. Further, everyone that can write tests using existing frameworks can write performance tests. This also means it is easy to get new platforms running performance tests as soon as we have the platform running other tests.

The test suites

We have split the tests into two types called Editor Tests and Runtime Tests based on how they get run and what can be measured. Editor Tests are limited to the platforms the editor runs on. Editor Tests can make measurements outside of Unity user scripts. The tests measure things like editor start-up time and asset import times for different types of assets. Runtime Tests, on the other hand, can be run on all the supported runtime platforms. With Runtime Tests it is possible to measure things like frame time of rendered frames or to measure the time it takes to deserialize different types of assets. It is actually possible to measure anything you can do with user scripts in Unity.

The future

First of all, we will add more configurations, both hardware and software. OSX 10.9 is high on the list, and so are the remaining mobile platforms and consoles. We truly believe that the performance rig is a lifesaver when it comes to detecting performance regressions as early as possible. Further, the rig makes it very easy to compare performance across different platforms and versions of Unity. We will keep you posted as we find interesting results – and different usages of the rig.

PlayStation® Vita deployment is here!

Back in March 2013, we announced a partnership with Sony based on support for every single one of SCE’s PlayStation platforms, in addition to Unity for PlayStation®3. Today, we’re thrilled to announce that with Unity 4.3 we’re releasing Unity for PlayStation®Vita to the public.

Developers with a licensed developer agreement from SCE for PS Vita will now be able to deploy their game to PS Vita via the Unity engine and make use of platform-specific functionality including:

  • Motion sensors
  • Front and rear cameras
  • Dual analog sticks
  • Rear Touch pad

Furthermore, Unity for PS Vita will enable you to integrate the full suite of PSN features into your game, including Trophies, Friends and Matching functionality.

As with all the other platforms we support, Unity for PS Vita allows you to develop your game once, without rewriting the code from scratch; simply build and run it on your PS Vita devkit. Not only that, you can now create both 2D and 3D games with Unity 4.3, animate almost anything with the native animation system Mecanim, and implement very cool graphics. And all of it can be run directly on your PS Vita devkit for quick iteration! Why don’t you head over to the release notes and take a look?

Zoink! has just released their new game “Stick It To The Man” developed in Unity for both PS3 and PS Vita. The game has already been very well received. Check out their game at their website to see what it is all about!

We are proud to have reached this platform milestone and we are very eager to see what exciting games you will create for PS Vita!

To become a licensed developer for PS Vita please visit the SCE company registration site (

New Sample Assets Beta


* Added simple arcade style 2-axis aircraft control example.
* Added “Handheld Camera” prefab. Has realistic fuzzy tracking of object, and auto zoom to fit target in view.
* Updated mobile particle effects for better performance.
* Character model (‘Ethan’) re-skinned, fixed bone weights, joint orientations and interactions, and blending errors.
* Fix for first-person controller so it sticks to ground when running up/down slopes.
* Better cursor locking handling.
* Fixed camera in car scene, so loop-the-loop is possible!
* Added menu item in Cross Platform Input so user can choose between mobile/standalone input when testing in editor with a mobile build target selected.
… and many more!


Hello! We’re updating the “Standard Assets” packages that come bundled with Unity.

Before we finalise and integrate it with Unity itself, we’re releasing a beta version for you to try out. We’d like you to have a play and give us feedback before we finally integrate it into Unity and replace the old Standard Assets.


There are a number of changes to look out for:

Firstly, the name is now “Sample Assets” rather than “Standard Assets”, which better reflects their purpose: to provide a collection of examples that can be used out-of-the-box, picked apart or expanded upon.

We’re also going to improve the delivery method. The Sample Assets will still be bundled with the Unity installer, but the packages themselves will be linked to the asset store so that we can update them independently of new Unity editor releases.

And of course, there are changes to what’s in the packages themselves. We’ve addressed some missing examples by adding a mecanim-based third-person character, some shuriken-based particle systems, and some simple AI systems, as well as improving and adding new samples such as a better first-person controller, cross platform input examples and even a few vehicles!

We’re including sample scenes where you can try these out, and our intention is that this collection of playable sample scenes will replace the existing “Angry Bots” project which is included in the current Unity installer. Angry Bots will still be available to download from the asset store, but having these lightweight prototype sample scenes will reduce the download and install size of Unity itself, and hopefully provide more useful material out of the box to help you get your projects started even faster than before.

This project has multiple goals: the new samples will provide an example of good practice in terms of how they are put together. For example, you’ll be able to look at how we used a componentised architecture on our prefabs to separate out core movement, control, audio and peripheral visual effects. We also hope the samples will act as a useful toolbox for rapid prototyping. You can just drop a character, vehicle, camera rig or effect into your scene, and it’s good to go.

So what are the highlights?

A New First Person Character
Unlike the previous first person character, this first person character uses a standard rigidbody capsule and can push physics objects around. It also has configurable head-bob effects and footstep sounds.

A New Third Person Character
The new third person character is Mecanim-based, with root-motion driving its movements giving a more realistic feel. It’s also incredibly easy to replace the character art with your own rigged humanoid model, or swap the user control component with an AI component.

A Sample 2D Character
Swift on the heels of our recent 2D tools release, we have a sample 2D sprite-based character in a platform game scene for you to use.

Camera Rigs
These are a few useful camera set-ups, making it easy to drop in an automatic follow cam, or a free-look camera straight into your scene. All our sample scenes use these rigs.

We’re including an example of a car controller and our sample scene shows one possible configuration: a high-powered futuristic sports car. However, the script itself is also flexible enough to simulate front, rear and four-wheel drive vehicles, SUVs, Karts and Forklift Trucks. Have a play and test the settings to their limit! The car code prioritises fun over realism, but still offers a large degree of freedom in terms of gears, torque, and tendency to burnout or skid.

The aeroplane scripts allow you to set up a forward-flying aircraft with ease. There are optional components if you want to implement moving ailerons, rudders, etc, but you can just as easily throw the basic script on a cube and take to the skies in your cube-o-plane! We provide some slightly more elaborate examples of Jet and Propellor aeroplanes, each set up with different parameters to give the correct feel. Again, the code is designed with a priority of fun over realism, and as such doesn’t include realistic flight equations, but you’ll still find you can glide, stall and barrel-roll to your heart’s content.

Cross Platform Input Example
We’ve also included an example of how you can set up cross-platform input so that you can seamlessly publish to desktop, web and mobile from the same project. Our sample uses a relatively simple script which acts as a ‘middle man’ between the game controls and Unity’s existing input system. When on mobile, simple touch or tilt controls override the axis names, allowing you to easily switch between desktop and mobile with minimal changes to your scripts. We’ve also included a few example “mobile control rigs” which are suitable for many common scenarios such as a tilt-to-steer car control, or a 3rd-person run and turn control.
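The “middle man” pattern described above can be sketched language-agnostically. Here is a minimal Python sketch of the idea (the function names, platform flag, and stand-in values are all hypothetical, not the actual Sample Assets API): game code always asks the middle man for a named axis, and the middle man decides per platform whether to serve the standard input value or a touch/tilt override.

```python
# Illustrative sketch of a cross-platform input "middle man".
# All names and values here are hypothetical stand-ins.

def desktop_axis(name):
    """Stand-in for the standard input system on desktop/web."""
    return {"Horizontal": 0.5, "Vertical": 0.0}.get(name, 0.0)

def tilt_axis(name):
    """Stand-in for a mobile tilt/touch reading mapped onto an axis name."""
    return {"Horizontal": -0.3}.get(name, 0.0)

def get_axis(name, platform="desktop"):
    # On mobile, touch or tilt values override the named axis;
    # otherwise fall through to the standard input system.
    if platform == "mobile":
        return tilt_axis(name)
    return desktop_axis(name)

# Game code calls get_axis() and never branches on platform itself.
print(get_axis("Horizontal", platform="mobile"))  # -0.3
```

Because the game scripts only ever talk to `get_axis`, switching between desktop and mobile needs no changes to the game code itself.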

For the character, cars, and planes, we’ve included a sample of how you can set up AI-controlled versions. These AI systems “pilot” the character or vehicle in the same way that a user can, by sending input values to the controller.

Prototype Environment
To keep the package sizes down and the examples as clear and easy to understand as possible, our sample scenes use simple prototype models for the environments, and these are packaged up for you to use if you want to. They can be a good way of quickly blocking out level designs, to be replaced with your own real artwork later.

AI Character

Of course, since this is an update to the current standard assets, there is a significant amount of good stuff in there that we’re keeping just the same: you’ll still have the same favourite image effects, skyboxes, shaders, water, etc. These and many of the other standard assets haven’t changed, but we’ll be looking to improve and expand on these areas as time goes on, informed by your feedback. For the beta release we’re including only the new stuff, to save on download size and import time.

As mentioned above, this is a ‘beta’ release of the new Sample Assets, so it’s not currently included with the Unity Installer. If you’d like to check it out, you can download the entire pack from the Asset Store here:

Having worked on this rather furiously for the last few months along with the guys from the Learn team, we’d all love to hear your feedback and questions! You can reply here, or if you’d like to take part in an altogether more civilized (and detailed) discussion on the forum about it, head over to this forum thread for that very purpose:



The ‘Kii to Unity’ contest

I’m pretty fond of Kii, with whom Unity has a very amicable relationship!  They’ve got what I think is one of the most promising backend-as-a-service platforms to emerge in the last few years, and they’re pretty actively seeking Unity developers to try out and acquaint themselves with their service.  Here’s the official scoop on their new Kii to Unity contest, which you should definitely check out if you’re in need of BaaS, analytics or distribution! The contest is operated by Kii, but we at Unity will be participating as judges in determining the winning games.

Kii is a mobile backend, distribution, and analytics platform, and a Unity online service partner, with a new Game Cloud service. As part of Game Cloud’s launch, they’re challenging Unity developers working on a new mobile game to add Game Cloud features to it and compete to win $10,000, along with distribution in the world’s largest and most highly monetized mobile markets: China and Japan.

Basic Rules and Submission Process

  • Entrants must submit a 3-5 minute gameplay video showing off core gameplay via a public YouTube link, with the name of the game and the text “for Kii to Unity Game Contest” in the video title. Alternatively, you can submit a link to your game to Kii for review (use your choice of tools, e.g. Dropbox, Google Drive, or a website).
  • In the video’s and/or game’s “About” section, include a short description of the game and how the entry will use Game Cloud features. You must also include this text and link: “Follow contest results on Twitter @KiiCorp”.
  • E-mail the YouTube video and/or game link to Kii at In the e-mail, be sure to include your full name and Twitter account (if any).
  • First round deadline: Submit YouTube gameplay video and/or your game by end of April 21, 2014 (Pacific time).
  • Your game must not have been published on the Apple App Store, Google Play, or Amazon before the commencement of the contest (January 28, 2014).
  • After being notified by Kii, the ten selected Finalists must submit a fully playable build (if they have not already done so) to Kii and Unity for judging by April 28, 2014.
  • Finalist entries must include Kii Game Cloud features linked to at least one gameplay element. (For example, player score and player log-in. See this Kii blog post and this tutorial for advice and resources on using Kii services in your Unity game.)
  • All entrants retain full ownership of their submission, but must allow Kii to feature their games, game artwork and logos on Kii and Unity sites, social networks, and promotional materials.
  • Entries must be “PG-13” (no explicit violence or sexual elements).

Judging Process, Award Info, and Submission Advice

  • The judging panel will comprise game development experts, including developers at Kii and Unity, and the author of Game Design Secrets (Wiley).
  • Judges will consider factors including design, usability, total number of users, and likes on the demo video – so please promote your game and video to your fans and fellow developers!
  • Contest concludes Monday, April 28, 2014. Finalists will be notified by May 28, 2014.
  • In addition to cash award, winner will be promoted by Kii at a future gaming conference, in announcement to the game development media, and via social media channels.

So start now for free with Kii Game Cloud and enter to win $10,000. Here is a tutorial on how to get started.

Simplygon Now Live in Unity


Greetings all! I’m pretty chuffed that Simplygon’s AAA quality mesh reduction and automatic LOD technology is now available to Unity users at an indie-friendly price. In short, it lets artists work at a high polygon count, and then does a brilliant job automatically reducing, remeshing, re-mapping and optimizing the mesh and textures to dramatically increase performance at runtime. It’s absolutely a paradigm shift that 3D artists everywhere are certain to love: focusing on the creative, visual end of production without slaving over the tedium of LODs and optimization. Their technology is top-notch and used by almost every major studio in the industry. I asked my friends at Simplygon to write up a little something describing the basics of the tool, what you need to get started, and how the system works. Please keep reading, and do give Simplygon a serious shot! It’s mind-blowing how much time it saves!

Simplygon is now live in Unity, and you can enjoy all the benefits and features of this automagic tool. Through Simplygon Cloud, users can automatically create LODs and other optimized 3D models directly from within the Unity Editor, with an affordable pay-per-usage model based on credits, or even for free. The Simplygon plug-in is available for download on the Unity Asset Store.

In Unity, Simplygon works as an editor plug-in in your Unity project. You can access and set all the parameters you want for your optimized model, such as the number of LOD steps, the reduction ratio, and which Simplygon components to use. Then, when you click the Simplygon logo, your asset is sent to the Simplygon Cloud servers, where it is optimized. Each asset and optimization counts as one “job” on the servers. You can then preview the result of the job and, as soon as you are happy, download and use your new LOD model.

Get started easily in 3 steps (video)

Use the Free Server, or pay with Simplygon Credits to skip the queue. Find what works for you: perhaps mostly free jobs with the occasional spend of credits, or, if you are building games professionally in Unity, a big pack of credits with free jobs disabled, for access to advanced features and greater efficiency. Experience the power of Simplygon in Unity!


Global Game Jam 2014

It was another record-breaking year for the Global Game Jam this year, with a mind-blowing total of 2292 games created using Unity. This is pretty impressive considering that last year there were 1131 Unity games created. When you factor in that the total number of games produced in 2014 was 4291, more than 50% of all Global Game Jam 2014 games were built using Unity!

All very cool numbers, but now we are left with a problem: 2292 games is a huge amount to even think about playing. It’s just about doable if you play 6.3 games every day for the next year.
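For the curious, the numbers quoted above hold up to a quick sanity check:

```python
# Sanity-check the Global Game Jam 2014 figures quoted above.
unity_games = 2292   # Unity games at GGJ 2014
total_games = 4291   # all GGJ 2014 games
last_year = 1131     # Unity games at GGJ 2013

print(f"Unity share:   {unity_games / total_games:.1%}")   # Unity share:   53.4%
print(f"Year-on-year:  {unity_games / last_year:.2f}x")    # Year-on-year:  2.03x
print(f"Games per day: {unity_games / 365:.1f}")           # Games per day: 6.3
```

So the “more than 50%” and “6.3 games every day” claims both check out.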

I’ve decided to pick a few that got my attention to showcase here, so please feel free to post a link to your own Unity GGJ project in the comments below.

Ego Monsters

A nice little concept with a cool art style where you have to use the negative space of your character to quickly determine which of the monsters you are.



Other Goal

Other Goal is a local multiplayer game with a really cool art and animation style. Players have differing goals which lead to interesting tactics.



We Of The Forest

A really nice art style; I’d love to see the direction this game would take with further development.




This one looks incredibly cool. It’s an augmented reality board game that uses Unity for the AR, along with an Arduino and a tin-foil sensing matrix for the board. I’m sure the Unity staff who attend the bi-weekly board-game nights at Unity’s Copenhagen HQ would love to get their hands on this.


So, there we are, 4 of the 2292 Unity games produced this year.  Only 2288 to go!

I’m looking forward to finding out which of the prototypes end up evolving into something more.





Overcoming issues with iOS App Store submissions

