Unity Awards 2014 Open Submissions Begin

It’s that time of year again where we open up submissions for the Unity Awards! Submissions will be open from now until June 30, 2014.

If you’ve created something awesome with Unity in the past year, whether it’s a game or some other interactive experience, we’d love to hear about it. All you have to do is head to the submission portal and click the link at the bottom that will start the process.

Submit your project for the Unity Awards now!

For those unfamiliar, the Unity Awards are held each year during the Unite conference to recognize the most impressive creations made using Unity. This year, the conference is taking place on August 20-22 in Seattle and the Awards ceremony itself will take place on August 21 at McCaw Hall. Read more about the conference and grab tickets at our Unite site.

This year, we’re changing the voting process slightly. While the nomination committee here at Unity will still look through the hundreds of projects submitted and narrow them down to six finalists in each category, we’re going to open up voting to the community for all categories. Community votes will account for 50% of the total vote, with Unity employees accounting for the other 50%. This applies to all categories except Community Choice, where the community accounts for 100% of the vote. General voting will begin in July 2014.

The categories this year include:

Best 3D Visual Experience – Submissions for this category will be judged based on artistic merit including thematic and stylistic cohesion, creativity, and/or technical skill.

Best 2D Visual Experience – Submissions for this category will be judged based on artistic merit including thematic and stylistic cohesion, creativity, and/or technical skill.

Best Gameplay – Intuitive control, innovation, creativity, complexity, and fun are what make games enjoyable and entertaining–we’re looking for games that excel in one or all of these areas.

Best VizSim Project – Unity projects come in all shapes and sizes; this year we’re looking for projects that have some real world grounded applications for visualization, simulation, and training.

Best Non-game Project – Unity-authored products that fall outside of games or VizSim, including projects such as art, advertising, interactive books and comics, digital toys, interactive physical installations, and informational programs, will want to submit for this award.

Best Student Project – This award is for projects (games or otherwise) currently being completed by students as part of the curriculum of an educational institution. Projects will be judged based on creativity, technical merit, and overall artistic cohesion among graphics, sound, and presentation.

Technical Achievement – Any project that provides an excellent example of technical excellence in Unity including but not limited to graphics, scripting, UI, and/or sound.

Community Choice – This category will be voted on by the community of game developers and represents the favorites of the community across the board.

Golden Cube (best overall) – This award is for the best overall project made with Unity in the last year. Everything from technical achievement and visual styling to sound production and level of fun will be taken into account to choose an overall winner.

Rules:

Of course, there are some rules for submission that you’ll need to know, so here they are:

  • Only Unity-authored projects are eligible for nomination.
  • Projects must have been released from July 1, 2013 to June 30, 2014 to be eligible with the exception of student project submissions which must have been part of the coursework in the 2013-2014 school year.
  • Any projects nominated for previous years of the Unity Awards are ineligible for the 2014 Unity Awards with the exception of projects that were previously student work and have since turned into finished commercial projects.
  • Games currently in early access programs that are not considered “final” products by June 30, 2014 will not be accepted to the 2014 Unity Awards.
  • Individuals or teams are welcome to enter multiple projects so long as they adhere to all other rules.

So submit those projects, tell your friends who released games this last year to submit theirs, and keep your eyes out in July for another announcement that community voting has begun. We’re really looking forward to seeing all of your submissions!

Announcing UNET – New Unity Multiplayer Technology

A few weeks ago, at our Unite Asia conferences, we announced that we are developing new multiplayer tools, technologies and services for Unity developers. The internal project name for this is UNET which simply stands for Unity Networking. But our vision goes well beyond simple networking. As you all know, the Unity vision is to Democratize Game Development. The Unity Networking team wants to specifically Democratize Multiplayer Game Development. We want all game developers to be able to build multiplayer games for any type of game with any number of players.

Before joining Unity, members of the networking team worked mainly on MMOs such as Ultima Online, Lord of the Rings Online, Dungeons and Dragons Online, Marvel Heroes, Need for Speed Online and World of Warcraft. We have a lot of passion for and a ton of experience with making multiplayer games, technology and infrastructure. The Unity vision was known to each of us and was always very appealing. When the chance to do something truly great like specializing the Unity vision with multiplayer came up, it was impossible to decline.  So we all left our former jobs and joined Unity to make this vision happen. Right now, we’re working hard to deliver these tools, technology and services so anyone can make their own dreams of a multiplayer game a reality.

This is of course a pretty big undertaking, but, like I said, we have all done this before, and we are all very driven to do it again (because it’s really, really cool!). The way we have tackled this is to divide our overall goal into phases which should be familiar to Unity developers. We take the approach of releasing a Phase 1, getting feedback from our users, adding that feedback to our work to make the next phase even better and repeating that cycle.

For UNET, Phase 1 is what we call the Multiplayer Foundation – more on that in a bit. Phase 2 is where we build on Phase 1 to introduce server-authoritative gaming with what we call the Simulation Server; we’ll blog about this later. Finally, Phase 3 is where we want to introduce the ability to coordinate multiple Simulation Servers through a Master Simulation Server. As usual, exact dates are not possible and of course things can change, especially after gathering feedback from our users. But we can say that Phase 1 will be part of the 5.x release cycle and Phase 2 is in R&D right now.

So what do we mean by the Multiplayer Foundation for Phase 1? The main features are as follows:

  • High performance transport layer based on UDP to support all game types

  • Low Level API (LLAPI) provides complete control through a socket-like interface

  • High Level API (HLAPI) provides a simple and secure client/server network model

  • Matchmaker Service provides basic functionality for creating rooms and helping players find others to play with

  • Relay Server solves connectivity problems for players trying to connect to each other behind firewalls

We had some inherent limitations with our legacy system that we needed to address, and with our greater goal in mind it became clear that we needed to start from scratch. Since our goal is to support all game types and any number of connections, we started with a new high performance transport layer based on UDP. While it’s true that a lot of games do quite well with TCP, fast action games will need to use UDP, as TCP holds back the most recently received packets whenever earlier ones arrive out of order (head-of-line blocking).

From this new transport layer we built two new APIs. We have a new High Level API (HLAPI) which introduces a simple and secure client/server networking model. If you’re not a network engineer and you want to easily make a multiplayer game, the HLAPI will interest you.

We also wanted to address feedback we’d received on our old system: some users needed to have a lower level access for greater control. So we also have the Low Level API (LLAPI) which provides a more socket-like interface to the transport layer. If you are a network engineer and want to define a custom network model or just fine tune your network performance, then the LLAPI will interest you.

The Matchmaker service is used to configure rooms for your multiplayer game and get your players to find each other. And finally the Relay Server makes sure your players can always connect to each other.

We know from our prior experiences that making multiplayer games involves a lot of pain.  So the Multiplayer Foundation is a new set of easy to use professional networking technology, tools and infrastructure for making multiplayer games without this pain. To even get started, I think it is fair to say that making a multiplayer game requires a fair bit of knowledge of networking and protocols. You either overcome the painfully steep learning curve yourself or find a network engineer to join you.  Once you’ve gotten past that, you then have to solve the problem of getting your players to find each other.  And once you’ve solved that problem, you now have to deal with getting players to be able to actually connect with each other, which can be troublesome when they are behind firewalls with NAT.  But then if you’ve solved all of that you’ve created a bunch of associated infrastructure which wasn’t game development and probably wasn’t fun. And now you have to worry about dynamically scaling your infrastructure which usually takes a bit of prior experience to get right.

Our Phase 1 addresses each of these pain points. The HLAPI eliminates the need for a deep knowledge of networking. But the LLAPI is there if you are a network engineer and you want to do things your own way. The Matchmaker solves your problem of getting your players to find each other. The Relay Server solves your problem of getting players to be able to connect to each other. And we also solved your problem of the associated infrastructure and dynamically scaling it. The Matchmaker and Relay Server live in Unity’s Multiplayer Cloud. So not only do the physical servers scale up and down based on demand, but the processes scale up and down as well.

We are very excited about UNET and are eager to share more details. Over the next few weeks we’ll follow up with more blogs from the rest of the team.  We would love to hear what you think, and we can’t wait to see what you all make with this in the future.


The Novelist and the Asset Store: The Visual Scripting Story

Kent Hudson made a game that is part The Shining, part Gone Home and part something new entirely. In The Novelist, you are a ghost helping a writer who’s struggling with work-life balance. The developer told me that the uScript plugin was his own friendly ghost in the machine.

“I know this sounds like a shameless plug, but it’s true: Unity and the Asset Store are the reason I’m able to make games independently,” says Kent, who previously worked on games like Deus Ex: Invisible War and BioShock 2 before going indie. He has more than a decade of game development experience, but says that without the uScript visual scripting tool, creating The Novelist would have been out of reach for him.

“I come from a systems design background, so I think very technically, but I’ve never stuck with programming courses long enough to actually become a proficient engineer. I’m used to architecting reusable systems and game objects, though, so uScript was the perfect tool for me,” explains Kent Hudson.

He used it for player movement, the memory system, controlling the UI, the human AI behaviors, the narrative structure of the game, and every other bit of on-screen functionality in the game. “Not a single line of code was written for my game; the entire thing was built in uScript”.

Here’s the uScript editor window, opened up to the logic that computes character relationships when the player makes decisions. Click on the thumbnail to see the full screenshot:


Another big advantage of using uScript is its powerful reflection system, which means that it can interface with other Unity plugins. There’s no extra support required to get it working with code from other programmers on the project or other Asset Store plugins and extensions.

Kent Hudson also used NGUI for the UI and its partner plug-in, HUD Text,  to create the thoughts that float above the characters’ heads. “Instead of crafting a UI system from the ground up, I was able to focus on writing the text that would be displayed by the UI.”

The Highlighting System by Deep Dream and Glow Per-Object plug-ins are responsible for object highlighting in the game. All of these are connected with uScript.

So what is Kent Hudson up to next? “Now that I’m so familiar with Unity, I feel like there aren’t many limits on what I can do for my next game. I can start up a new Unity project, import my key plug-ins, and start building things right away. The number of possibilities the Asset Store has opened up has been amazing, and I feel like it’s only going to get better from here.”

Here’s a shot of all of the possible outcomes that can result from the player’s decisions in The Novelist. Click on the thumbnail to see the full screenshot:



Dependency injection and abstractions

Testability is an important feature of any software product, and game development is no exception. To enable testability, all the components should be independent and testable in isolation.

When we want to test something in isolation, it means that we want to decouple it. Loose coupling is what we need. It is so easy to embed hidden dependencies into your game, and so hard to break them. This article will help you understand loose coupling and dependency injection within a Unity project, using the example project on GitHub.

Let’s take handling input as an example.


public class SpaceshipMotor : MonoBehaviour
{
  void MoveHorizontally ()
  {
    var horizontal = Input.GetAxis ("Horizontal");
    // ...
  }
}

The MoveHorizontally method uses the static Unity API (the Input class) without telling you. It considers this call to be its own private business, and you can’t control or influence the situation. This makes the SpaceshipMotor class tightly coupled to Unity’s static API, and you can’t verify the behaviour of the SpaceshipMotor class unless you physically press a key on the keyboard. It’s annoying.

Now let’s take this situation under control. You are in charge here.

The SpaceshipMotor class uses only the horizontal axis, so we can define a short description of what kind of functionality it expects from user input.


public interface IUserInputProxy
{
  float GetAxis(string axisName);
}

Then you can substitute the call to the real Input with a call to our abstraction.


public class SpaceshipMotor : MonoBehaviour
{
  public IUserInputProxy UserInputProxy { get; set; }

  void MoveHorizontally ()
  {
    var horizontal = UserInputProxy.GetAxis ("Horizontal");
    // ...
  }
}

Now you are in charge of the situation! The class can’t operate unless you provide it with an IUserInputProxy implementation.
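In the actual game you would then provide an implementation that simply forwards to Unity’s static Input class. Here is a minimal sketch (the class name is hypothetical; the example project may name it differently):

using UnityEngine;

// Production implementation: forwards axis queries to Unity's static Input API.
// Tests substitute a fake IUserInputProxy instead of this class.
public class UnityUserInputProxy : IUserInputProxy
{
  public float GetAxis (string axisName)
  {
    return Input.GetAxis (axisName);
  }
}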

This is called Dependency Injection (DI): the dependency (Input in our case) is passed to the dependent object (the SpaceshipMotor class) and becomes part of its state (a field in our case).

There are several options for passing a dependency: constructor injection, property injection, and method injection.

Constructor injection is considered the most popular and most robust approach, because when the dependency is passed during construction the chance of ending up with an object in an uninitialized state is minimal.

public class SpaceshipMotor : MonoBehaviour
{
  private readonly IUserInputProxy userInputProxy;

  public SpaceshipMotor (IUserInputProxy userInputProxy)
  {
    this.userInputProxy = userInputProxy;
  }
}

But the Unity engine calls the constructors of MonoBehaviours itself, and we can’t control this process.
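To illustrate: a MonoBehaviour is created by the engine or attached via AddComponent, so there is nowhere to pass constructor arguments (a minimal sketch; the spawner class is hypothetical):

using UnityEngine;

public class SpaceshipSpawner : MonoBehaviour
{
  void Start ()
  {
    // Components are attached like this – the engine calls the constructor for us.
    gameObject.AddComponent<SpaceshipMotor> ();

    // new SpaceshipMotor (userInputProxy);  // not how MonoBehaviours are created;
    //                                       // Unity would warn at runtime
  }
}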

Still, property and method injection are both usable in this case.

The easiest approach for manual Dependency Injection (DI) is to use a script that injects the dependencies.

In “Growing Games Guided by Tests” we use an interface to expose the property dependency.


public interface IRequireUserInput
{
  IUserInputProxy InputProxy { get; set;}
}

And a script that allows us to set the parameters of fake input in the scene and inject it when the tests start.


using System.Linq;
using UnityEngine;

public class ArrangeFakeUserInput : MonoBehaviour
{
  public GameObject Spaceship;
  public FakeUserInput FakeInput;

  void Start () {
    // Find every component on the spaceship that asks for user input
    // and hand it the fake input configured in the scene.
    var components = Spaceship.GetComponents<MonoBehaviour> ();
    var dependents = components.Where (c => c is IRequireUserInput)
              .Cast<IRequireUserInput> ();
    foreach (var dependent in dependents)
      dependent.InputProxy = FakeInput;
  }
}

How does this contribute to testability?

 

We have lots of examples in “Growing Games Guided by Tests” where fake user input is injected with helper script and it lets us test the behaviour.

On the other hand we can write unit tests for classes that depend on abstractions.


[Test]
public void ChangesStateToIsFiringOnFire1ButtonPressed()
{
  // Arrange
  // Setting up a test double for user input (NSubstitute)
  IUserInputProxy userInput = Substitute.For<IUserInputProxy> ();
  // Telling the GetButton method of the test double to return true
  // when the state of "Fire1" is requested
  userInput.GetButton(Arg.Is("Fire1")).Returns (true);
  // Passing the dependency to the Gun object on creation
  Gun gun = new Gun(userInput);
  // Act
  gun.ProcessInput ();
  // Assert
  Assert.That(gun.IsFiring, Is.True);
}
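For reference, a minimal Gun class that would satisfy this test might look roughly like the sketch below. It is inferred from the test above (and assumes IUserInputProxy also exposes a GetButton method in the example project); the real class on GitHub may differ.

public class Gun
{
  private readonly IUserInputProxy userInputProxy;

  public bool IsFiring { get; private set; }

  public Gun (IUserInputProxy userInputProxy)
  {
    this.userInputProxy = userInputProxy;
  }

  public void ProcessInput ()
  {
    // The injected proxy makes this testable without touching the keyboard.
    IsFiring = userInputProxy.GetButton ("Fire1");
  }
}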

Now you see that there is no magic to dependency injection. It is the process of substituting abstractions for concrete dependencies and making them external to the dependent object.

To use DI on a large scale you need a tool to automate it. This will be the topic of our next blog post.

 

Shader Compilation in Unity 4.5

A story in several parts: 1) how shader compilation is done in the upcoming Unity 4.5; and 2) how it was developed. The first part is probably interesting to Unity users; the second to those curious about how we work and develop stuff.

Short summary: Unity 4.5 will have a “wow, many shaders, much fast” shader importing and better error reporting.

Current state (Unity <= 4.3)

When you create a new shader file (.shader) in Unity or edit an existing one, we launch a “shader importer” – just like for any other changed asset. That shader importer does some parsing, and then compiles the whole shader into all the platform backends we support.

Typically when you create a simple surface shader, it internally expands into 50 or so internal shader variants (the classic “preprocessor driven uber-shader” approach). And typically there are 7 or so platform backends to compile into (d3d9, d3d11, opengl, gles, gles3, d3d11_9x, flash – more if you have console licenses). This means that each time you change anything in the shader, a couple hundred shaders are being compiled. And all that assuming you have a fairly simple shader – if you throw in some multi_compile directives, you’ll be looking at thousands or tens of thousands of shaders being compiled. Each. And. Every. Time.
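To put rough numbers on it: 50 variants × 7 backends is already about 350 compilations per edit, and since each two-keyword multi_compile directive doubles the variant count, just three of them (50 × 2 × 2 × 2 × 7 = 2800) push you well into the thousands.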

Does it make sense to do that? Not really.

Like most of “why are we doing this?” situations, this one also evolved organically, and can be explained with “it sounded like a good idea at the time” and “it does not fix itself unless someone works on it”.

A long time ago, Unity only had one or two shader platform backends (opengl and d3d9). And the amount of shader variants people were doing was much lower. With time, we got both more backends, and more variants; and it became very apparent that someone needs to solve this problem.

In addition to the above, there were other problems with shader compilation, for example:

  • Errors in shaders were reported, well, “in a funny way”. Sometimes the line numbers did not make any sense – which is quite confusing.
  • Debugging generated surface shader code involved quite some voodoo tricks (#pragma debug etc.).
  • Shader importer tried to multi-thread compilation of these hundreds of shaders, but some backend compilers (Cg) have internal global mutexes and do not parallelize well.
  • Shader importer process was running out of memory for really large multi_compile variant counts.

So we’re changing how shader importing works in Unity 4.5. The rest of this post will be mostly dumps of our internal wiki pages.

Shader importing in Unity 4.5

  • No runtime/platforms changes compared to 4.3/4.5 – all changes are editor only.
  • No shader functionality changes compared to 4.3/4.5.
  • Shader importing is much faster; especially complex surface shaders (Marmoset Skyshop etc.).
    • Reimporting all shaders in graphics tests project: 3 minutes with 4.3, 15 seconds with this.
  • Errors in shaders are reported on correct lines; errors in shader include (.cginc) files are reported with the correct filename and line number.
    • Was mostly “completely broken” before, especially when include files came into play.
    • On d3d11 backend we were reporting error column as the line, hah. At some point during d3dcompiler DLL upgrade it changed error printing syntax and we were parsing it wrong. Now added unit tests so hopefully it will never break again.
  • Surface shader debugging workflow is much better.
    • No more “add #pragma debug, open compiled shader, remove tons of assembly” nonsense. Just one button in inspector, “Show generated code”.
    • Generated surface shader code has some comments and better indentation. It is actually readable code now!
  • Shader inspector improvements:
    • Errors list has scrollview when it’s long; can double click on errors to open correct file/line; can copy error text via context click menu; each error clearly indicates which platform it happened for.
    • Investigating compiled shader is saner. One button to show compiled results for currently active platform; another button to show for all platforms.
  • Misc bugfixes
    • Fixed multi_compile preprocessor directives in surface shaders sometimes producing very unexpected results.
    • UTF8 BOM markers in .shader or .cginc files don’t produce errors.
    • Shader include files can be at non-ASCII folders and filenames.

Overview of how it works

  • Instead of compiling all shader variants for all possible platforms at import time:
    • Only do minimal processing of the shader (surface shader generation etc.).
    • Actually compile the shader variants only when needed.
    • Instead of the typical work of compiling 100-1000 internal shaders at import time, this usually ends up compiling just a handful.
  • At player build time, compile all the shader variants for that target platform
    • Cache identical shaders under Library/ShaderCache.
    • So at player build time, only not-yet-ever-compiled shaders are compiled; and always only for the platforms that need them. If you never use Flash, for example, then none of the shaders will be compiled for Flash (as opposed to 4.3, where all shaders are compiled for all platforms, even if you never need them).
  • Shader compiler (CgBatch) changes from being invoked for each shader import to being run as a “service process”
    • Inter-process communication between the compiler process and Unity, using the same infrastructure as for VersionControl plugin integration.
    • At player build time, go wide and use all CPU cores to do shader compilation. The old compiler tried to multithread internally, but couldn’t due to some platforms not being thread-safe. Now we just launch one compiler process per core and they can go fully parallel.
    • Helps with out-of-memory crashes as well, since the shader compiler process never needs to hold a bazillion shader variants in memory all at once – it sees one variant at a time.

How it was developed

This was mostly a one-or-two person effort, and developed in several “sprints”. For this one we used our internal wiki for detailed task planning (Confluence “task lists”), but we could just as well have used Trello or something similar. Overall this was probably around two months of actual work – but spread out over a much longer time. The initial sprint started in March 2013, and the work landed in a “we think we can ship this tomorrow” state in the 4.5 codebase just in time for the 1st alpha build (October 2013). Minor tweaks and fixes were done during the 4.5 alpha and beta period. Should ship any day now, fingers crossed!

Surprisingly (or perhaps not), the largest piece of work was around the “how do you report errors in shaders?” area. Since shader variants are now imported only on demand, some errors can be discovered only “some time after the initial import”. This is a by-design change, however – the previous approach of “let’s compile all possible variants for all possible platforms” clearly does not scale in terms of iteration time. Still, “the shader seemed like it did not have any errors, but whoops now it has” is clearly a potential downside. Oh well; as with almost everything, there are upsides and downsides.

Most of development was done on a Unity 4.3-based branch, and after something was working we were sending off custom “4.3 + new shader importer” builds to the beta testing group. We were doing this before any 4.5 alpha even started to get early feedback. Perhaps the nicest feedback I ever got:

I’ve now used the build for about a week and I’m completely blown away with how it has changed how I work with shaders.

I can try out things way quicker.
I am no longer scared of making a typo in an include file.
These two combine into making me play around a LOT more when working.
Because of this I found out how to do fake HDR with filmic tonemapping [on my mobile target].

The thought of going back to regular beta without this [shader compiler] really scares me.

Anyhoo, here’s a dump of tasks from our wiki (all of them had little checkboxes that we’d tick off when done). As usual, “it basically works and is awesome!” was achieved after the first week of work (1st sprint). What was left after that was “fix all the TODOs, do all the boring remaining work” etc.

2013 March Sprint:

  • Make CgBatch a DLL
    • Run unit tests
    • Import shaders from DLL
    • Don’t use temp files all over the place
  • Shader importer changes
    • Change surface shader part to only generate source code and not do any compilation
    • Make a “Open surface compiler output” button
    • At import time, do surface shader generation and cache the result (serialize in Shader, editor only)
    • Also process all CGINCLUDE blocks and actually do #includes at import time, and cache the result (after this, left with CGPROGRAM blocks, with no #include statements)
    • ShaderLab::Pass needs to know it will have yet-uncompiled programs inside, and able to find appropriate CGPROGRAM block:
      • Add syntax to shaderlab, something like Pass { GpuProgramID <int> }
      • Make CgBatch not do any compilation, just extract CGPROGRAM blocks, assign IDs to them, and replace them with “GpuProgramID xxx”
      • “cache the result” as editor-only data in shader: map of snippet ID – CGPROGRAM block text
    • CgBatch, add function to compile one shader variant (cg program block source + platform + keywords in, bytecode + errors out)
    • Remove all #include handling from actual shader compilers in CgBatch
    • Change output of single shader compilation to not be in shaderlab program/subprogram/bindings syntax, but to produce data directly. Shader code as a string, some virtual interface that would report all uniforms/textures/… for the reflection data.
  • Compile shaders on demand
    • Data file format for gpu programs and their params
    • ShaderLab Pass has map: m_GpuProgramLookup (keywords – GPUProgram).
    • GetMatchingSubProgram:
      • return one from m_GpuProgramLookup if found. Get from cache if found
      • Compile program snippet if not found
      • Write into cache

2013 July Sprint:

  • Pull and merge last 3 months of trunk
  • Player build pipeline
    • When building player/bundle, compile all shader snippets and include them
    • exclude_renderers/include_renderers, trickle down to shader snippet data
    • Do that properly when building for a “no target” (everything in) platforms
      • Snippets are saved in built-in resource files (needed? not?)
    • Make building built-in resource files work
      • DX11 9.x shaders aren’t included
      • Make building editor resource file work
    • Multithread the “missing combinations” compilation while building the player.
      • Ensure thread safety in snippet cache
  • Report errors sensibly
  • Misc
    • Each shader snippet needs to know which keyword permutations might be needed: CgBatch extracts that, serialized in the snippet (like a vector of keyword vectors)
    • Fix GLSLPROGRAM snippets
    • Separate “compiler version” from “cgbatch version”; embed compiler version into snippet data hash
    • Fix UsePass

2013 August Sprint:

  • Move to a 4.3-based branch
  • Gfx test failures
    • Metro, failing shadow related tests
    • Flash, failing custom lightmap function test
  • Error reporting: Figure out how to deal with late-discovered errors. If there’s bad syntax, a typo etc., the shader is effectively “broken”. If a backend shader compiler reports an error:
    • Return pink “error shader” for all programs – i.e. if any of vertex/pixel/… had an error, we need to use the pink shaders for all of them.
    • Log the error to console.
    • Add error to the shader, so it’s displayed in the editor. Can’t serialize the shader at that time, so add shaders to some database under Library (guid → errors).
      • SQLite database with shader GUID → set of errors.
    • Add shader to list of “shaders with errors”; after rendering loop is done go over them and make them use pink error shader. (Effectively this does not change current (4.2) behavior: if you have a syntax error, shader is pink).
  • Misc
    • Fix shader Fallback when it pulls in shader snippets
    • “Mesh components required by shader” part at build time – need to figure them out! Problem: we need to compile the variants to even know it.
    • Better #include processing, now includes same files multiple times
  • Make CgBatch again into an executable (for future 64 bit mac…)
    • Adapt ExternalProcess for all communication
    • Make unit tests work again
    • Remove all JobScheduler/Mutex stuff from CgBatch; spawn multiple processes instead
    • Feels like is leaking memory, have to check
  • Shader Inspector
    • Only show “open surface shader” button for surface shaders
    • “open compiled shader” is useless now, doesn’t display shader asm. Need to redo it somehow.

2013 September Sprint:

  • Make ready for 4.5 trunk
    • Merge with current trunk
    • Make TeamCity green
    • Land to trunk!
  • Make 4.3-based TeamCity green
    • Build Builtin Resources fails with shader compiler RPC errors; GL-only gfx test failures (CgProps test)
    • GLSLPROGRAM preprocessing broken, add tests
    • Mobile gfx test failures in ToonyColors
  • Error reporting and #include handling
    • Fixing line number reporting once and for all, with tests.
    • Report errors on correct .cginc files and correct lines on them
    • Solve multiple includes preprocessor affecting includes this way: at snippet extraction time, do not do include processing! Just hash include contents and feed that into the snippet hash.
    • UTF8 BOM in included files confusing some compilers
    • Unicode paths to files confusing some compilers
    • After shader import, immediately compile at least one variant, so that any stupid errors are caught and displayed immediately.
  • Misc
    • Make flags like “does this shader support shadows?” work with new gpu programs coming in
    • Check up case 550197
    • multi_compile vs. surface shaders, fix that
  • Shader Inspector
    • Better display of errors (lines and locations)
    • Button to “exhaustively check shader” – compiles all variants / platforms.
    • Shader snippet / total size stats

What’s next?

Some more work in shader compilation land will go into Unity 5.0 and 5.x. Here’s an outline of another of our wiki pages describing the 5.x-related work:

  • 4.5 fixes “compiling shaders is slow” problem.
  • Need to fix “New standard shader produces very large shader files” (due to lots of variants – 5000 variants, 100MB) problem.
  • Need to fix “how to do shader LOD with new standard shader” problem.

Showing off the Shy Shaders

You can never have too many shaders. Whether you’re pursuing the elusive goal of hyper realistic 3D graphics or making a cute cartoon game for kids, shaders are definitely on your radar. And the Asset Store is here to help. 

While Unity 5 will make shader programming a breeze, there are still a lot of specialist assets that will come in handy in specific situations. We’d like to show you a few shaders that are currently on the shelves of the Asset Store, have great ratings and stellar support, but are a bit hidden behind the row of top sellers.

Candela SSRR: Advanced Screen Space Glossy Reflections by Livenda

This asset makes beautifully realistic reflections. In other words, it’s a highly optimized advanced screen space ray-traced glossy reflection post effect solution. And very easy to deal with, giving you the final control over the shiny surfaces in your desktop game. Pixel accurate. Pretty awesome. 

Depth of Field Mobile Shader by Barking Mouse Studio

If you’re making 3D mobile games with Unity Pro, you should definitely check this out. Just like a camera, it has an adjustable aperture, so you can intuitively control the depth of field while minimizing memory usage.


Planets by NexGen Assets

This great asset has a diffuse and a specular texture; it can also control the opacity of the clouds and night-lights on the planet, as well as its rotation and halo. It includes shaders for gas giants, stars and galaxies. Indispensable for space adventures!


Mobile HDR by Science Laboratory

Adapting the brightness of your scene swiftly can be draining on both the performance of your game and your development time. This HDR Bloom and Adaptive Brightness Correction tool has a custom inspector and includes full C# and Cg source code access. Save yourself the pain and get it!


Lens Dirtiness by David Miranda

Making a fast paced game and want to give players the feeling that the camera is right there in the dirt? Check out this camera post-processing effect for Unity Pro! Lens Dirtiness also includes lens flares and works on desktop and mobile. It costs less than a pizza!


On the future of Web publishing in Unity

A few weeks ago at GDC, we announced support for WebGL publishing for Unity 5. Now I’d like to share some more information on what this is all about, and what you can expect from it.

Some background

WebGL is a 3D graphics API built into the browser which allows JavaScript programs to do 3D rendering inside any supported browser without requiring any plug-ins. To us, this always seemed like a perfect fit for running Unity content on the web, as it would give end users the most barrier-free experience: the browser supplies everything needed out of the box, and everything just works without the user having to install any plug-ins.


However, we initially had some doubts about whether this would be technically achievable, as WebGL is a JavaScript API – which means that all our code (both the Unity runtime and your game code) needs to run in JavaScript somehow. But at the same time, we thought that this technology was too cool not to try anyway, so we started experimenting with it at a HackWeek in Copenhagen two years ago. We had also been talking to Mozilla around that time, who were very eager to help us and to prove to us that this could indeed be done – so they had some engineers come over to Copenhagen to join the fun.

It took us a few more HackWeeks of tinkering around and some developments on the browser side as well, until we reached a point where we realized that we could make a real viable product out of this – which is when we started going into real production.

To give you an idea of what is possible right now, here is a Unity player exported to WebGL with a current alpha version of Unity 5.

The currently supported browsers for this content are Firefox and Chrome 35 (Chrome 35 is currently in beta and is needed, as the current Chrome 34 release version has a JavaScript bug which causes this game to hang).

Click the icon below to play Dead Trigger 2 by Madfinger games in your browser, demonstrating an immersive fullscreen FPS experience in WebGL. Controls are WASD to walk, mouse to look, Q to switch weapons, Tab to switch to Melee combat, and 1, 2, and 3 for special powers (try them!).


And here is a build of our classic AngryBots demo (which runs fine on Firefox and the release version of Chrome):


Technical details

As mentioned above, to run in WebGL, all our code needs to be JavaScript. We use the emscripten compiler toolchain to cross-compile the Unity runtime code (written in C and C++) into asm.js JavaScript. asm.js is a very optimizable subset of JavaScript which allows JavaScript engines to AOT-compile asm.js code into very performant native code (see here for a better explanation).


To convert the .NET game code (your C# and UnityScript scripts) into JavaScript, we developed a new technology in-house which we call IL2CPP. IL2CPP takes .NET bytecode and converts it to corresponding C++ source files, which we can then compile using any C++ compiler – such as emscripten – to get your scripts converted to JavaScript. Expect more information on IL2CPP soon.

 

WebGL in Unity 5.0

We plan to make WebGL support available in Unity 5.0 as an early-access add-on (before you ask: the terms and prices of this add-on have not been decided yet). Early access means that it will be capable of publishing content to WebGL (like the examples above), but it will have some limitations in features and browser compatibility. In particular, the following features will not be supported:

  • Runtime generation of Substance textures
  • MovieTextures
  • Networking other than the WWW class (a WebSockets plug-in is available)
  • Support for WebCam and Microphone access
  • Hardware cursor support
  • Most of the non-basic audio features
  • Script debugging
  • Threads
  • Any .NET features requiring dynamic code generation

In terms of browser support, this initial version will only support the desktop versions of Firefox and Chrome (other browsers might work for some content, but only these two will be officially supported).

We expect to resolve most of those limitations (except for things which are restrictions imposed by the platform) during the 5.x release cycle, and to support a wider range of browsers as the platform matures – at which point we will drop the early-access label and make WebGL a fully supported build platform in Unity.

The Unity Web Player in Unity 5

While WebGL is a very exciting new technology, currently, the Unity Web Player is still the most feature-complete and the most performant solution for targeting the web with Unity, and will stay as a supported platform in Unity 5.x. It may be a very useful strategy to dual-publish your content using both WebGL and the Web Player, in order to get the widest possible reach for your audience.

Longer term, however, we expect the performance and feature gap between the Web Player and WebGL to narrow significantly, and we expect that browser vendors will make the Web Player obsolete by dropping support for plug-ins, at which point WebGL will become the prime solution for targeting the web with Unity.

 

Tutorial: Behave 2 for Unity

Background

The Behave project, along with Path (now open-source under the MIT license), was among the first projects I did in Unity after picking it up at the beginning of 2008. In an all too familiar story, I created the tools to replace those I had used at my previous job, but ended up focusing more on the tools than on my game project.

I first shared the project at Unite 08, after Tom Higgins and David Helgason cornered me in a bar and persuaded me to give an open mic talk the next day on what at the time was the only full-on middleware solution integrated in the Unity editor.

This was Behave 0.3b. 1.0 was released a couple of months later and 1.2  went live in 2010 as one of the launch packages of the Asset Store.

While at Unity, my schedule was pretty packed, so 2.0 was quite a while under way. Large refactors, support for multiple engines and platforms, plus feature creep did not help much either. But here we are now on 2.3.2 – fortunately, updates since the release of 2.0 in August 2013 did not take the time that 1.4→2.0 did.

Overview

So with the history lesson out of the way, what is Behave, really? In short, it is an AI behaviour logic system. It allows you to visually design behaviour trees, link them directly to your own code through a highly efficient compiler and finally debug the trees, running in the editor or on your target device.

One of the guiding principles behind Behave is to avoid systemic requirements as much as possible – that is, designs which might chain your runtime integration to a certain way of operating. The result is a very lean and efficient runtime, with the integration possibilities more or less limited only by your imagination and practical needs.

Behaviour trees

Behaviour trees you say? Yes I do. A widely standardised method of describing behaviour logic, first used at scale in Halo, behaviour trees set themselves apart from methods like state machines in that they scale much better, are easy to read, and can be made responsive.

I am going to assume familiarity with state machines (as you might know them from Mecanim or plugins like Playmaker) and use them as a reference in describing behaviour trees – though I clearly cannot describe all that is behaviour trees in the length of this article.

While state machines are in the business of selecting states within which actions are performed, behaviour trees build state implicitly from their structure and focus squarely on selecting actions to perform.

This means that while state machines allow you to set up states with any number of transitions (at scale often ending up in a hard to maintain spider-web of transitions), behaviour trees have a strict set of rules for connectivity and evaluation.

Them rules

A behaviour tree is basically an upside-down tree structure – evaluation starts from the root at the top, filters through a number of interconnected control nodes and ends in leaf nodes: actions. Actions are where you interface game logic with behaviour logic, hooking up sensors and motors.

The responsiveness of behaviour trees stems from the fact that they are most often evaluated continuously, at some frame rate. Each evaluation starts at the top and, given the rules for the different control nodes, the flow is directed until an action node is hit.

Each node will then, rather than block on execution, return a signal back to its parent node. The parent then interprets, reacts and returns its own signal back up the chain until the root is reached again.

This signal can be one of three: Success, Failure or Running. Success and Failure obviously meaning that the node succeeded or failed in its task and Running meaning that the node has not yet reached the conclusion of its task and requests to get re-pinged on the next tree evaluation.

Example actions could be HasTarget, which would return Success if the agent executing the tree has a target and otherwise Failure, or GoToTarget, which would return Running while on its way to the target and then Success when it is reached or Failure when it is determined to be unreachable.
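To make the contract concrete, here is a plain C# sketch of the three signals and the HasTarget action described above. It is purely illustrative – Behave’s real handler signatures and result type differ, and the names below are hypothetical:

using UnityEngine;

// The three signals a node can return to its parent on each evaluation.
public enum NodeResult { Success, Failure, Running }

public class ExampleAgent : MonoBehaviour
{
  public Transform Target;

  // Succeeds if the agent currently has a target, fails otherwise.
  public NodeResult HasTarget ()
  {
    return Target != null ? NodeResult.Success : NodeResult.Failure;
  }
}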

Behave integration

So while the graphical editor lets you easily connect together these control nodes and actions, you of course need to hook this up to your AI agents at some point.

This is achieved via the one-click compilation of your Behave library (the asset containing your trees), which for the Unity target compiler generates a .NET assembly. As it is output in your Assets folder, Unity will automatically compile it in with the rest of your code.

What this means is that once you hit compile, you will be able to access generated classes from your code, representing your behaviour trees at runtime.

The central method of the generated library class “BL[AssetName]” is the InstantiateTree method. It takes as parameters first the tree type you wish to instantiate (via an enum generated from the tree names in the editor) and second the agent you wish to integrate the tree with. The agent is the class which will need to implement the action handlers described earlier.

Flexibility

Out of the box, Behave offers two ways of implementing action handlers. The default is to derive from the IAgent interface. In this case Behave will reflect your class for action handlers on instantiation, much like the Unity messaging system.

The second way of implementing action handlers is to define an agent blueprint in your library. At runtime, this results in an abstract agent class being defined, with predefined virtual handlers for all actions used by the trees supported by that blueprint. This method is less flexible, but removes the overhead of reflection on tree instantiation and gives you auto-complete on action handler methods in your favourite code editor.

With handlers defined, you then simply call the Tick method on the tree instance at a frame-rate or in response to some game event and the tree will in turn call one or more of your action handlers, depending on its design.

For core character behaviour logic, I usually create a coroutine named AIUpdate or turn Start into a coroutine, containing a loop which ticks the tree and then yields WaitForSeconds for one divided by the frequency property of the tree. This property serves no other purpose at runtime than to communicate an intent from the designer to the programmer.
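A rough sketch of that pattern, assuming tree holds the instance returned by InstantiateTree and that Tick and Frequency are exposed as described above (the exact names and signatures may differ from the shipped API):

// Started with StartCoroutine (AIUpdate ()) from the agent's Start method.
IEnumerator AIUpdate ()
{
  while (Application.isPlaying)
  {
    // Evaluate the behaviour tree once; it calls back into our action handlers.
    tree.Tick ();

    // Yield for 1 / frequency seconds – the designer-set intent for the tick rate.
    yield return new WaitForSeconds (1f / tree.Frequency);
  }
}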

So as you can already see at this point, Behave does indeed follow the design goal of low complexity, leaving design decisions in the integration layer completely up to you.

The Behave runtime has much more runtime API and integration flexibility, but that is unfortunately a bit much to cover in this overview.

EOF

I hope you found this introduction useful and that you will consider using Behave for your next AI project. I would recommend you check out the following sources for more information on behaviour trees:

And of course, more information on Behave can be found at:

Have fun!

Emil “AngryAnt” Johansen

Testing by gaming, ACCU and Ukraine

For several weeks I’ve been preparing a blank playtesting solution that contains integration tests based on Unity Test Tools and stubs for game objects.

Earlier this month, I was lucky enough to have the opportunity to present it at a workshop held at one of the best programming conferences in the world – ACCU conference. Each year, ACCU attracts top speakers from the computing community including Andrei Alexandrescu, James Coplien, Tom Gilb, Robert Martin, Kevlin Henney, Andrew Koenig, Eric S. Raymond, Guido van Rossum, Greg Stein, Bjarne Stroustrup and Herb Sutter.

Workshop attendees got access to the project source files which they could then work on in Unity. Scenes that contain tests are called “Level1”, “Level2” and so on. When you open the scene, the tests fail. The challenge is to start implementing functionality to make tests pass, and as you do so, the game starts growing.

When all the tests pass, you can proceed to the next level, and the process itself is like a game. After completing each level you can open the scene called “Game” and try it out.

If you’d like to play around with it, the Growing Games Guided by Tests project is available on GitHub. The game involves building an ultimate weapon of intergalactic destruction to fight back an invasion by green aliens: Have fun!

Solution packages are available for each level. If you get stuck, just navigate to the Solutions folder and open the package with the corresponding level name. Using these solutions you can navigate back and forth within the exercise. “Level 0” reverts the solution to its initial state.


My workshop gimmick is to trade chocolate coins for audience attention. If someone asks me a question or points to a mistake, I give them a chocolate coin in exchange. As it was a live coding session, I made both intentional and unintentional mistakes but the audience always noticed them.

They also asked lots of questions, even asking me to show how the tests were made and how to make one from scratch. That input will let me make my next workshop much better. By the end I was right out of chocolate coins. Thanks guys!

On the conference’s second day I volunteered to hold a lightning talk: “Public Speaking for Geeks.” I’ve been holding talks since 2011, and when I delivered my first conference address it didn’t go smoothly. Actually, it was a failure. But I’ve learned a lot since then and I wanted to inspire people to try public speaking, learn from their experience and try again.

As you might already know, Unity Technologies has an office in Odessa, Ukraine; a beautiful city on the Black Sea coast. The Odessa office is home to 11 engineers from 3 teams: SDET, STE and Toolsmiths, and it’s where I’m based.

Ten minutes before my lightning talk, I got a message from my friend Tom Gilb: “Forget public speaking. Tell them about Ukraine!” It came as a shock. I suddenly realized how much I wanted to tell the truth about Ukraine, to tell people what has happened and how it affects us.

In a strange way this helped keep me calm and meant that my Public Speaking for Geeks address went well. Already, I had another idea for a talk I really wanted to hold.

The feedback I received after my Geek talk was very positive, and a number of people approached me the following day and told me that, after hearing my talk, they had also submitted lightning talk proposals. And that gave me extra motivation to talk about Ukraine.

In the end, the act of explaining the situation in my homeland to my audience made my talk a very emotional occasion, not least because of the feedback and support I received from so many people. ACCU, I already miss you.
