We're using it as the render engine for our visual live-programming environment vvvv: https://visualprogramming.net It allows you to play around with the engine fairly quickly. To get an impression, here is an intro tutorial that shows it in action: https://youtu.be/Cs60A_pSIy0 Also check out FUSE, which builds on top of vvvv/Stride: https://www.thefuselab.io/
It looks like a cool project, though very early in development. I'd almost consider using it over Unity, but it seems like too much work and too buggy right now. I hope the main devs are able to get together and figure out the rough points. I know it's a community project, but there are always some champions helping direct everything.
The engine was first created as Paradox by the company Silicon Studio (they worked on some games and some middleware for post effects for other renderers).
It was then renamed to Xenko for some reason.
Once it was open sourced in 2018, we wanted to make it part of the .NET Foundation, but we didn't have the trademark rights for it. Or rather, Silicon Studio was taking a long time to answer our requests to hand over the trademark. We voted to change the name, which allowed us to join the foundation.
The editor is Windows-only, but you can export to all these platforms, including macOS.
A Stride project is just a normal Visual Studio solution, so you can build the solution on other platforms too, as long as you don't use the asset compiler, which is also a Windows-only application.
I am curious if this is actually using the official .NET 6+ runtime and how gen-2 collections are managed. GC can become a monster problem once you start scaling up the number of things in a system like this.
Not sure how this works under the hood, but if I made a game in .NET I'd basically have a ban on any per-frame allocations. That means you have to be quite non-idiomatic when it comes to C# within the game loop: no dynamic collections (resizing will allocate), no "foreach" (it risks allocating enumerators). There are very few "zero cost abstractions" in C#, so when you want zero cost, you must also avoid some abstractions. Basically you use a very C-looking subset of C# for the parts of the code that run in the game loop. Allocate everything up front, use pools and arenas, etc. etc.
It's easy to trip up, but the tooling is good, so it's easy to notice when you've accidentally introduced an allocation. The engine isn't the problem; it's that when you expose C# as your scripting language, users might think they can write "normal" C# in their game scripts. Which you probably don't want to do.
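For illustration, a minimal sketch of what that C-looking subset tends to look like (all names here are made up): a pool of structs allocated once up front, updated with a plain indexed loop, so nothing in the per-frame path touches the heap.

    // Hypothetical sketch: a per-frame update that never allocates.
    struct Particle
    {
        public float X, Y, VX, VY;
    }

    sealed class ParticleSystem
    {
        const int MaxParticles = 10_000;

        // Allocated once up front; never resized during gameplay.
        readonly Particle[] _particles = new Particle[MaxParticles];
        int _count;

        public void Update(float dt)
        {
            // Plain indexed 'for' over an array: no enumerator, no allocation.
            for (int i = 0; i < _count; i++)
            {
                ref Particle p = ref _particles[i];
                p.X += p.VX * dt;
                p.Y += p.VY * dt;
            }
        }

        // "Spawning" just claims a slot from the pre-allocated pool.
        public bool TrySpawn(in Particle template)
        {
            if (_count >= MaxParticles) return false;
            _particles[_count++] = template;
            return true;
        }
    }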
> I'd basically have a ban on any per-frame allocations.
I have a prototype where I intentionally invoke the GC after every frame. My reasoning is that some amount of GC will eventually need to occur, so why not do it as frequently as possible, and at the exact moment where we can compensate with other measures, such as the frame scheduler and time-step abstractions.
Even with heavy allocation (e.g. deserializing the scene graph from a wire protocol before every frame), this approach seems to keep GC pauses bounded to ~1-5 milliseconds in my implementations.
If I did not explicitly call GC when I thought best, the runtime would decide to run a 100-1000 ms collection at its convenience and the entire experience would become worthless to human perception.
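Roughly, a sketch of the idea (RenderFrame and WaitForNextFrame are stand-ins for the real frame scheduler):

    using System;
    using System.Diagnostics;

    static void RenderFrame() { /* deserialize scene graph, simulate, draw */ }
    static void WaitForNextFrame(TimeSpan gcPause) { /* sleep until the next tick, shortened by gcPause */ }

    while (true)
    {
        RenderFrame();

        var sw = Stopwatch.StartNew();
        // Collect at a predictable point every frame so garbage never
        // accumulates into a long, unscheduled gen-2 pause.
        GC.Collect(0, GCCollectionMode.Forced, blocking: true);
        sw.Stop();

        // Let the frame scheduler / time step compensate for the measured pause.
        WaitForNextFrame(sw.Elapsed);
    }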
This is NOT a great strategy. Although it's better than waiting for the 100-1000 ms collection, you're still at the mercy of the GC, which can be very unpredictable, and you will certainly miss some VSyncs doing this, even if you're supposed to have some "room".
Missing a VSync on mobile means the refresh rate briefly drops to 30 Hz; there is no in-between (i.e. no 54 FPS or the like), which is quite noticeable.
The only "great strategy" is to have zero allocations (or close to it) while gameplay is running, and if you want to go the extra mile, disable the GC completely and invoke it only at appropriate moments (level loading, bringing up the menu, etc.).
> The only "great strategy" is to have zero alloc (or close)
This is the path I am taking.
Ultimately, I would try as hard as I can to not allocate, but having a good plan for dealing with some per-frame blips is going to be essential. It gives me a reasonable starting point to work with on the journey to zero alloc.
This is a great strategy as long as you can keep the pause times low enough - especially if you manage to hide the gc behind vertical sync. Getting the exact timing for that is a little tricky, but if your game is running fast enough you've probably got upwards of 5ms waiting to be used while your main thread blocks inside Present/SwapBuffers. (Because those functions are native OS APIs, there won't be managed code on the stack and the GC should be able to run.)
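As a sketch of the timing (SwapBuffers here is a placeholder for whatever native present call your engine uses):

    using System;

    static void EndFrame()
    {
        // Request a non-blocking (background) gen-2 collection...
        GC.Collect(2, GCCollectionMode.Optimized, blocking: false);

        // ...then block in the native present call. With no managed frames
        // running on this thread, the GC is free to do its work during the wait.
        SwapBuffers();
    }

    static void SwapBuffers() { /* placeholder for the native Present/SwapBuffers call */ }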
That Roslyn analyzer linked by notanaverageman below is really cool; I didn't know about it. But I don't think even that analyzer can tell if you call a method which itself allocates memory.
However, it's not very complicated to write a simple IL analyzer using Mono.Cecil to go over the raw IL, instead of using Roslyn. The idea is to look for the newobj or box IL opcodes, but also to recurse into called methods (when meeting the call/callvirt opcodes) by using Cecil to find where the call leads and loading that method.
This runs after compilation instead of during it, but it doesn't require actually running the code, so it's still a form of static analysis.
I threw together a simple demo that has some bugs but is generic enough and should work. It's not very efficient, though.
(it doesn't run there since Godbolt apparently doesn't support NuGet packages)
It's self-contained and will run as a console application if you just copy and paste the code into an IDE. You need to add Mono.Cecil as a NuGet package, though.
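The core of it looks roughly like this (a trimmed-down reconstruction of the idea, not the exact demo; "MyGame.dll" is a placeholder):

    using System;
    using System.Collections.Generic;
    using Mono.Cecil;
    using Mono.Cecil.Cil;

    class AllocationFinder
    {
        static readonly HashSet<string> Visited = new HashSet<string>();

        static void Main()
        {
            var module = ModuleDefinition.ReadModule("MyGame.dll"); // placeholder target
            foreach (var type in module.Types)
                foreach (var method in type.Methods)
                    Check(method, depth: 0);
        }

        static void Check(MethodDefinition method, int depth)
        {
            if (method == null || !method.HasBody || !Visited.Add(method.FullName)) return;

            foreach (var ins in method.Body.Instructions)
            {
                // Direct allocation sites.
                if (ins.OpCode == OpCodes.Newobj || ins.OpCode == OpCodes.Box ||
                    ins.OpCode == OpCodes.Newarr)
                    Console.WriteLine($"{new string(' ', depth * 2)}{method.FullName}: {ins.OpCode}");

                // Recurse into callees so indirect allocations are caught too.
                if ((ins.OpCode == OpCodes.Call || ins.OpCode == OpCodes.Callvirt) &&
                    ins.Operand is MethodReference callee)
                {
                    MethodDefinition resolved = null;
                    try { resolved = callee.Resolve(); }
                    catch (AssemblyResolutionException) { /* external dependency */ }
                    if (resolved != null) Check(resolved, depth + 1);
                }
            }
        }
    }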
Stride is written in C#, uses the official .NET 6, and was developed by world-class engineers. It takes this topic very seriously. The game loop has almost zero allocations. It's a very good learning source for writing high-performance C# for real-time applications.
As a Stride user you have to follow similar guidelines in your scripts to avoid pressure on the garbage collector. But that's not so difficult once you get the hang of it.
Official runtime, yes. Having worked with the source, I can safely say it has been written to not stress the GC too much.
But it's a very modular engine; you can change parts of it to better accommodate your needs. I'm currently finishing my "no-allocation" ECS library and will 100% try to integrate it into my fork.
The .NET platform is server-focused; it does a lot at runtime, including recompiling a bunch of functions.
This was a huge problem before Unity had the IL2CPP target: you had to deal with the slow cold start, and you also had to manually call your hot functions up front to avoid having them recompiled during gameplay, which would cause a ton of micro-stutters.
Hence people prefer languages like Lua for their scripting needs, even in Unity [1]: great performance for an interpreted language, and you can ship it on console/mobile platforms that forbid JIT.
Now that .NET has a dedicated AOT compiler, things could improve, but I doubt this engine supports NativeAOT.
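There is at least a per-method escape hatch nowadays: since .NET Core 3.0 you can opt hot methods out of tiered recompilation, so they're jitted fully optimized once and never recompiled mid-gameplay. A small sketch:

    using System.Runtime.CompilerServices;

    static class HotPath
    {
        // Skip tiered compilation for this method: it is compiled fully
        // optimized the first time and not recompiled during gameplay.
        [MethodImpl(MethodImplOptions.AggressiveOptimization)]
        public static float Lerp(float a, float b, float t) => a + (b - a) * t;
    }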
Generally speaking if a .NET game has GC pauses, it's probably because they didn't try. You can represent most game state using structs and arrays without significant ergonomics problems, and your GCs for the classes you do heap allocate will be much shorter if you design your data structures to be quick to trace.
Things were different for Unity games using the Boehm collector though, since it's a conservative non-generational collector that as a result is very slow to collect and collects more frequently. That's why you saw a lot of frequent pauses in Unity games (my understanding is they no longer use Boehm).
The game I'm currently working on GCs maybe 1-2 times per minute and the pauses are short enough that it doesn't drop frames, despite the fact that it frequently allocates strings and uses 'async Task' extensively. If you tune the collector settings you can control this even more, and you also have the option to manually force GCs to happen while the game is waiting for vsync, which hides them further.
Visual Studio has a built-in allocation profiler that can help if you're looking to optimize your game for fewer GCs. To make tracing faster, avoid having any reference types in your struct arrays (use handles instead); that lets the GC skip tracing your big arrays of structs entirely. So, for example, the big tables of draw calls I use refer to textures by numeric handle instead of by reference, which means I can have a 50 MB table without making GCs slower.
Using handles instead of references also means that copying the struct doesn't require write barriers, which will boost your overall perf a bit.
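A minimal sketch of the pattern (names are illustrative, not from my actual codebase):

    // The big table holds only value types, so the GC never traces it.
    struct TextureHandle
    {
        public int Index; // index into the small side table of real objects
    }

    struct DrawCall
    {
        public TextureHandle Texture; // a plain int, not an object reference
        public int VertexOffset, VertexCount;
    }

    sealed class Renderer
    {
        // The only GC-visible references live in this one small array...
        readonly Texture[] _textures = new Texture[1024];

        // ...while the large table is pure structs the GC can skip entirely,
        // and copying a DrawCall needs no write barriers.
        readonly DrawCall[] _drawCalls = new DrawCall[100_000];

        Texture Resolve(TextureHandle h) => _textures[h.Index];
    }

    sealed class Texture { /* stand-in for a real GPU texture wrapper */ }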
Unfortunately you mostly don't do low-level optimisations in C#. You can manipulate raw memory but unsafe C# is horrifyingly unsafe - far more dangerous than writing C - and code written with it has to navigate a minefield to interact safely with ordinary C#.
Instead, heavy optimisation in C# is usually "high level but knowing the runtime in great detail" optimisation. Stuff like knowing the hairy details of the GC implementation (eg by reading Pro .NET Memory Management and/or the single 30,000 line C++ file that contains the GC implementation [1]) and using this knowledge to write ordinary C# code in such a way that the GC can handle its usage patterns very efficiently, or doing things like object pooling that try to cut the GC out of the picture altogether.
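The pooling side of that is conceptually tiny; a minimal, generic sketch:

    using System.Collections.Generic;

    // Recycle instances so the GC never sees them become garbage.
    sealed class Pool<T> where T : class, new()
    {
        readonly Stack<T> _free = new Stack<T>();

        public T Rent() => _free.Count > 0 ? _free.Pop() : new T();
        public void Return(T item) => _free.Push(item);
    }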
Essentially, it's knowing what you want the machine to do and figuring out a way to trick the runtime into doing that instead of what it wants to do. I find it massively frustrating and after doing it for a few years I moved back to languages that try to do what you tell them to do instead of the other way round. Some people seem to thrive on it, though, and manage to get impressive results - see, eg. Marc Gravell's blog.
I've shipped games in C, C++ and C# and I don't understand your basis for 'unsafe C# is more dangerous than C'. What is the argument here? Modern C# especially has things like ref returns and span to allow you to do type safe, bounds checked operations on stack allocated buffers, mmap'd files etc in a way that isn't possible in C.
I do a lot of gnarly stuff in C# on .NET 4.8 (no spans! it's too old!) and still almost never run into memory safety issues or crashes, it's far more stable than my experiences working with C/C++ codebases.
I recently got rid of one of the main crash sources in my codebase - a line for line port (unsafe, pointers) of a C library that I was using. I replaced it with a from scratch C# rewrite using 'ref' (no pointers) and got better performance with no memory safety crashes.
So I'm mostly talking about using unsafe C# as a way of doing things that the runtime otherwise prevents you from doing - i.e. you're in a C# codebase and want to implement some data structure or algorithm that doesn't sit well with the GC/runtime. As an example, perhaps it relies on many contiguous buffers larger than the LOH threshold, or uses many medium-lifespan objects (a common usage pattern that GC, at least of the .NET variety, has no better answer to than "try not to do that" but which arena allocation handles with essentially no overhead).
I'd distinguish between those use cases and ones where you're using unsafe C# to interface with external unmanaged code (I've done both). Such interop-like use cases are also rather hairier than writing C against those APIs because with C at least you can generally consume a header file and have some confidence that your compiler understands the binary interface you'll be interacting with at runtime, whereas in C# the burden of trying to ensure that the data types and signatures are correct against the actual binary loaded is on you, and it can be a heavy one especially if you want your code to work cross platform. (Yes, MS have come up with things like C++/CLI that should help, but have shown no commitment to them. If you want things to reliably work across platforms and in the future you're basically stuck with P/Invoke and unsafe.)
But the dangers are worse for the former kind of code. Here the worst risks arise around the interactions between the unsafe code and the safe code. Typically in this sort of situation you want to wrap your nasty unsafe code in a nice, safe API. You want to make it so that ordinary C# callers can't make mistakes with your managed API that would cause crashes, leaks, etc. In fact, it's more than "want". You really have to. It's C# and people need to be able to program in it like it's C#, without expecting that a missed "using" somewhere is going to cause a segfault. And that can be really hard to do. The crux of the problem is that the GC wants to manage the lifetimes of the managed objects and offers no way to hook into those lifetimes other than finalizers, with their well-known limitations/downsides, or disposal, which you can't rely on callers to invoke. If you create/manipulate some unmanaged resource and wrap it in a managed API there are lots of fun little gotchas to run into, such as the fact that an object can be garbage collected while a method on it is executing. [1]
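To make that last gotcha concrete, a hedged sketch (Marshal stands in here for some real unmanaged resource):

    using System;
    using System.Runtime.InteropServices;

    sealed class NativeBuffer
    {
        readonly IntPtr _handle = Marshal.AllocHGlobal(1024);

        public void Write(byte value)
        {
            IntPtr h = _handle;
            // The field read above was the last use of 'this', so the JIT may
            // treat the object as unreachable from here on. If a GC runs now,
            // the finalizer can free _handle while we still hold the raw pointer...
            Marshal.WriteByte(h, 0, value);
            GC.KeepAlive(this); // ...so we must explicitly extend the lifetime.
        }

        ~NativeBuffer() => Marshal.FreeHGlobal(_handle);
    }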
I believe it's for this reason that interesting unmanaged data structures written in unsafe C# and exposed as safe-to-use C# types aren't really a thing in the .NET world. It really is very hard indeed to do it safely and efficiently. Unsafe is more commonly used for interop glue, where it's better suited (but still a bit risky).
The need to do things like this has led to improvements such as spans, but I'm not a fan. A ref struct is a horribly, arbitrarily limited thing (it can basically only exist on the stack) and I found programming with Spans an exercise in frustration as a result. For one thing, the moment you find you need a span of spans you'll find yourself reaching for those grubby raw pointers again. And at best they really just give you bounds checking. They don't help at all with the hard problem: resource management (and neither does Memory).
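To spell out the span-of-spans problem: both obvious shapes are compile errors, and the workarounds push you back to Memory<T>, flat buffers with offsets, or raw pointers:

    using System;

    // Span<Span<byte>> rows;   // CS0306: Span<byte> may not be a type argument
    // Span<byte>[] rowArray;   // CS0611: array elements cannot be Span<byte>

    // Workaround 1: Memory<T> is a normal struct, so arrays of it are legal.
    Memory<byte>[] rows = new Memory<byte>[4];

    // Workaround 2: carve "rows" out of one flat buffer with offsets.
    byte[] flat = new byte[16];
    Span<byte> row1 = flat.AsSpan(4, 4);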
So the comparison with C - what do I mean by "more dangerous"? Well, it's a subjective thing, of course. I don't really mean there are more opportunities to screw up. What I mean is that when I was writing this stuff I had to work harder, think harder and move slower not to screw up. The pitfalls were less obvious, there was less prior art, fewer established practices and a general feeling that I was going against the grain. YMMV, but it's not something I'm going to miss.
I'd advise catching up on what C# 11 offers beyond just unsafe code blocks.
Based on my C and C++ years, it's hardly any different from doing optimization there: knowing what the compiler actually generates, how the runtime and standard library are actually implemented, and above all using the proper data structures and algorithms, plus a profiler.
Nah, I'm not going back to C# and don't plan to stay up to date with it. If you're talking about ref structs, Spans etc, I'm perfectly aware and have used those features in anger (in more ways than one).
I agree that understanding how the GC was built is very important. Little nuances like forcing workstation mode and explicitly calling GC.Collect can make all the difference.
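For reference: workstation vs. server GC is chosen at startup (e.g. "System.GC.Server": false in runtimeconfig.json) rather than from code; at runtime you can only verify it and pick latency modes and collection points, e.g.:

    using System;
    using System.Runtime;

    Console.WriteLine($"Server GC: {GCSettings.IsServerGC}"); // expect false for workstation

    GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
    // Explicit, compacting full collection at a moment you choose:
    GC.Collect(2, GCCollectionMode.Forced, blocking: true, compacting: true);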
You can get very far with a GC language like this. The part that bothers me is the built-ins that allocate: the TPL, ASP.NET Core, et al. If I want a truly zero-alloc C# solution, I have to go all the way to the bottom with abstractions like Socket, NetworkStream, SslStream, etc.
As somebody with C# scar tissue, I feel like one C# game engine is already too many. But, then again, I haven't touched C# since the advent of Maui & the lessened relevance of .NET Framework in favor of .NET standard. Maybe things have gotten better.
There are a surprising number of .NET game engines, ranging from Unity, over Godot, to things like this one here.
.NET is nowadays a very different beast than the .NET Framework. Maui is not the relevant factor; the modernization of the runtime, the portability, and the language itself changed it a lot.
Give it another try. You will be surprised. But do not start with Maui :)
Not true for Unity: since HPC# and Burst, they have been slowly replacing C++ with C#.
They are one of the main contributors to low-level improvements in .NET and C# since version 7, alongside the team's own learnings from Midori and .NET Native.
.NET Framework itself wasn't that bad apart from all the tying to weird Windows functions (good when writing enterprise stuff, horrible for everything else by tying you to Windows) and the reliance on IIS for web (with its engineering focused on multi-tenant hosting, being a PITA for devs doing the actual web dev).
Imho the main improvements lately for gamedevs are greater portability, Span<T>/Memory<T> (access and manage memory more freely), stackalloc (enabled by the former), and vector math intrinsics. (Add to that: C# code was never as reliant on GC perfection as Java, being more pragmatic, and its support for proper generics saves a ton of memory allocations.)
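As a quick illustration of why those matter for gamedev: a stackalloc'd Span<T> gives you a bounds-checked scratch buffer with zero heap allocation:

    using System;

    Span<byte> scratch = stackalloc byte[256]; // lives on the stack, no GC involvement

    scratch[0] = 0xCA;
    scratch[1] = 0xFE;

    // Slicing is just pointer + length arithmetic; nothing is copied or allocated.
    Span<byte> payload = scratch.Slice(2);
    payload.Fill(0);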
> .NET Framework itself wasn't that bad apart from all the tying to weird Windows functions (good when writing enterprise stuff, horrible for everything else by tying you to Windows) and the reliance on IIS for web (with its engineering focused on multi-tenant hosting, being a PITA for devs doing the actual web dev).
Personally, I feel like .NET Core and embracing cross-platform was exactly what the ecosystem needed. Albeit not quite 1:1 with the older versions of the framework (the Mono project tried that approach, with mixed results), things like Kestrel instead of IIS didn't feel like a high cost to pay. Plus, I think that the C# language itself is reasonably nice.
It's a bit curious how C# seems to be more popular in gamedev than Java, both of the languages and runtimes not being that different from one another. For whatever reason projects like jMonkeyEngine or LWJGL don't get the same popularity as Unity or even Godot.
> It's a bit curious how C# seems to be more popular in gamedev than Java, both of the languages and runtimes not being that different from one another
I think we can thank Miguel de Icaza for that. Mono/Xamarin allowed Unity to run on smartphones and turned it into a textbook disruptive innovation.
I'm not sure I follow: having access to lower level functionality is good, of course, but I don't think that's quite the deciding factor for the majority of folks, who want to knock together a game. Number crunching will be relevant for a limited subset of game projects out there, otherwise Lua wouldn't have seen widespread usage, or maybe something like GDScript wouldn't have been created in the first place.
> Span<T>. In .NET Core 2.1 we’ve added Span<T> which is an array-like type that allows representing managed and unmanaged memory in a uniform way and supports slicing without copying.
And yet, C# and .NET (or Mono) were already used by popular engines like Unity all the way back in 2005, so surely there's more at play here! Maybe what held Java back were social reasons, like concerns around licensing, or how easy it would be to integrate JDK or package it back in the day and so on.
You should try to write allocation-free code in Java and C# and see which feels easier. I don't think it's a factor of what was tried; it's a factor of what survived, and C# engines are feasible while Java engines aren't.
This. Something must have happened internally at Microsoft because .NET core & WSL all popped out around the same time and it made Microsoft look like a vaguely reasonable gang again.
Azure. If you google long enough, you will find interviews with the key people behind ASP.NET Core and .NET Core about their proposal meeting with Guthrie (the head of Azure), and how surprised they were at how easy it was to get it approved. The same applies to how PowerShell happened. WSL, I have no idea.
And both Unity and Godot only use C# on the surface; under the hood it's mostly C++ (although apparently in Unity this has started to change, and parts are being rewritten in their low-level C# dialect using the Burst compiler).
IIRC the low-level stuff isn't exclusive to Unity's Burst compiler but rather enabled by the Span<T>/Memory<T> and stackalloc features introduced in the C# language (they might've used them more aggressively, though there shouldn't be anything stopping equally aggressive compilation with the mainline CoreRT tool).
.NET Framework and .NET Core (which I think you mean instead of Standard) are different beasts.
In this case, .NET Core doesn't have WCF support. The Framework is richer for something like a game engine.
I would be curious about your aversion to C# in general, though. I hear a lot of people saying it's bad but not explaining what they're trying to use it for, etc. It's just a tool.
1. Syntax, verbosity, suggested coding conventions feel like sandpaper on the elbows
2. Visual Studio product slow & clunky to drive, makes me want to cry, esp when VSCode so good. SO GOOD.
3. WPF (never again, ever, not once, ever never fucking ever)
4. NuGet sucks shit through a crazy straw. Hosting your own private packages is basically sadism with a whip hiding behind a door marked “peace&quiet”.
5. Online documentation & community support is consistently SEO’d to be a bunch of articles generated out of India with immature/ridiculous CSS choices. This was circa 2017.
So let me try to address some answers at a high level.
1. I don't see this as an "issue", more a matter of taste. You're not forced into one way, but the compiler has a preference. Usually it's the path of least resistance.
2. You can use whatever IDE you like; actually, I'm proud to say I use vim exclusively, and while I agree VS is heavy and strange, you're not tied to it.
3. A game engine will need some UI framework. If you don't like this one, you should see some of the others; they're not far off. I agree it could be improved, but that has nothing to do with the language as a whole.
4. Why? I host my own very easily. The company I work for hosts quite a significant number, all fed from build pipelines. So if you don't like it, feel free to share your experiences; I'd love to hear them.
5. I don't know what to say about that in relation to the language. Some of the best articles I've learned from have been Indian. I don't know why it matters where they're from, to be fair. In fact, during my time at a company with a suite purchased from Microsoft, I was supported extremely well by the previous engineers, who were primarily Indian. So I don't buy the stereotype.
Aside from that, it's a great language, so you have nothing else? Nothing about its garbage collection, state management, or compile-time issues? Just general nonsense?
You asked me what my reasons were for disliking C# (circa 2017; remember, I said it's been a while, so these are the impressions I left with), and then when I gave my reasons, as you requested, you changed your tune and said "well, those are all personal preferences." Yeah, I don't know if you noticed, but a lot of the time things boil down to developer preference.
For example, I’ve proposed to teams before that we write new necessary services in different languages than the vogue one for the team, for performance purposes. The (arguably reasonable, despite being displeasing) answer given was “no, we’re a JavaScript shop, it’s our preference.” Software dev is social as much as it is technical.
So, all things considered, if a language has the typical feature set of garbage collection, FCF/FP patterns, some flavor of OO, a framework for interacting with the OS for things like threading and file IO and networking, a package manager and an ecosystem, and is maybe even holding hands with LLVM, the average enterprise CRUD SaaS software developer (me in 2017) isn't going to need to make the investment to implement the same code multiple times and benchmark to make sure they've picked the most technically optimal system. It's going to boil down to personal preference, or indirectly the personal preference of available engineers, or the legacy of code left behind by other engineers, and I don't really care that you don't think my personal preferences are valid.