That is not a very big studio or a very big production. Blender falls over in the pipeline department: the constantly changing API doesn’t allow for the extensibility needed to get a major project out the door, and the fact that only a Python API is provided is enough for most people who have worked on massive scenes with massive amounts of data to consider it a non-starter.
Saying Evangelion isn't big is like saying Minions are some irrelevant little flick. Evangelion is quite possibly the biggest series in Japan for 3 decades running. You won't find a person who has not seen it to some extent. Evangelion goods are sold everywhere at all times. You really cannot escape it. For the biggest series in Japan to use Blender is a huge sign to the rest of the industry in one of the most risk-averse countries that yes, it's good enough.
A relevant opportunity may not occur again so here is a great video by Red Bard on whether it's possible to live entirely off of Evangelion merchandise: https://www.youtube.com/watch?v=_0Qr9rztRw4
The same way Star Wars was still running in between the original series and the prequels. It had an active fan base and lots of side content that was constantly being produced.
I'm sure "major project" is a subjective label, but Flow made headlines earlier this year with an Academy Award (Best Animated Feature) and a Golden Globe (Best Animated Feature Film).
Flow is good filmmaking expressed through low-tech production, which is totally valid, but doing a lot with a little isn't going to stop Disney from one-upping itself with the next Zootopia movie, so Blender needs to handle that angle too if it's going to become a catch-all solution for every kind of production.
For sure, it was made by a small team and rendered on a single computer using the Eevee renderer (the fast, partly rasterization-based one). It's a major project, just not an enormously huge bleeding-edge major project. Here's hoping Blender can keep on rolling toward those types of capabilities.
Not disagreeing that usage in large productions is something Blender isn't really designed for, but I don't think it's for lack of Python API features (if a studio wants something specific, it can just maintain an internal fork) or the ever-changing Python API surface (versions aren't upgraded during a production anyway).
VFX studios have been using Python APIs for twenty+ years, backed by C. They were one of the first industries to use it. That's where I learned it, around the turn of the century.
3.0+1.0 was the highest grossing box office release that year in Japan and has a worldwide fanbase. The original series + End of Evangelion are considered by many critics and fans to sit among the best anime series of all time, and the Rebuild movies were absolutely huge.
Personally, I think they pale in comparison to the original series and lose a lot of what makes Eva special and interesting to begin with, so I'd kinda love to dump on them a bit, but... it's about as big of a production as it gets in the anime industry. They're of course nowhere near Pixar level or similar, but it is clearly an example of Blender being battle tested by a serious studio on a serious project.
> constantly changing API that doesn’t allow for the extensibility
You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
And these "most people" who are scared of a Python API? Weak! It should have been a low level C API! ;-)
> And these "most people" who are scared of a Python API? Weak! It should have been a low level C API! ;-)
I wouldn't frame it as "scared". The issue is that at a certain scene scale Python becomes the performance bottleneck if that's all you can use.
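To make that concrete, here's a toy sketch (hypothetical numbers, plain Python, nothing Blender-specific): translating a million vertices one tuple at a time keeps all the work in the interpreter, which is exactly the per-element overhead a compiled C/C++ API lets a pipeline sidestep with one bulk call over a contiguous buffer.

```python
import time

# Hypothetical scene: one million vertices stored as Python tuples.
N = 1_000_000
verts = [(float(i), 0.0, 0.0) for i in range(N)]

start = time.perf_counter()
# Translate every vertex by +1 on X, one element at a time in Python.
moved = [(x + 1.0, y, z) for (x, y, z) in verts]
elapsed = time.perf_counter() - start

print(f"translated {N} vertices in {elapsed:.3f}s of pure interpreter time")
```

Real scenes are orders of magnitude larger than this toy, and the interpreter cost is paid again on every element for every pass.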
> You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
This is fine if you only ever have one show in production. Most non-boutique studios have multiple shows being worked on in tandem, be it internal productions or contract bids that require interfacing with other studios. These separate productions can have any given permutation of DCC and plugin versions, all of which the internal pipeline and production engineering teams have to support simultaneously. Apps that provide a stable C/C++ SDK and Python interface across versions are significantly more amenable to these kinds of environments as the core studio hub app, rather than being ancillary, task specific tools.
If you had multiple shows in production, I would expect that standards be set to use the same platforms and versions across the board.
If the company is more than a boutique shop, I would expect them to have a somewhat competent CTO to manage this kind of problem - one that isn't specific to Blender, even!
Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
Small timers, boutique shops, and humble folks like me just try to get by with the tools we can afford.
On a related note, though: I built a Blender plugin with version 2.93 and recently learned it still works fine on Blender 4. The "constantly changing API" isn't the beast some claim it is.
> If you had multiple shows in production, I would expect that standards be set to use the same platforms and versions across the board.
Considering productions span years, not months, artists would never get to use newer tools if studios operated that way. And it really only works if shows share similar end dates, which is not the reality we live in. Productions can start and end at any point in another show's schedule, and newer tools can offer features that upcoming productions can take advantage of. Each show will freeze its stack, of course, but a studio could be juggling multiple stacks simultaneously, each with its own dependency variants (see the VFX Reference Platform).
> Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
That would be the ideal, something that can be difficult to achieve in practice. You'll find small teams of quality engineers overwhelmed with the sheer volume of work, and other larger teams with less experience who don't have enough senior folks to guide them. The industry is far from perfect, but it does generally work.
> But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
And back to reality XD
That being said, a number of studios have been reducing their Autodesk spend over the past few years, because the way the M&E division is run is honestly a sick joke. It's a free several-hundred-million-a-year revenue earner, but Autodesk foists the CAD business operations onto it and the products suffer. Houdini is getting really close, and if another all-in-one package can cover effectively everything in a way that each team sees as better, you'll start to see migrations ramp up. Realistically this comes down to the rigging and animation departments more than any other. But Maya will never go away completely: it will still be needed to open and refer to older projects from productions that used it, beyond just converting assets to a different format. USD is pretty much that intermediary anyway; it's the training and migration effort that becomes the final roadblock.
I might be in the minority, but I hate type re-definitions. I want types to just tell me how much memory a variable is using and its bit interpretation. Every variable already has a name; use that to communicate the data's representation, and if it's really important that representation mismatches are caught at compile time, wrap it in a struct. I don't want to guess how much memory the compiler decided a variable needed (though that problem is also present to an extent in C/C++).
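The "wrap it in a struct" idea is sometimes called a newtype. A minimal sketch in Python (hypothetical `Miles`/`Kilometers` wrappers, not from any library): the representation stays a plain float, but mixing units becomes visible to a type checker and to `==` at runtime.

```python
from dataclasses import dataclass

# Hypothetical unit wrappers: same in-memory representation (one float),
# but the wrapper type carries the unit.
@dataclass(frozen=True)
class Miles:
    value: float

@dataclass(frozen=True)
class Kilometers:
    value: float

def to_kilometers(d: Miles) -> Kilometers:
    return Kilometers(d.value * 1.609344)

trip = Miles(100.0)
print(to_kilometers(trip))
# Passing Kilometers(5.0) here would be flagged by a type checker,
# and Miles(1.0) == Kilometers(1.0) is False at runtime, since
# dataclass equality requires matching classes.
```

In C or C++ the same trick with a one-field struct catches the mismatch at compile time with no runtime cost.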
Understandable. Many, many years ago I sat in front of a commercial code base for the very first time, at a well-known large database company, and I hit my first level of despair quickly: everybody who had ever submitted a patch seemed to have created their own version of very basic types such as 32-bit unsigned integers, which made following the code and checking the types along the way very hard.
On the other hand, these kinds of types for different kinds of numbers are meant for higher-level checks. Not confusing some random number with a monetary amount, or, as in the example, miles with kilometers, certainly helps. The latter might have prevented the Mars mission that failed due to exactly such a unit error (Mars Climate Orbiter).
The problem, as so often, is that we have only one source-code level but very different levels of abstraction. It's similar when you try to mix very low-level concerns like caching, which must deal with implementation and hardware, with business logic (the alternative is even more abstraction and code to separate the two, but then you have more indirection and less knowledge of what is really going on when you need to debug something).
Sometimes you are interested in low-level concepts, such as the number of bytes, but at other times you want higher-level ideas expressed in your types. Yet we have only one type layer and have to put it all in there, or choose just one view and let the other suffer.
I've been working on a language for a little over a year now. There's no documentation at all, just some examples, if you can figure out how to run them. I thought building a compiler would take less time than it has, but it's been feeling like a good investment in my future of making things. It's a project I can just keep moving with forever.
PlayCanvas is a game engine that runs in browsers, but I’m not certain what its future will be. It was bought by Snapchat, but Snapchat has shut down running games in-app.
A long time ago a company backchanneled someone I actually would have listed as a reference, except I knew he was out of the country and on vacation. That left a sour taste in my mouth, and I ended up not going with their offer. So if you’re going to backchannel, I’d suggest at least not cold-calling.
Red Flags:
I spend too much time picking fonts, I have trust issues with black-box code, I spend more time making things easier to make than just making the things, I pace when I’m thinking, and I can’t solve most problems without drawing them out.
Depends on the asset. Most code or UI tools would probably be a no without a good amount of porting effort. For 2D and particle assets, you can most likely rip the image files and recreate them in another engine with some work. 3D assets usually come with an FBX file (if they were made using ProBuilder or something similar inside Unity, you can use Unity's FBX Exporter to get one) that you can easily transfer to another engine; there may be edge cases where you'd have to re-rig the assets, depending on the engine you're moving to. Animation assets will either be an FBX file you can transfer over or a Unity Animation Clip that you can also convert to FBX using the FBX Exporter. Shaders are a little bit vendor-locked if they were made with Shader Graph; you can get the generated source for them, but it is generated code, which can make it hard to read (single-letter variable/function names). There's a ton of edge cases you could run into depending on the engine you're moving to, like Z being up vs. Y being up, or engines using a different normal-map tangent basis, but there are tools that can fix those issues when you come up against them.
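Those last two gotchas are simple enough to sketch (hypothetical helpers, and conventions vary per engine, so treat the signs here as one common choice, not the rule):

```python
def y_up_to_z_up(x, y, z):
    # Rotate +90 degrees about X so the old Y axis becomes the new Z axis.
    # Some engine pairs also need a handedness flip on top of this.
    return (x, -z, y)

def flip_green(rgb):
    # OpenGL-style and DirectX-style tangent-space normal maps differ
    # only in the sign of the green channel; on 8-bit textures the
    # conversion is G -> 255 - G per pixel.
    r, g, b = rgb
    return (r, 255 - g, b)

print(y_up_to_z_up(1.0, 2.0, 3.0))  # (1.0, -3.0, 2.0)
print(flip_green((128, 200, 255)))  # (128, 55, 255)
```

Most engines and texture tools have a built-in switch for both, so these are really just what the "fix it on import" checkboxes do under the hood.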
If you’re on OSX I haven’t found any program better than Sequel Pro (sometimes referred to as Sequel pancakes). It’s one of a few programs that makes keeping a Mac around worth it.
Does anyone know how accurate these are to the raw data a CT scan produces? It looks really clean; has it been touched up any significant amount, or is this actually the quality of the machines?
I've seen slightly better scans from the Lumafield machine I have access to at $DAYJOB, but only slightly better. The scans shown here are very high quality.
You don't get access to raw data, assuming you mean something like individual X-ray images. The service runs the tomography on some cluster in the cloud and you get access to the reconstruction through a web app.
Doing metrology on production parts normally means disassembling them and putting them under the microscope or X-raying them, but sometimes there are problems that only manifest when the pen is assembled and closed. There's a lot of geometry that isn't visible externally in a pen, more so in certain markers. The writing systems are very sensitive to manufacturing tolerances, and out of spec parts are perceived by users as a bad pen or marker (which we don't want). With a normal X-ray, it is very difficult to resolve internal geometry deep in the assembled pen with any degree of accuracy.
CT scans allow us to examine internal geometry non-destructively, and they are relatively fast to run. The scans shown in that blog post I would guess took about 6-8 hours of scanning plus 1 hour of reconstruction to generate. Once you start the machine, it's completely automated from there, so you don't need a technician or an engineer sitting at the X-ray machine (which BTW is running Windows XP or something worse) taking images of parts.
I’m a radiographer, but haven’t done CT in a long time.
These look cleaned up, as the metal artifact from dense things is minimal.
I’ve scanned things and then converted the DICOM file into a format suitable for printing (I’d broken a part of a coffee grinder). These images look like the item did once moved into a 3D-printable format.
Side story: finding out what’s inside things is what CT is for. We used to scan the chip packets before loading them into the vending machine and sort out the ones with prizes inside.
To answer the question about whether these are cleaned up, these scans aren't processed beyond what our software does automatically during the reconstruction. Industrial CT scanners are designed to scan a wider range of material densities than medical scanners. We use some copper filtration to scan parts with lots of dense materials, but no extra processing is required once we've reconstructed the model.