If performed correctly, maybe. I'm not convinced that street sweeping in most cities is much more than a low-level scam so the city can cash in on parking violations. In LA, I've watched street sweepers my whole life going around, mostly kicking around dry material and moving so fast that they barely pick up anything, often followed by a parking enforcer in some kind of golf cart type vehicle. Street sweeping makes more sense when the debris is big and wet enough to be swept up, which may not be the case for most tire debris. If it's tire particulate, street sweeping might make things worse by making it more airborne.
There is nothing pre-defined, but it's not hard to scope one either: a freestanding C++20 build will do most of it already; just disable exceptions, RTTI, etc.
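As a rough sketch of what that looks like in practice (assuming GCC or Clang; the file name is just a placeholder and the exact flag set will depend on your toolchain and target):

```
# Compile against a scoped-down C++20 subset:
#   -ffreestanding  -> don't assume a full hosted standard library
#   -fno-exceptions -> strip exception support
#   -fno-rtti       -> strip runtime type information
g++ -std=c++20 -ffreestanding -fno-exceptions -fno-rtti \
    -c engine_core.cpp -o engine_core.o
```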
The short version is to adopt modern tooling (the vcpkg suggestion is an excellent one) and dependency management rather than using OS-specific tools (unless you are on Windows). Part of the reason for this mess is that the Unix world operates on an evergreen philosophy and nothing is truly backwards compatible out of the box without manual intervention. The modern web development and machine learning world runs on the opposite doctrine: programmer time is the most expensive commodity above all else; bandwidth is cheap, storage has a negligible cost, and horizontal scaling can sometimes fix compute-bound problems. Deployment processes are thus optimized for reliably reproducible builds. Docker is the classic example: bundle literally every dependency possible just to ensure that the build always succeeds, anywhere, anytime. It has its downsides, but it is still one of the most widely used deployment methods for a reason.
In the Windows world, you often find desktops with ten different copies of the "Windows C++/.NET redistributable" (the Windows version of the dynamically loaded C++/CLR standard library artefacts) installed, because each individual app has its own specific dependencies and it's better for apps to bundle/specify them rather than rely on the OS to figure out what to load. The JavaScript, Julia, Rust, and Go ecosystems all have first-party support for pulling in driver binaries that may be hundreds of gigabytes in size (because Nvidia is about as cooperative as a three-year-old child). You don't waste time fiddling with autotools and ./configure and praying that everything will run. Just run `npm install` and most if not all of the popular dependency-heavy libraries will work out of the box.
To build on these suggestions: act as if your "installation" is your "deployment" and perform all the necessary checks to ensure your dependencies are there (and are the correct versions) before running. In .NET, this is mostly handled for you by the framework. In Go, everything is mostly compiled together, so you (again) don't have to worry about it. In JavaScript or Python, it's assumed that you can `npm install` or `pip install` your requirements and that the versions will match. From there, you can treat that as your final build and run it.
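For example, assuming the project already ships a lockfile or pinned requirements file (a sketch, not a full deployment recipe):

```
# JavaScript: install exactly what package-lock.json specifies, nothing newer.
npm ci

# Python: install the pinned versions from the requirements file.
pip install -r requirements.txt

# Then treat the resulting tree as the final build and run it as-is.
```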
As a C++ game developer myself, I make sure that my dependencies are part of my repo as submodules, so that I can update/pull and build whichever version I need from tagged git versions.
So if you are tagging your releases, your final outputs, in your git source tree, then going back to a version from 20 years ago is as simple as `git checkout v0.0.1`.
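A minimal sketch of that workflow (the dependency URL, path, and tag name are placeholders):

```
# Pin a dependency as a submodule (URL and path are hypothetical).
git submodule add https://example.com/somelib.git external/somelib

# Record the release state as a tag.
git tag v0.0.1
git push --tags

# Years later: restore that exact release, submodules included.
git checkout v0.0.1
git submodule update --init --recursive
```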
Vcpkg for C++ dependencies is another option (my preference if you don't go the git submodule route), and ALWAYS USE CMAKE! Don't opt for some crazy build setup, or some internal build tool used by <insert FAANG here> that they force you to use (V8 team, if you're reading this, fix your build pipeline).
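Roughly what that looks like with vcpkg's CMake toolchain file, assuming vcpkg is installed at $VCPKG_ROOT (the package name here is just an example; check the vcpkg docs for your setup):

```
# Grab a dependency through vcpkg (fmt is just an example package).
vcpkg install fmt

# Point CMake at the vcpkg toolchain file so find_package() can see it.
cmake -B build -S . \
    -DCMAKE_TOOLCHAIN_FILE=$VCPKG_ROOT/scripts/buildsystems/vcpkg.cmake
cmake --build build
```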
KISS. Keep it simple, slick. If your package isn't available in the OS package manager, it's time to adopt a package manager or a devops practice that lets you revert to any version of the code you need (the git submodule route).
What's the point in "being" Bob Dylan when anyone can prompt an AI to output the same? And if I were to innovate how to be the next Dylan, what would my motive be to continue to create once my innovation was absorbed into language models?
Those depth estimation algorithms can't be used to distinguish a photo of a photo from just a photo. They will report false depth in a photograph of a flat photograph.
Yes, that's my point. You can't rely on the depth map in the image metadata to be the differentiator because it can easily be faked with depth estimation.