Everything you described can be built out with this existing sensor and hardware.
You don't need an APS-C or larger sensor to get decent images. Most APS-C sensors use a different high-speed interface that won't work with the Raspberry Pi anyway.
Really, this solution from the Raspberry Pi foundation is a great start for any of the projects you mentioned. It's also cheap and highly available.
> You don't need an APS-C or larger sensor to get decent images.
I don't have the space and time to debate the merits here, but there is a reason large sensors exist: they buy you real advantages (a different aesthetic, better SNR for low-light images, among others), and I want those advantages with a hackable interface and programmatic control of whatever the sensor is capable of.
I've been doing photography with full frame sensors for a decade after upgrading from APS-C, and telling me "you don't need an APS-C camera" without understanding why I use a full frame camera or the work I produce with it isn't really helpful.
Lenses for full frame cameras are super cheap -- you can find tons of old Russian, Japanese, and East German lenses that will work really well. Many of those lenses are built like tanks and can be had for <$100, some <$50. Most of them produce very nice images and aesthetically have a much better look than what I see out of these CCTV lenses for the Pi HQ camera. CCTV lenses were never designed for art, and among other things they produce horrible out-of-focus highlights.
> The image processing code
Well yes, that's also the point: with an open source APS-C or full frame camera you can tinker to your heart's content with the image processing code.
I use Magic Lantern extensively and there's only so much you can do with it, and it's a pain in the ass to recompile code for it. Having a full-fledged Linux system with gcc, opencv, python, and pytorch at my disposal on camera, and with Wi-Fi, Bluetooth, USB, and running an SSH server, and the ability to connect arbitrary I2C and SPI sensors, would be freaking amazing, to say the least.
Wildlife camera with thermal camera trigger and a neural net that recognizes mountain lions? You got it.
LIDAR-based insanely accurate servo-driven autofocus? You got it.
Microphone array that figures out who in the picture is talking and refocuses the camera to that person? You got it.
Home-made Alt-Az tracker with built-in autoguider and remote Wi-Fi progress monitoring? You got it.
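To make the wildlife-camera idea concrete, here's a minimal sketch of the trigger logic, assuming a low-resolution thermal sensor (something like an 8x8 grid read over I2C; the function name and thresholds are made up for illustration). The hardware read and the classifier are left out -- this is just the glue decision that would gate a full-resolution capture:

```python
import numpy as np

def motion_trigger(frame, ambient_c, delta_c=4.0, min_pixels=2):
    """Hypothetical sketch: fire when at least `min_pixels` thermal pixels
    read more than `delta_c` degrees above ambient, suggesting a warm body
    has entered the frame. Thresholds are illustrative, not tuned."""
    hot = np.asarray(frame, dtype=float) > (ambient_c + delta_c)
    return int(hot.sum()) >= min_pixels

# An empty scene at ambient temperature should not fire.
cold_scene = np.full((8, 8), 21.0)

# A few pixels at body temperature should.
warm_scene = cold_scene.copy()
warm_scene[2, 2] = warm_scene[2, 3] = warm_scene[3, 2] = 30.0
```

On the Pi, a loop would poll the sensor, call `motion_trigger()`, and only then wake the camera and run the mountain-lion classifier -- keeping power draw low between events.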
And if it can be made to work with the Pi, someone will hopefully also make it work with a Jetson Nano or Xavier NX, and then, voila, I could do some neural net processing in real time on-board. I've been able to blow Canon's in-camera denoising out of the water with state-of-the-art neural nets by postprocessing RAW images, and if I had a Xavier or Nano on-board I could easily put those neural nets in-camera for convenience.
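The postprocessing pipeline being described is roughly: decode the RAW to a linear array, run a denoiser over it, and re-encode. As a toy stand-in for the learned denoiser (a plain box filter in NumPy -- the neural net is exactly the part a Xavier/Nano would accelerate), the shape of that middle step looks like:

```python
import numpy as np

def box_denoise(img, k=3):
    """Toy stand-in for a learned denoiser: a k x k mean filter via edge
    padding. A real pipeline would decode the RAW file first and run a
    trained network (e.g. in PyTorch) in place of this loop."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Synthetic example: a flat gray frame with Gaussian noise added.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = box_denoise(noisy)
```

Even this crude filter pulls the image measurably closer to the clean signal; the point of putting a GPU in the camera is to swap in a network that does the same thing without smearing detail.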
The possibilities are endless, which is why I really want this hardware so much.