Yes, the decryption happens in hardware. To your OS (and any capture software running on it), the place where you see the video is just an empty canvas onto which the hardware renders the decrypted image.
I think the noise emissions of a successful launch already make it an unattractive and potentially hazardous (for your hearing) place to live, especially considering SpaceX's launch frequency.
"The US Government studied what regular sonic booms do to people during the supersonic jet craze in the 1960s and 70s! It drives them to murderous rage. While the 1964 OKC test had multiple events per day, they were limited to 2 psf."
With a difference of 5 dB, one approximation for a Starship flyback boom at 20 km is a ∼50% increase in loudness over the Concorde boom (where 9 dB represents a loudness doubling).
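Spelling out the arithmetic behind that estimate, taking the 9 dB-per-doubling figure at face value:

$$2^{5/9} \approx 1.47$$

i.e. roughly a 50% increase in loudness.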
Unfortunately not. I can't say for current gen, but the 5000 series APUs like the 5600G do not support ECC. I know, I tried...
But yes, most Ryzen CPUs do have ECC functionality, and have had it since the 1000 series, even if not officially supported. Official support for ECC is only on Ryzen PRO parts.
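If you want to see what a given board/CPU combo actually reports, one rough (and not fully authoritative, since the firmware can misreport) check under Linux is:

```
# needs root; look for "Error Correction Type" under the Physical Memory Array
dmidecode -t memory | grep -i "error correction"
# check whether the kernel's EDAC driver bound to the memory controller
dmesg | grep -i edac
```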
They have a video showing an eInk display playing video at 60 Hz. In contrast to a previous video, where the display was running at 2.4 Hz and the footage was then sped up 10x, this one is not sped up. What kind of black magic is this?
> As you'd expect, the embryo was only tiny and measured just 27cm long.
27cm is not exactly what I would call tiny. For comparison, this is what Wikipedia has to say on the topic of ostrich eggs:
> on average they are 15 cm (5.9 in) long, 13 cm (5.1 in) wide, and weigh 1.4 kilograms (3.1 lb)
It's almost twice as long. Talk about megafauna.
Looking for alternative sources, I found this:
> The unhatched dinosaur’s 24-centimetre-long skeleton is curled inside the egg, with its head tucked tightly into its body. The egg is 17 centimetres long and 8 centimetres wide.
Okay, so they were talking about the size of the dinosaur if it stretched out of its curled position inside the egg. The egg meanwhile is a little larger than an ostrich's egg. Still not tiny by any means, but slightly less mindblowing.
The BSDs have diverged significantly since then and not just in userland. Unlike Linux distros they do not all have the same kernel. There are of course common parts in their kernels, many of which date back to Unix, but there are also big differences between all of them.
I was also surprised to see Sailfish OS, MeeGo and Maemo listed separately from Linux, but my guess would be that the list comes from the build system of curl: everything that is its own build target is listed there.
I recently used this (via https://depenguin.me/) to install FreeBSD from a Linux Rescue Image on a Hetzner root server. Hetzner sadly discontinued the FreeBSD Rescue Image.
This installation method uses KVM to boot the mfsBSD image, giving the VM the actual hard drives to install on. The one thing that tripped me up was that the network interface presented to the VM did not use the same driver as the physical network interface. So the FreeBSD installation configured (in my case) `em0`, but once I rebooted into FreeBSD, the network interface was `igb0`.
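For anyone hitting the same thing, something along these lines in /etc/rc.conf on the installed system sorts it out (DHCP here is just for illustration; your addressing will likely differ):

```
# /etc/rc.conf -- the installer wrote an ifconfig_em0 entry, but on the real
# hardware the NIC shows up as igb0, so the entry has to be renamed
ifconfig_igb0="DHCP"
```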
> Surely they can transfer data through water efficiently enough?
Actually, no. Water absorbs most of the electromagnetic spectrum pretty well, severely limiting the communication range. So you're limited to low frequencies or acoustic communication. Both have a low bandwidth, so forget live video footage.
Should be within the realm of the possible to use a Kevlar-reinforced umbilical with fiber optics in it going to a buoy on the surface, though. Starlink to the rescue!
I am not sure you would need that much bandwidth from down there, though - I really can't see that much happening that quickly at the bottom of the Challenger Deep.
What if you could "stereolitho-encode" a bunch of data in an image, then just send the image and let the receiving sub decode the image for the data, as opposed to a bunch of linear sonic-"packets"?
For example, what if the image was a pic of various graphs and stats of the status of the machine, but at the same time other critical information was encoded into that Status-Page/Dashboard-style KPI image?
It is likely I've missed something because your idea seems clever and mine is dumb, but what about just bundling all the data up together and then letting some compression algorithm take a swing at it? (Which I'm sure is what they do already.)
I am saying that you can likely encapsulate a boatload more payload in a small image than if you're attempting to send blobs, time sequences, or other info - especially if your image is intercepted and it's a pic of your enemy's leader/flag/whatever and they have no idea what to do with the image...
Better yet is if the image is a false map of the current in-situ conditions...
Why is it more likely that an image can store more information than just encoding the information directly in some format like binary? You have the added overhead of the actual image itself; now you have to transmit compress(bits(image) + bits(message)) when you could just do compress(bits(message)).
Originally the topic was undersea exploration. There’s no need for steganography, the issue is signal strength, not obscuring communications. There isn’t an enemy to intercept these messages.
I didn't realize that - I was just trying to convey that perhaps, if a secret message were sent via sonar, an image encoding might be efficient: the block of info one wishes to convey might otherwise be a really long stream, whereas encoding it into an image would result in shorter sonic comms...
When pointing out that HDDs can outperform these SSDs, 'sequential' is the key word. I regularly pull remote backups with syncoid (i.e. `zfs send | zfs receive`) and over time that fragmented the receiving side considerably. In the end `zpool list` showed over 80% capacity and 40% fragmentation. The hard drives were seeking constantly and the syncoid task would take over eight hours to complete. I replaced the disks with SSDs and now the task completes within 20 minutes.
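For context, the fragmentation check and the pull are roughly the following (pool and host names here are placeholders for my setup):

```
# capacity and fragmentation as reported on the receiving pool
zpool list -o name,size,capacity,fragmentation backup
# pull the remote datasets, i.e. zfs send | zfs receive under the hood
syncoid --recursive root@nas.example.com:tank/data backup/data
```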
Back in the day, a common suggestion for speeding up your PC was to defragment your HDD. I didn't start using Linux until right around the SSD transition, so I've never done it there, but for setups like this, are there not still tools to do something similar?
I'm sure you got other benefits out of swapping to SSDs, but your comment just got me thinking.
No, there is no defragmentation for ZFS, unfortunately. A way to get around that is to send the pool's content to another (fresh) ZFS pool, where it would be written out sequentially. But for that you would need a second set of drives of the same (or larger) capacity.
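A minimal sketch of that pool-to-pool rewrite, assuming a source pool `tank` and a freshly created pool `newpool` (both names are placeholders):

```
# snapshot everything recursively, then replicate the whole tree to the new
# pool; the receiving side writes the data out sequentially
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newpool/tank
```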
There are ideas on how one would do an actual defrag. They are generally based on a concept called block pointer rewrite, which Matt Ahrens once said could be the 'last feature ever implemented in ZFS', as it would make everything so much more complicated that it would be hard to add new features afterwards [1].
There's no point in defragging an SSD unless the low-level controller is doing it; the controller is always presenting a false picture of the mapping between data addresses and physical location of pages.
There's no good ZFS defragging tool, although the initial send to a new pool will accomplish that. Fragmentation is just a thing with COW-style filesystems.
ZFS in particular has an architecture that's very hostile to ever moving things.
BTRFS has a design that's amenable to defragmentation, but the builtin option doesn't work with snapshots and the external programs I've tried are partial and finicky.
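For reference, the builtin option is roughly this (path is a placeholder); on snapshotted data it un-shares extents, so space usage can balloon:

```
# recursively defragment; breaks reflink/snapshot sharing, so use with care
btrfs filesystem defragment -r -v /mnt/data
```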
Long ago I worked on a graphical tool that showed disk fragmentation. Of course all the devs would test on their various hardware, pre-SSD. It was true that you could change the performance of daily tasks with some fragmentation management.
In recent years I mostly use Linux with default ext4. Linux and ext4 appear to me to regularly maintain the disk allocations somehow, but I do not have a graphical tool to show that; details welcome.
The moment your IO has to seek on hard drives, they just suck, as you experienced.
In 'almost' every user-based usage scenario an SSD is going to perform better than an HDD. About the only time an HDD is better is when you're writing out large singular data files. But even then you have to be cautious: if the drive is shared with other read/write operations, you can find the performance again drops off a cliff.
- a FreeNAS with a bunch of Samba file shares and a Plex. I tried Jellyfin, because I got annoyed with Plex trying to force me to create an account on their cloud stuff when I just want to use it locally. But the PlayStation wouldn't play videos from Jellyfin, so I stuck with Plex.
On a dedicated server with public IP addresses:
- mail (opensmtpd + rspamd + dovecot)
- blog (made with Hugo, a static site generator)
- git (gogs)
- Nextcloud
- XMPP (ejabberd)
- VPN (tinc)
Each of those services is in a separate jail, and the jail with the blog has an nginx that serves as a reverse proxy for all HTTP-speaking services.
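A rough sketch of what one of those proxy entries looks like in the blog jail's nginx config (the hostname and the jail's internal IP are placeholders):

```
# forwards one public hostname to the Nextcloud jail's internal address
server {
    listen 80;
    server_name cloud.example.org;

    location / {
        proxy_pass http://192.168.1.20:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```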
I'm considering replacing XMPP with Matrix (looking at conduit) and tinc with WireGuard. With the latter I might wait until FreeBSD 14 with in-kernel WireGuard is out.