It seems like this page is updated with the followup questions asked by every visitor. That's an easy way to leak your search history and it's (amusingly) happening live as I'm typing this.
That's so cool. And horrifying. It's like back when Twitter was one global feed on the front page. I doubt that's intended behavior since this URL is generated by the share link.
For me, the gradual build up of (1) maximum likelihood estimation to (2) maximum a posteriori estimation to (3) full posterior approximation (or posterior sampling) was helpful to understand where Bayesian methods are in machine learning. Here’s a great video series by Erik Bekkers, who is at the University of Amsterdam. It assumes solid knowledge of calculus & linear algebra and takes you through the math and intuition of all fundamental ML methods: https://youtube.com/playlist?list=PL8FnQMH2k7jzhtVYbKmvrMyXD...
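For anyone who wants that progression written down in one place, here's a minimal version (my notation, not from the video series):

    \hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \; p(\mathcal{D} \mid \theta)

    \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; p(\mathcal{D} \mid \theta)\, p(\theta)

    p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{\int p(\mathcal{D} \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}

MLE keeps only the likelihood, MAP adds a prior as a regularizer, and the full posterior keeps the whole distribution over parameters. The denominator in the last step is usually intractable, which is why it gets approximated or sampled.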
Slight tangent: does anyone know of a good source for more useful blog posts on ML in the wild? Many that I come across are a funnel into some product, very short, too theoretical, aimed at beginners, or all of the above. This post strikes a nice balance in simply sharing some experiences and opinions, like you would see in blog posts on how to do software engineering well.
That's such a cool nugget of information. The Versailles one is apparently A = 435 Hz, the 'diapason normal'. Nowadays we mostly use concert pitch at A = 440 Hz. The 99% Invisible podcast has a mini-story on it:
> London’s Royal Philharmonic Society would still tune higher to about 439 Hz though. This was because the mandate specified that Concert A should be 435 Hz according to a tuning fork of a specific weight at 15 degrees Celsius. The temperature was specified so that the metal tuning fork could be accurately reproduced, but British orchestras reasoned that their concert halls were warmer than that, and so to compensate they would tune higher.
I wonder if they tested their assumption with a conformant tuning fork.
Apparently their pitch standard was an oboe at room temperature (20 degrees), which went up just enough to make up that 4 Hz difference compared to a fork tuned at 15 degrees. It's a neat little hack.
Conventional methods of rendering 3D objects and spaces rely on specifying geometry and material properties in some format. You then simulate a viewpoint using that info and physics simulations.
A NeRF takes over both the role of the file format and part of the rendering in the form of a neural network. You feed in a world coordinate and a viewpoint and you get an RGB tuple and density out of it. If you interrogate the NeRF enough you can render any traditional 2D or 3D image out of it by combining all the datapoints.
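A hedged sketch of that interface in Python (everything here is illustrative: the random two-layer network stands in for a trained model, and a real NeRF adds positional encoding and a proper architecture):

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a trained network: inputs are x, y, z plus 2 viewing angles.
    W1, b1 = rng.normal(scale=0.5, size=(5, 64)), np.zeros(64)
    W2, b2 = rng.normal(scale=0.5, size=(64, 4)), np.zeros(4)

    def nerf(xyz, view):
        """World coordinate + view direction -> (rgb, density)."""
        h = np.tanh(np.concatenate([xyz, view]) @ W1 + b1)
        out = h @ W2 + b2
        rgb = 1.0 / (1.0 + np.exp(-out[:3]))  # colors squashed into [0, 1]
        sigma = np.log1p(np.exp(out[3]))      # softplus keeps density non-negative
        return rgb, sigma

    # "Interrogating" the NeRF: one pixel = many queries along a camera ray,
    # combined with standard volume rendering.
    origin, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
    ts = np.linspace(0.0, 1.0, 32)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        rgb, sigma = nerf(origin + t * direction, np.array([0.0, 0.0]))
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    print(color)  # the rendered pixel for this ray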
One theoretical benefit is that a NeRF is a continuous function, so the resolution is only limited by the capacity of the neural network. Another cool thing is that a NeRF is trained on pictures (with info about where they were taken from), so if you train a NeRF successfully in high-res it’s like scanning an object. A major practical challenge is that it is (was?) pretty frickin’ slow to work with. I wrote a more elaborate comment about it on the previous NeRF improvement post [1]. There I closed with:
> It would be amazing to have NeRF-based graphics engines that can make up spaces out of layers of NeRFs, all probed in real-time.
Here they’ve taken a major step in that direction by speeding up the rendering 3000X.
This technique isn't actually speeding up the NeRF rendering algorithm.
It bakes the NeRF back to a semi-discrete representation (Octree of Spherical Harmonics voxels) which can render near-identical results at interactive speeds.
The baked data is much larger than the original NeRF model (2 GB vs 5 MB), but it can be downsampled to 30-100 MB with little loss in quality.
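For anyone wondering what a "Spherical Harmonics voxel" stores concretely: a density plus a handful of SH coefficients per color channel, so view-dependent color at render time is just a dot product. A rough sketch (degree-0/1 real SH basis hardcoded; the coefficients are random stand-ins for actual baked values, and a real implementation keeps more degrees):

    import numpy as np

    def sh_basis(d):
        """Real spherical harmonics up to degree 1 for a unit direction d."""
        x, y, z = d
        return np.array([
            0.28209479,      # Y_0^0, the view-independent part
            0.48860251 * y,  # Y_1^-1
            0.48860251 * z,  # Y_1^0
            0.48860251 * x,  # Y_1^1
        ])

    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(3, 4))  # one voxel: 4 coefficients per RGB channel

    view = np.array([0.0, 0.0, 1.0])  # unit view direction
    rgb = coeffs @ sh_basis(view)     # view-dependent color in one dot product
    print(np.clip(rgb, 0.0, 1.0))

However many degrees you keep, the render-time math stays this cheap, which is where the speedup over querying an MLP per sample comes from.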
So if I understand right, for the real-time version, rather than querying the NeRF to compute the frame pixels on the fly, they instead use the NeRF to pre-generate 3D voxel data representing the scene, which can then be rendered in real time using more traditional voxel rendering?
The point about spherical harmonics hits home. You could sample the different harmonics with probabilistic scattering to construct a probability distribution for a signed distance function render, and reuse half of the very solid existing render pipeline.
How do you deal with adversarial/byzantine updates that attempt to degrade performance or even install a backdoor? Do you use plain averaging, or some other aggregation algorithm like Multi-Krum?
For now, the only separation we have is that each worker is responsible for its own weights, since network security has not been our top priority. Still, we've been thinking about adding some security measures like proof-of-work for each node and detection of anomalous inputs/gradients (or simply NaN values). Right now we're running experiments on internal hardware, but before a public launch we'll make sure that malicious participants won't put everybody else's work to waste :)
This is also what I was thinking about. Considering that making up bad data does not require any GPU work, as opposed to the honest computation done by other nodes, the model could fall apart quickly without some measures to deal with adversarial nodes.
A draft solution would be for the central server to measure the goodness of each update and drop the ones that don't perform well. This could work since inference is much cheaper than computing gradients.
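A sketch of that scoring idea (the tiny linear model, the validation batch and the keep-80% policy are all made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(5)  # current model: tiny linear regression
    val_x, val_y = rng.normal(size=(32, 5)), rng.normal(size=32)

    def val_loss(weights):
        """Cheap inference on a held-out batch; no gradients needed."""
        return float(np.mean((val_x @ weights - val_y) ** 2))

    # Proposed updates from workers: nine honest, one garbage/byzantine.
    updates = [rng.normal(scale=0.01, size=5) for _ in range(9)]
    updates.append(np.full(5, 100.0))

    # Score every candidate update and drop the worst 20% before averaging.
    losses = np.array([val_loss(w + u) for u in updates])
    keep = losses <= np.quantile(losses, 0.8)
    w = w + np.mean([u for u, k in zip(updates, keep) if k], axis=0)

This filters out the obviously poisoned update; a patient attacker who submits updates that barely pass the threshold is harder to catch, which is where schemes like Multi-Krum come in.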
The last time a post like this appeared on HN prompted me to write a gist with a simple neural network in Python (with Numpy). It downloads the MNIST dataset for you, trains a fully connected network on it, prints the accuracy on the validation set and plots the loss. It's pretty verbose with plenty of terms and comments to search the web for if you're interested.
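Not the gist itself, but a heavily condensed sketch of the same idea, with random data standing in for the MNIST download so it stays self-contained:

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for MNIST: 1000 "images" of 784 pixels, 10 classes.
    X = rng.normal(size=(1000, 784))
    y = rng.integers(0, 10, size=1000)
    Y = np.eye(10)[y]  # one-hot targets

    W1, b1 = rng.normal(scale=0.01, size=(784, 128)), np.zeros(128)
    W2, b2 = rng.normal(scale=0.01, size=(128, 10)), np.zeros(10)
    lr = 0.5

    for epoch in range(20):
        # Forward: fully connected -> ReLU -> fully connected -> softmax.
        h = np.maximum(0, X @ W1 + b1)
        logits = h @ W2 + b2
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = e / e.sum(axis=1, keepdims=True)
        loss = -np.mean(np.sum(Y * np.log(p + 1e-9), axis=1))

        # Backward: plain chain rule, averaged over the batch.
        dlogits = (p - Y) / len(X)
        dW2, db2 = h.T @ dlogits, dlogits.sum(axis=0)
        dh = dlogits @ W2.T
        dh[h <= 0] = 0
        dW1, db1 = X.T @ dh, dh.sum(axis=0)

        for param, grad in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2)]:
            param -= lr * grad

        print(f"epoch {epoch}: loss {loss:.3f}, accuracy {np.mean(p.argmax(axis=1) == y):.2f}")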
Yes, I'm also using Hetzner and have not run into any limits in what plugins you can install and use yet. They used to call 'Storage Share' just 'Nextcloud', but moved it to the new name for licensing reasons.
I don't know what type of hardware they're running their Nextcloud instances on, but I've been using calendar/contacts and public file sharing (since Firefox Send went down) on the cheapest plan for a while now and it's fine. I also dabbled a bit in Nextcloud's maps, video calling, bookmarks and RSS reader applications and it all works OK, although I didn't stick with them because they're not the most feature-rich. I had to email Hetzner support a few times and got a helpful reply within 1-2 work days, sometimes quicker, which I thought was fine.
I used to be all in on Apple. On macOS I had a little program called Magnet to snap windows to sides and corners, and on my iPad (with external keyboard) I SSH’d into a VPS to write and run code there. I used Alfred and had all kinds of workflows in there. I thought it was great.
But then during my AI studies I wanted some beefier hardware, which was just not affordable for me within Apple’s ecosystem, plus they only used AMD graphics cards. I built a desktop computer that outperformed the top of the line Mac Pro for a fraction of the cost and turned it into a Hackintosh. Two weeks later Mojave came out, and Apple never approved any Nvidia drivers from then on.
My eyes were opened to Linux, and i3 in particular, which looked like Magnet taken to the extreme. What had taken me hours to install and configure on macOS (GPU acceleration for PyTorch, for example) just worked with one package install on Linux. All my expensive apps were replaced with simple, free, much more configurable alternatives. At first I spent a day or two getting things just right. Since then not much has changed because not much needed changing, which I really like.
Now I look at macOS and iOS and cringe at how locked down it all is. Users are very creative in their workarounds to make it work, but it is ultimately quite silly that you need to use special URL schemes and workflows to open a text file across different apps.
I used to be all in on Apple, using their iMac G4 and later the Intel iMacs at home, but needed a beefier machine. So when I started a new job, instead of getting a MacBook, I asked company IT to build me a workstation (and I've kept asking at the companies I've worked for since). I installed Arch Linux and i3 on it, and never looked back. I did run Linux and i3 on the Macs I owned for a while, until I built myself a home desktop and replaced my laptop with a ThinkPad, all running Arch Linux and either i3 or sway as the window manager.
This is the end game for me. Emacs, Alacritty and Firefox on a rolling release distribution. The same configuration, with even the same wallpaper, for the past 10 years, following me around via GitLab.
This is kind of my story too. I stick with Ubuntu LTS just for an added level of confidence that things are compatible, but Arch might be a good idea to look into. The Arch packages are a big plus. Do you find many compatibility issues?
I have found that a lot of people _think_ Ubuntu is safe/stable. But because the Linux kernel is a little behind, it usually isn't. I have had way more stability and compatibility with Arch because everything is always up to date.
> I have found that a lot of people _think_ Ubuntu is safe/stable.
As a 10+ year user of Ubuntu and its derivatives (currently Pop!_OS) I would say I know it is stable. Ubuntu still feels like a polished Debian for the typical computer user, with a lot of pre-configuration done that non-techies and less knowledgeable people will love. I don't like many of the decisions they have made, e.g. favouring snap over apt, the complete lack of a proper GUI for managing installed software/packages, and packages that are sometimes in Debian but missing from Ubuntu (though the deb file can be downloaded and installed with a single command). After the tremendous work done by the GNOME community to tweak gnome-shell, I have not had a freeze/hang-up in three years on Ubuntu. I am also happy having recent Nvidia drivers that work well enough that I have never seen a GPU-related problem in 5+ years. I also had no problem installing drivers for my printer and scanner, which have a dedicated installer (a deb file). I am just a happy user.
Crediting the kernel for Ubuntu feeling stable is wrong, I think. I have tried Arch many times and had many occasions to be unconvinced of its "stability". The most annoying issue I had on Arch was related to sound: when I changed the volume in the Spotify client, it changed the system volume. Imagine having headphones with the volume set at 3-4% and increasing it by 20-50%; a nightmare for ears and heart. After that "feature" I completely removed Arch from my disk and I am not looking back at it or any other rolling distro.
These days I also don't see any unique selling point that Arch had in the past. It is still known for the best documentation and the most recent software, but during the last 2-3 years I don't remember ever complaining about outdated software in Ubuntu. And if I did, there are awesome projects like flatpak, xbps[0] or nix[1] that can be installed without changing my OS and provide everything I could probably gain by using Arch.
For me, what I like about Arch is its package system and how easy it is to get the most recent libraries and tools for gaming.
I also like Ubuntu and I've used NixOS in the past. What I don't miss from Ubuntu are the PPAs; the AUR is much much much nicer. If you plan to do any gaming in Linux, Arch is in the end much more convenient due to the AUR.
And, I'm biased a bit because my installation is always quite minimal. I only have i3 and my tools for programming, and whatever is needed to run proton games.
I have just looked into my extra sources to see if I have any PPAs there, and was surprised to find only one, which comes with Pop!_OS to bring its custom tiling manager and other tweaks.
Frankly, the era of PPAs has ended. I don't find many software packages shared on Launchpad, nor would I trust the owners of most PPAs enough to install anything from them. There are better ways to share software, and it's easier to set up your own APT server. Today, almost all the software distributors I know of (Microsoft, Slack, Sublime, Spotify, Hashicorp...) have their APT repository on their own servers or on packagecloud[0] / JFrog Artifactory[1].
> If you plan to do any gaming in Linux.
To be honest, I find KVM with GPU pass-through a better idea overall (I can use software like Affinity Designer thanks to that). KVM is really top-notch software if you know how to configure it properly :) However, Proton and Wine can be installed without any problem on Ubuntu.
> And, I'm biased a bit because my installation is always quite minimal. I only have i3 and my tools for programming, and whatever is needed to run proton games.
I have a complete range of programming tools and SDKs that can hardly be packed into a 240 GB disk. My current setup is rather small: JetBrains IDEs, Unity3D, Qt Creator/Designer, emacs with orgmode, vim, various chat apps, Spotify, Syncthing, Sublime Text & Merge, rofi, pass, fossil, nginx, flutter, nimrod, nodejs, deno, zsh, golang, chicken-scheme, milena+ivona, orca, backup tools, terraform with terragrunt, multipass, and some dotfiles and private scripts. It takes about 3 hours to completely format the disk, install the OS and provision all the software I need (thanks to terraform). My DE is GNOME with a taskbar (the Dash to Panel extension) instead of a dock. Is it minimal? Likely not, but it does its job.
It's all good! I'm happy that we have many open source distros to choose from. I've used Linux since 1995, and have always had a soft spot for Debian-based distros. For some weird reason, though, Arch has been my main driver for some years, and right now the distribution just isn't an issue anymore. I see no reason to change; what I have works fine for my main tasks.
With Ubuntu you have the choice to install newer kernels and video drivers if you need them and are competent enough to read instructions.
The advantage of stable distros is that you are not surprised with new bugs, feature removals or shiny redesigns.
When I needed a new driver to play some game I just installed an official PPA and got the driver, and if I did not like it I could install the older version.
I've had the exact same amount of major issues using either stable or rolling release distributions. Using one or the other just changes when/why I have to work around the distro.
Arch usually has more small "papercuts" I have to fix myself; often problems around suspend/power management or drivers, in my experience. With Ubuntu and the like, I usually struggle either when I encounter some bug OOTB (which won't get fixed in a timely manner because "stable") or once I need some package that's just not there. Having to figure out what the damn dependency equivalents are for some program I'm trying to compile that only has Arch instructions is never fun.
Yeah, this always gets overlooked for some reason. On other distros I periodically hit issues with something being out of date; on Arch, I almost never do. And on the rare occasion that I do, installing the `-git` package from the AUR fixes it.
What I find annoying about the distro upgrade cycle is that it always automatically disables my 3rd party PPAs and then I have to go and manually re-enable and update them for the new distro version. This is a really poor experience.
I generally prefer rolling distros, but while Ubuntu generally doesn't use the latest kernel, it still provides security-patched versions of the kernel regularly.
My only/major beefs with Arch are the frequency with which they introduce breaking changes, the fact that your system may not work if you miss a news item, and the lack of humility in the community. It works, for sure, but I always felt a bit iffy when doing a massive update after a while, and I felt like I was being dragged along with their choices instead of making my own.
I bought a beefy computer last year and installed Gentoo on it - it's obviously not for everyone, but its reputation as a hard to install distro is overstated, and if you have enough horsepower and ram, an emerge update isn't a big deal. Additionally, having the ability to tailor your builds to not include libraries you don't need AND to install the sources and debug information is huge. Mostly, though, it feels logically designed, and standard sysadmin tools allow you to do maintenance without much hassle.
I'd recommend at least looking into it to see if it fits your use case.
> My only/major beefs with Arch are the frequency with which they introduce breaking changes, the fact that your system may not work if you miss a news item, and the lack of humility in the community. It works, for sure, but I always felt a bit iffy when doing a massive update after a while, and I felt like I was being dragged along with their choices instead of making my own.
This was my experience with Arch as well. It left me feeling like I needed to check the wiki to see if there were any new warnings before updating.
I switched to distros that release every ~6-12 months, and I have found my environment is much more stable. Currently on Fedora, but considering trying out Suse Leap.
I've been running Suse for a couple of years on a lot of my kit, and it's been pretty worry free once I learned the deltas from my previous distro (e.g. zypper instead of apt). The only issue I've had is that once you get beyond the general desktop productivity environment it's a bit of a second class citizen. Usually not an issue, but make sure your favorite workflows/apps come over painlessly.
It has been much better for the past 5+ years, but what I did with my workstation is install Arch onto a btrfs file system. Now when I `pacman` whatever, it creates a snapshot first and then runs the updates. If anything breaks, I go to the grub menu, boot into the previous snapshot and roll back to that state.
Never needed it yet, but it's good to have. I remember the problems in the past...
You can also do it with zfs, but I didn't want my filesystem to depend on a DKMS module.
I do exactly the same thing. Snapper is amazing. I've used it once, but only because it was a bit more convenient than downgrading the one app that got messed up.
I also used it recently when I tried to get the new Assassin's Creed to run. I knew it was a bit of a crapshoot, so I took a snapshot before I started compiling and installing the Git versions of graphics and translation libraries. After a couple of hours I realized it wasn't going to work and just rolled my root directory back to before I started throwing packages all over my system. It was very satisfying. :D
If you haven't used the computer in a while, like my office workstation while needing to stay home, a `pacman -Syyu` package upgrade might be painful after six months.
Otherwise it's the distro I have the least amount of problems with. I specifically don't miss the PPAs from Ubuntu; with dist upgrades they are quite painful.
I started on gentoo, went to arch, then to ubuntu LTS.
Both switches were for the same basic reason: packages broke just often enough that it wasn't worth the benefits of having a simple/clean system or bleeding edge libraries.
My switch to ubuntu was maybe 7-8 years ago, though, so I can't speak to whether this has changed. I just know that I used to need to spend 1-5 hours fixing an esoteric X.org problem (or similar) every few months on arch and that I don't need to do this on ubuntu.
I stay on ubuntu because of its critical mass of users. If there's a prebuilt package for something, it's probably a .deb that's compatible with ubuntu and debian. And PPAs are nice. I don't like where they're going with snap at all (I disabled snap and added flatpak support instead) and would probably switch if I thought another distro offered the same benefits.
I will never, ever forgive Apple for removing two-dimensional virtual desktops (I think they called it "spaces"). Then just to spite all of us, a few releases later they made it impossible for TotalSpaces (a hack that gave us back this functionality) to do the kernel code-injection wizardry that they sold for the incredible bargain price of $12 or something like that. Because, you know, "security".
That was the end of my Apple decade.
Now I use sway, which is an astonishingly-faithful (like, down to the config files) wayland clone of i3.
Exactly this. The reason I nowadays only use open source software is that I know my experience stays the same for good. I use i3 because it works just like it worked 10 years ago. I don't need a new experience with every release, new graphics, new ways to use my computer. I just need a utilitarian desktop with my editor, browser and a terminal.
This functionality does still exist; it's just changed a bit. When you full-screen an app in macOS Big Sur, you can add another app side by side with it in its space.
It was better before as just Spaces, though. Windows does this right. If you add PowerToys you can even go beyond the i3 tiling options, but I'd argue i3 is a bit more naturally usable than the PowerToys desktop stuff.
I think the biggest contribution i3 has made to my computing life is helping me recognize just how superfluous most of what I normally did on my Windows and Mac computers was.
Take away a lot of the chrome and it’s incredible how much can be achieved via simple text files and unfussy Linux tools.
As an example, my GTD workflow (before I switched to a physical bullet journal) went from a variety of apps to a single i3 workspace that auto-opened with 5 terminals, each containing a single markdown file opened in vim.
The leftmost column contained the processing file from top to bottom. The middle column had Doing and Follow up files.
The rightmost column had Done and Projects files.
My done would be archived each week with a script and a new done file opened for the upcoming week.
If there were any other major files I wanted (for example, I maintained a long-term projects file), they would be opened in a different buffer in the vim instance containing the Projects file.
Moving items from one file to another was as simple as quickly navigating to that file, using a shortcut to yank or cut the line to the global clipboard, and then pasting it into the file I wanted to move it to.
I suspect it would have been trivial to write a script that would have automatically moved the entire line over with a single shortcut (see the sketch below).
There is absolutely no additional overhead and it works extremely fast and well.
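Something like this, presumably; a sketch where the file paths and the line number would come from vim (all the names here are hypothetical):

    import sys
    from pathlib import Path

    def move_line(src, dst, line_no):
        """Cut line `line_no` (1-based) from src and append it to dst."""
        src, dst = Path(src), Path(dst)
        lines = src.read_text().splitlines(keepends=True)
        item = lines.pop(line_no - 1)
        src.write_text("".join(lines))
        with dst.open("a") as f:
            f.write(item)

    if __name__ == "__main__":
        # e.g. invoked from a vim mapping that passes the current file and line
        move_line(sys.argv[1], sys.argv[2], int(sys.argv[3]))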
I am still on macOS but I totally agree with your post. macOS and iOS are seriously locked down and it can be very frustrating.
Like you I used Magnet, but wanting something more tiling-window-manager-like I switched to Amethyst[1], which, while not as powerful, is pretty good for what it is.
Sadly macOS has frustratingly laggy window resizing for a lot of things. Finder and Safari are usually fine, but pretty much everything else feels laggy to resize. macOS just has an absolutely terrible window manager in general, so a tiling window manager is a godsend.
I mostly use macOS because I have been doing iOS and macOS development, and so what else are you gonna use? But I have to admit I want to move away from the platform. It is all 'very exciting' right now with all the Apple Silicon hype, and yeah, it genuinely is impressive, but I kinda want to get back to something more open even if it doesn't have a lovely, highly efficient and performant custom SoC driving it.
I am like 90% there in convincing myself to build a nice AMD desktop next year and moving away from macOS as my main OS. Luckily I am not dependent on any Apple services so switching shouldn't be that bad.
yeah, I agree that macOS seems to have the worst window management, both in design and implementation (particularly performance).
i3 sounds like a dream window manager to me, but in the meantime, while I use macOS (& Windows), I've been using yabai, which is heavily inspired by i3 but limited by macOS.
i3 aside, the continuation of your story for me is: use a Linux machine, be decently productive, but nothing works quite the way it should.
Change your keyboard layout to one of your own? Eh, maybe xmodmap, oh but then it gets reset every so often because $HISTORICAL_ARTIFACT. You have to change the actual xkb mapping, which is very much not a “happy path” in any DE.
The only graphical e-mail client that actually seems feature-complete is Thunderbird, and for _years_ I have had to race to disable the global search within 10 seconds of starting it, otherwise the program just freezes up. I don’t even know what to use for calendar stuff.
I would really love it if Linux was viable for me as a desktop OS. It just isn’t, because of things like these.
Not sure what you're trying to do with keyboard layouts. I've yet to encounter, in any other OS, the flexibility for keyboard layouts that you have in Linux.
Regarding email clients, again I'm not sure what you consider feature-complete, but even some proprietary options like Mailspring or Hiri exist on Linux.
For calendar it again depends on what you are trying to do. I've recently started using Minetime, which is not OSS but free and is working very well for me.
One of the weird phenomena I find in these discussions is that people are often happy for proprietary software (like OSX or Windows) to restrict what they are able to do, and to adjust their workflow to those systems, but when OSS can't match the previous workflow, that suddenly makes the system unsuitable. (Note that this is not aimed at you directly, just a general observation.)
Can't speak to your Thunderbird issues, never used it, but as for keyboard issues?
My spouse uses my main machine for basic stuff; she has her own account. I use Dvorak, she uses QWERTY. About 75% of the time when the layout is switched back from QWERTY to Dvorak it becomes completely fubar, and all layouts output nonsense garbage until a restart. Because the layouts are broken, I can't save any of my terminal work.
I'm still pretty on the fence about things, but honestly I'd say Linux and OS X are pretty even as far as shit breaking, and IMO Linux is a little bit better. I remember the old days of sudo apt-get update && sudo apt-get upgrade & pray, but stuff is pretty stable even on distros like Arch these days.
Every time there's a major update for OS X, I already know it will definitely break something for me. Catalina broke most of my game collection that could run on OS X to begin with, and I'll be waiting at least another couple of months before upgrading to Big Sur.
This isn't to try and invalidate your experience, but mine has been: I switched to OS X because things were more stable and I needed a solid work machine, but Linux is getting more stable while my faith in Apple's software quality drops dramatically every year.
Changing the xkb mapping is not a happy path? It's perfectly portable to any system running X11, and you can define your own shortcuts to quickly switch maps with setxkbmap... It's the backbone of my workflow with a 60% keyboard.
I haven't had that experience. I have had the slightly annoying experience of mapping keys using tools for X, and then realizing some time later that when in the console those mappings vanish. Of course, there are tools for remapping keys in the console, but they are a little less user friendly.
In the end, I got a keyboard that incidentally had functionality to remap keys built in (I wanted the keyboard already anyway, but the remapping feature was a nice bonus). This ended up routing around the problem entirely, and seems like a generally better solution. Now my remaps survive all the way to Windows, for my rare trips over there.
Your complaint about the xkb/xmodmap/whatever situation is dead on, so my solution has been to simply go with a programmable keyboard. Don't have to remap capslock -> ctrl anymore - no matter what distro I use, capslock is always ctrl, right from the get-go.
I'm jealous of you and others in this thread that shared this path. I've unfortunately been dragged in the other direction: I was a happy i3/Linux user and now I'm coping with a multi-monitor Mac setup which stubbornly refuses to support workflows which I find to be very fast and efficient.
There are some tools (all $$) for Mac which attempt to replicate a tiling WM, but Apple's support for this is extremely poor, which limits the quality of the tools.
I'm no longer using a Mac (for employer reasons), but I had a lot of success with Yabai (free/open-source) before I switched. https://github.com/koekeishiya/yabai
I haven't disabled SIP yet. As a Mac novice, I'm wary of breaking something required for work (and in a pandemic WFH to boot) that I can't fix. But it looks like it's optional.
Amethyst is nice but has odd and unfortunate edge cases. The use of the Accessibility APIs limits what it can do. For example, keybinds stop working if the focus is on an empty screen with no windows. Occasionally it "loses" a window and you can't cycle to it any more, even if you focus it manually. I also don't think it can throw windows to other workspaces, but maybe that's changed?
For very simple i3-like configurations (single workspace+monitor, always fullscreen or side-by-side), it's not bad. It's limited by poor support from Apple.
How did you get used to the different modifier key layout? I miss the cmd key every time I try to switch to Linux. I tried rbreaves/kinto, but it was not working well for me, so I decided to keep the native layout.
It took me about 2 weeks to get kind-of used to the different modifier keys, but it did not feel the same.
But the thing that made me switch back is when I was buying a laptop. I was set on not going with Apple anymore after my previous macbook screen died 2.5 years in and they were asking almost 1000€ to fix it.
But I could not find an option that would match the 16" MacBook. I was considering build quality, screen size and resolution, battery life, USB-C charging, trackpad experience and the overall software support that I would expect from a daily driver for my web development work.
I was seriously considering the dell xps 15 9500 that launched this year, but I saw a lot of feedback about generally poor battery life on linux and some cases of trackpad problems.
I was thinking about getting a second, cheaper device like the System76 Pangolin to see how it works, and using that as a transition to a different platform instead of going all in by immediately switching my daily driver.
Waiting to get an M1 with a custom motherboard, a fast GPU and Arch Linux installed. I doubt that'll happen with Apple, but I suspect in a few years we'll have some other ARM CPU with integrated RAM that competes with the M1.
For now, a Ryzen runs circles around the M1 (at least with my workloads: C/C++/Rust compilation needing 64+ GB of RAM) if you're willing to use some more energy.
On Mac there’s a window manager called yabai which I’m using and which works very well. It’s a bsp (binary space partitioning) window manager; for some advanced uses, like moving windows to other desktops, it requires SIP (System Integrity Protection) to be disabled, but for the basic stuff it works much, much better than Magnet or Amethyst.
Got it yeah, this is probably why I was confused. URL schemes are the only way to do a lot of things on iOS, whereas (as far as I know) they're additive to the *nix ways of doing things on the Mac.
Is it easy to reconfigure keyboard shortcuts that are cmd+C on Mac but ctrl+C on Linux? I have spent way too much time trying to do it on Ubuntu, and it has made it hard for me to make the switch.
In macOS it is not entirely clear to me what category of things the cmd key is for. On one hand it's used for keyboard shortcuts related to window management and the system, but on the other hand it's also used to issue commands to the application itself.
In i3 you assign a $mod key, which is then always and exclusively used for 'system and window management'. Opening, closing, moving and resizing windows, switching workspaces and showing toolbars are all done with some key combination that starts with $mod.
Since reading and writing the clipboard is done by the application, in my brain it should not involve $mod. So I use ctrl+c and ctrl+v for all applications. The one exception is the terminal, in which ctrl+c terminates programs, so there I use ctrl+shift+c. I also unified the register vim yanks to with the system clipboard, so for me most copying and pasting in the terminal is done via `y` (yank) and `p` (paste).
To answer your question more directly: yes, you can remap everything to bits. You can modify what keypress each keycode triggers, so there is really no limit to what you can do there.
I'm with you, I find the meta key on mac messy.
I personally change ctrl+shift+c and ctrl+shift+v to ctrl+c and ctrl+v in my terminal emulator so that it's consistent across the board. This way it's clear:
* meta key: my windows manager shortcuts
* ctrl: app specific shortcuts
Cmd isn't really a meta key, though. It's just the equivalent of Control on Windows; there's no equivalent of the Windows key on macOS. That said, one isn't quite as necessary there due to the decoupling of app windows and app open/close state.
Personally I find Apple's approach to key commands far and away the most well thought out and intuitive.
Biggest hurdle is finding a clean way to do application-specific overrides, because even if you can get the general case working right, there is a lot of per-app variation.
These are exactly the type of silly applications that are needed on macOS because the OS is too locked down, or at least not made for people who like to do this type of stuff.
On Linux, almost all native applications keep their data in regular, non-proprietary files on the file system. This means your data always exists separate from the applications and you can do with it what you want. If you want to run a script every Monday at 3PM that sifts through your ~/downloads folder and uploads the top 10% largest files to a server, except in January, on Easter, when your drive is below 70% capacity, if it's not hooked up to power, and also not if Kanye West tweeted something in all caps, you can do that. You can make it retry every 42 minutes if conditions weren't met, email Bob and message Alice in Slack about it. You can make it machine-learn your behavior and pick a window within 3 hours after sunrise, log the bandwidth usage, and have it play a tune for your friend's MPD server on the other side of the country.
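To make that concrete, here is the boring core of such a script; a sketch that assumes a flat folder, with the upload and all the fun conditions stubbed out:

    from pathlib import Path

    def top_largest(folder, fraction=0.10):
        """Return the top `fraction` largest regular files in `folder`."""
        files = sorted(
            (p for p in Path(folder).expanduser().iterdir() if p.is_file()),
            key=lambda p: p.stat().st_size,
            reverse=True,
        )
        return files[: max(1, int(len(files) * fraction))]

    for f in top_largest("~/downloads"):
        print(f"would upload {f} ({f.stat().st_size} bytes)")
        # rsync/scp/HTTP POST goes here; cron or a systemd timer handles
        # "every Monday at 3PM", and the Kanye check is left as an exercise.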
Why you would want to do all of this is beyond me, but odds are you'll want to do one of these things at some point. Then Hazel won't be able to do it because they didn't consider your use case, that functionality is only available in version 6 (pay up for the upgrade!), Apple didn't provide the right API, or they recently took it away in the name of security. Surely there's another app that can do it, but that one can't do the other thing! Oh well, maybe the next version?
On Linux, you dream up the code in the language of your choice and it will work forever. Your learned skills will transfer one-to-one to programming you do for your job or hobby, instead of being weirdly specific and locked down to this one, overpriced app's clickity click interface. The OSS mentality of interoperability and doing one thing well almost guarantees there are battle-tested tools available that can solve pieces of your specific puzzle, while not locking you in to their way of doing things.
Of course scripting is not exclusive to Linux, but it is the only OS that was and is built with it in mind.
That's a remarkably off-base take. You're treating qualities that Linux has as things that Linux exclusively has. Your entire rant about emulating the functionality of Hazel with scripts describes something you can do on either operating system. Hazel exists because it lets people who have neither the time nor the disposition to write their own program for their own use case still get a lot of the benefits of doing so. In fact, even those who do have a proclivity for scripting might rather do something else with their time.
> On Linux, you dream up the code in the language of your choice and it will work forever.
...I'm sorry, how is this exclusive to linux, exactly? You can do the exact same thing in MacOS, Windows, FreeBSD, etc. Also, maybe some people just don't want to fucking program every function and application they run on their operating system?
> Your learned skills will transfer one-to-one to programming you do for your job or hobby
While I understand that there exists a much higher population of programmers on HN than on other websites, please stop assuming that 1) everyone here is one and 2) that everyone wants to spend their time writing software in their spare time and 3) even if they do like programming in their spare time, that they want to spend it writing automation scripts.
> instead of being weirdly specific and locked down to this one, overpriced app's clickity click interface.
What is overpriced to you is potentially someone else's pennies. I don't really think the jab at GUI interfaces is appropriate, either.
> Of course scripting is not exclusive to Linux, but it is the only OS that was and is built with it in mind.
There was an opportunity to help someone make the switch from MacOS to Linux, but instead it was used to call apps that the person probably likes "silly" and "clickity click". It's more interested in making a point than helping.
I emphatically don't want to run any scripts. I get enough programming at my day job. The last thing I want is to spend all of my free time programming and debugging scripts.
I'm not a MacOS user so I don't know all that much about Keyboard Maestro's feature set, but with i3 all of its keybindings are configured through the config file, so you can map any keybind you want to invoke an arbitrary program. It won't come nicely prepackaged, but you can do a whole lot just by using the included tools and i3-msg, which lets you manage i3 windows programmatically.
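For example, with the i3ipc Python library (a real library; the specific commands below are just an illustration) you can drive i3 from scripts and bind those scripts to keys in the config:

    from i3ipc import Connection  # pip install i3ipc

    i3 = Connection()

    # The same command strings you would put after `bindsym` in the i3 config.
    i3.command("workspace number 9")
    i3.command("split h")
    i3.command("exec alacritty")

    # The window tree is queryable too, e.g. to see where focus currently is.
    focused = i3.get_tree().find_focused()
    print(focused.name, focused.workspace().name)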
I wrote and maintain a FOSS Hazel-like tool called pushbroom [1] that works on both macOS and Linux. It’s worked extremely well for me for several years.
Aside: I just started using KM to implement Gmail shortcuts in Mail.app because I'm sick of the direction of third-party mail applications, and I'm super-impressed by it. Seeing the two mentioned in the same breath made me think Hazel must be important, and it solves a great need for me as well. Thanks for the tip!
If you're asking for a recommendation: Ubuntu, Fedora or Elementary. Manjaro is good if you're okay with occasional instability (due to rolling releases).
My experience over the past month, after switching from Windows, has been much the same. The Linux desktop is better.
I don't mean to bring back the forum lifestyle of the 00s, but this is just plainly FUD. Linux does not cause hardware issues like this with any frequency (of course something may occasionally fail, but hardware eventually wears out under any software). I've installed Linux, and helped others install Linux, on dozens of machines over my life. I've worked with others who install Linux on everything they touch. I've literally never heard of a story like this.
So it is POSSIBLE that Linux caused your issue. Your computer repair guy, however, is probably just wrong. And I would personally guess that your damage was due to power or environmental issues rather than Linux.
Any computer repair shop that says "Linux destroys computer hardware" and that it's "not allowed in his shop" is clearly trying to scam you.
Also, if the computer stopped working entirely and did not even boot, how did you spend hours fighting it?
If you have malfunctioning CPU/RAM, it should not be possible for you to "try to reinstall linux". What would happen, I think, is a failure to boot and your laptop playing a certain sound as a diagnostic message.
Either way, you should refrain from making comments like "avoid Linux like a deadly disease"; clearly you've messed up the installation process. You could have just chosen a one-click-install Linux distribution.
If you don't know how to make a chair, don't try to make yourself one. And if you do, don't blame the people that gave you the wood. It wasn't even sold to you; you grabbed some free wood.
I've been using linux for over a decade and never even _heard_ of this or something similar happening. AFAIK the only typical way to configure these things is through the BIOS, well below the OS level. It sounds like something got borked and the computer repair shop decided it was easier to blame linux than to figure out or explain what happened.
It would be truly impressive if the thousands of developers of the Linux kernel had, over the course of 30 years, managed to create an operating system that physically destroys your computer. I've half a mind that this comment is meant to be humorous.
Anyway, sounds like your computer repair place was looking for a quick buck. At the very least, they have no idea what they're talking about.
> It would be truly impressive if the thousands of developers of the Linux kernel had, over the course of 30 years, managed to create an operating system that physically destroys your computer.
It used to be possible for malware to damage CPU, RAM, hard-drives, and CRTs deliberately, but the relevant techniques were fairly hardware-specific to begin with, and modern hardware is (for the most part) no longer vulnerable to these classes of attack[0].
But, I guess we don't really know what year this anecdote supposedly occurred.
[0] The one modern exception I'm aware of would be flash memory (aka SSDs), which can still be deliberately worn out with a little effort.
Let me add my own anecdata: I've been running linux on machines since 1998, and I've never experienced a failure like the one you describe. In fact, the only hardware failures I've experienced have been external (usb) hard drives failing after a few years, and a laptop keyboard failing after someone spilled tea on it.
In terms of what the repairman told you: when I called Compaq's support to get help with the keyboard, once the person on the other end of the line heard me say I was running linux on it, he said "that's your problem". I had to reiterate that my problem was someone spilling tea on it, and that I had been running linux on this particular laptop for almost 2 years without any keyboard issues, thank you very much. In the end, Compaq told me they didn't have spare parts, and the keyboard started working again after letting the machine dry out for a few days (that was the last time I bought Compaq though!).
My point is: computer repair and tech support people are almost programmed to find something they're not responsible for that could explain your problem. I don't hold it against them, I think it's a normal bias, but I would never take linux advice from a repair person, unless they work at a linux shop.
I can't tell if this is poor satire or trolling, because I REALLY don't want to believe there's someone on HN that actually thinks any of that is even REMOTELY true.
I’ve been running Linux on and off on a variety of hardware since the early 2000s and it has never caused any hardware damage. Also, if "Linux kills hardware", do you really think cloud providers like AWS would be running Linux on all of their hardware? The only hardware failures I’ve ever had to deal with were when running Windows.
Sounds like you have a computer repair guy who doesn’t know anything about Linux and is bullshitting you.
I also don’t understand how your installation process was painful. Installing Ubuntu or Manjaro (the two I’ve installed most recently and most frequently over the past few years) is simply a few clicks for me and everything’s done. It really wasn’t any harder than the last time I installed OSX or windows.
If you look at their post history here and their GitHub profile, it's pretty obviously a troll. They claim in this (copypasta-style?) post that Linux destroyed their computer, but in another comment say how much they love bash programming. In one comment they grew up playing Roblox; in another they have decades of experience across multiple programming languages including C++, Rust, Lisp, and Python.
I studied piano full-time for a short while. My favorite thing about it was how working on my technique or a particularly tricky rhythmic pattern tickled my brain. I often found that I went through a few phases: “ahrg, this is impossible!”, “getting there but sloppy” and finally, often suddenly (after a good night’s sleep) “this is easy! I don’t know how I was ever not able to do this”.
Using vim and learning new features tickles my brain in kinda the same way. I don’t care much for the hacker cred. It’s a fun bit of variety in my work day to practice some little feature that I read about, and it doesn’t look like I’ll run out of new stuff to check out anytime soon.
I never had the words to express that feeling. Tickling the brain is exactly the experience. I also love learning a new trick in Vim even after all these years.
I love the related "I know kung fu" feeling (from The Matrix) when you realize that what was very difficult, or almost impossible, is now doable.
Great feeling.