> My dotfiles are private for now cause I need to clean some commits
I had to do something similar. I ended up deleting the git history and just recreating it before pushing. The best thing was to add a dependency on `~/.secrets` or a similar untracked file, which is basically just a source-able script that defines things like API keys, private URLs, etc.
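A minimal sketch of that pattern; the variable names and values here are made up:

```sh
# ~/.secrets -- untracked, never committed.
# Hypothetical names/values, just for illustration.
export GITHUB_TOKEN="ghp_xxxxxxxxxxxxxxxx"
export INTERNAL_GIT_URL="https://git.internal.example.com"
```

Then a tracked dotfile (e.g. `.bashrc`) sources it only if present, so the repo still works on machines without it:

```sh
# Pull in private values if the file exists; skip quietly otherwise.
[ -f "$HOME/.secrets" ] && . "$HOME/.secrets"
```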
I thought about recreating the repo from scratch too. I was transferring from an Intel to an ARM Mac and I keep a few tags/commits related to previous configs, so it's hard to just let it go :)
You weren't kidding. I was amused by the humor in the first few minutes, but then I got to its showcase of what you can do and was even more blown away. They weren't kidding about doing _just about everything_ pretty well.
> every time it's like I'm reading it for the first time. I can only remember the "mood" so to speak,
I am like this with a lot of books. I'll remember a very high level overview ("The Historian is about a modern day hunt for Dracula, and it's really cool, and I liked how the story was told, but I can't remember why or any of what happened."), but can't remember much about plot details.
It makes re-reading things fun, but it's also frustrating because I can't explain why something was good, and I remember just enough that plot twists don't surprise me the second time. It also means I completely forget about the "bad" parts of the book, or the parts that didn't resonate with me.
I appreciate you laying it out like that. I've seen these NVMe NAS things mentioned and had been thinking that the reliability of SSDs was so much worse than that of HDDs.
SSDs are limited mainly by write cycles, whereas HDDs literally spin themselves to death. In simple consumer NAS usage, like photo backup, that basically means SSDs will last forever. Meanwhile, those HDDs start running on borrowed time at 5-8 years, regardless of write cycles.
I have had two SanDisk 2.5-inch SSDs just suddenly fail. No warning that I could discern, and no way to recover afterwards. Both failures happened while the drives were running Debian variants as the / partition; luckily I keep /home on a separate partition.
Any idea what that failure mode could have been? It worries me tremendously to keep data on an SSD now.
Training AI models uses a large amount of energy (according to what I've read / headlines I've seen / etc.) and increases water usage. [0] I don't have a lot to offer as proof, merely that this is an idea I have encountered often enough that I was surprised you hadn't heard of it. I did a very cursory bit of googling, so the quality + dodginess distribution is a bit wild, but there appear to be industry reports [2, page 20] that support this:
"""
[G]lobal data centre electricity use reached 415 TWh in 2024, or 1.5 per cent of global electricity consumption.... While these figures include all types of data centres, the growing subset of data centres focused on AI are particularly energy intensive. AI-focused data centres can consume as much electricity as aluminium smelters but are more geographically concentrated. The rapid expansion of AI is driving a significant surge in global electricity demand, posing new challenges for sustainability. Data centre electricity consumption has been growing at 12 per cent per year since 2017, outpacing total electricity consumption by a factor of four.
"""
The numbers are about data center power use in total, but AI seems to be one of the bigger driving forces behind that growth, so it seems plausible that there is some harm.
The USA uses 21.3 TWh of petroleum per day for transportation. Even if AI were fully responsible for all data center usage (it is not even close), we're quibbling over about 20 days of US transportation oil usage, which actually does have devastating effects on the environment.
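Back-of-the-envelope, using the 415 TWh/year figure quoted above:

$$\frac{415\ \text{TWh/yr}}{21.3\ \text{TWh/day}} \approx 19.5\ \text{days}$$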
Data centers are already significant users of renewable electricity. They do not contaminate water in any appreciable amount.
There's an "AI is using all the water" meme online currently (especially on Bluesky, home of anti-AI scolds), which turns out to come from a study that counted hydroelectric power as using water.
As an example, Ren and his colleagues calculated the emissions from training a large language model, or LLM, at the scale of Meta’s Llama-3.1, an advanced open-weight LLM released by the owner of Facebook in July to compete with leading proprietary models like OpenAI's GPT-4. The study found that producing the electricity to train this model produced an air pollution equivalent of more than 10,000 round trips by car between Los Angeles and New York City.
(https://news.ucr.edu/articles/2024/12/09/ais-deadly-air-poll...)
> The study found that producing the electricity to train this model produced an air pollution equivalent of more than 10,000 round trips by car between Los Angeles and New York City.
I am totally on board with making sure data center energy usage is rational and aligned with climate policy, but "10k trips between LA and NY" doesn't seem outrageous on its face to me.
Isn't the goal that these LLMs provide so much utility that they're worth the cost? I think it's pretty plausible that efficiency gains from LLMs could offset 10k cross-USA trips' worth of air pollution.
Of course, this excludes the cost of actually running the model, which I suspect could be far higher.
> 10,000 round trips by car between Los Angeles and New York City.
That seems like very low impact, especially considering training only happens once. I have to imagine that the ongoing cost of inference is the real energy sink.
It doesn't happen only once. It happened once for one version of one model, but every model (and there are others much larger) has its own cost, and that cost is repeated with each version as models are continuously retrained.
Thank you for painting it that way. As someone who has normally done back-end stuff in Django, the ORM magic is deeply ingrained for me. I was about to ask what one should use for an ORM, but looking at the Hono examples is pretty helpful. It looks like Prisma is one good example of what I was looking for :D
The intention of "minimum wage" in the US is not merely subsistence level. FDR said, "by living wages, I mean more than a bare subsistence level - I mean the wages of decent living." [0]
The "iron law of wages" is instead an economic principle that wages tend to trend downwards until people are paid the minimum possible for subsistence. It's not meant to be a goal.
There are ten million stated reasons for the minimum wage! Pretty much the only one that economists can agree on is that it's helpful to prevent abuses of monopsony power in small towns.
Regardless of what FDR said, a living wage is guaranteed because people will not accept anything lower.
The problem with a "decent" living is that reasonable people can disagree about what that looks like. Roommates? Children? An unemployed spouse? Vacations and retirement?
It's not the government's job to guarantee all of that stuff and I would rather we focus on stopping wage and tip theft and protecting the rights of workers (banning noncompetes, decoupling health insurance, etc.) instead of increasing the minimum wage towards some poorly-defined goal.
There's also the other side of the minimum wage debate, which is that most of the specific numbers people list as "liveable" do actually result in some folks losing their jobs and becoming unemployable. There was even a recent Berkeley study that showed this!
I really like that your page talks about _why_ a Hilbert curve is good. I don't remember ever learning about those before, and now hopefully if I'm ever trying to visualize 1D data, I might remember that :)
This seems like it would be solved completely by printing the error message ("option b unknown") and then also printing the "--help" text. You can see the error in the first line(s), so `head` will make it easy to spot, and anyone trying to read the help text will still get it.
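A rough sketch of that pattern (`valid_option` and `print_help` are hypothetical placeholders for whatever the tool actually uses):

```sh
#!/bin/sh
# On an unknown flag: print the error first, then the full help text,
# both to stderr. The error sits on the first line, so
# `mytool --bogus 2>&1 | head` surfaces it immediately, while the
# usage text still follows for anyone who reads on.
if ! valid_option "$1"; then
    echo "error: option ${1} unknown" >&2
    print_help >&2
    exit 2
fi
```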