BearOso's comments | Hacker News

Programmers aren't good at checking whether a name is already taken. We've been through this particular one before: Phoenix (now Firefox) had to change its name because of Phoenix Technologies, then again because of the Firebird database project (itself a fork of Borland's InterBase).

> Front loading

That's the problem. Front-loading washers have generally been a terrible invention: unbalancing and mold are among their widespread problems. The actually reliable washers are still top-loaders.


I've always wondered, since we only have front-load washers here in the UK: is there some sort of advantage to them aside from space, which seems to be the obvious one? For instance, does gravity help batter the clothes around when the drum spins slowly enough that they can fall from the top of the drum?

Front loaders are gentler on clothes, use a lot less water, use a lot less energy, and spin faster in the spin cycle so there is less work for your dryer if you use one.

Top loaders are easier to load and unload, cheaper, and slightly easier to maintain.

With front loaders you should wipe the gasket after use because water left in its folds can promote mold and odors. With both you should leave the door open when not in use so air can circulate in the drum. With a front loader the open door can get in the way and is easier to accidentally close.

Front loaders are easier to stack.


Interesting, thanks. I had no idea about much of this; I was aware of the door/mould thing, and of stacking, though that's not something I've personally seen done here in the UK.

As a "typical" British household, we don't use a dryer, don't even own one in fact, we just hang our clothes to dry, which always struck me as ironic for such a humid, cold country, with smaller (than the US) homes and thus less space to hang stuff to dry.


It's seemingly an experiment to see how an LLM performs when the task is just outside of its milieu. The answer is not very well.


The terminal screenshots are terrible, too. They're using a non-monospace font and the kerning is messed up, making everything double-wide.


But how do you know you're getting the correct picture from that throwaway UI? A little while back a blog post was submitted here in which the author praised AI for his vibe-coded earth-viewer app that supposedly used Vulkan to render inside a GUI window. Unfortunately, that wasn't the case: the AI had just copied code from somewhere and inserted a rudimentary software renderer. The AI couldn't do what was asked because it had seldom been done. Nobody on the internet had ever discussed that particular objective, so it wasn't in the training set.

The lesson is that these are "large language models." They can regurgitate, textually, what someone else has done before, but they can't actually create something novel. So it's fine if someone on the internet has posted or talked about a quick UI in whatever particular toolkit you're using to analyze data, but they'll throw out BS if you ask for something brand new. I suspect a lot of AI users are web developers who write a lot of repetitive, rote boilerplate, and that's the kind of thing these LLMs really thrive on.


> But how do you know you're getting the correct picture from that throwaway UI?

You get the AI to generate code that lets you spot-check individual data points :-)

Most of my work these days is in fact that kind of code. I'm working on something research-y that requires a lot of visualization, and at this point I've actually produced more throwaway code than code in the project.

Here's an example: I had ChatGPT generate some relatively straightforward but cumbersome geometric code. Saved me 30-60 minutes right there, but to be sure, I had it generate tests, which all passed. Another 30 minutes saved.

I reviewed the code and the tests and felt it needed more edge cases, which I added manually. However, these started failing and it was really cumbersome to make sense of a bunch of coordinates in arrays.

So I had it generate code to visualize my test cases! That instantly showed me that some assertions in my manually added edge cases were incorrect, which became a quick fix.
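In case it's useful, here's roughly the shape of that throwaway visualization code. This is a purely illustrative sketch: the function name point_in_polygon, the polygon, and the test points below are stand-ins I made up, not the actual project's code or data. The idea is just to draw the geometry and color each test point by whether its assertion holds, so a wrong expected value jumps out immediately.

    # Illustrative sketch only: plot point-in-polygon test cases so bad
    # assertions stand out visually. point_in_polygon stands in for the
    # real (AI-generated) geometric code.
    import matplotlib.pyplot as plt

    def point_in_polygon(point, polygon):
        """Standard ray-casting test: cast a ray to the right and count edge crossings."""
        x, y = point
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                # x-coordinate where this edge crosses the ray's height
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    # Made-up polygon (a square with a notch) and (point, expected) test cases,
    # including the kind of hand-added edge cases that turned out to be wrong.
    polygon = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
    cases = [((1, 1), True), ((3, 3.5), False), ((2, 1), True), ((5, 5), False)]

    # Outline the polygon, then mark each point green if its assertion holds, red if not.
    xs, ys = zip(*(polygon + [polygon[0]]))
    plt.plot(xs, ys, "k-")
    for pt, expected in cases:
        ok = point_in_polygon(pt, polygon) == expected
        plt.plot(*pt, "go" if ok else "rx", markersize=10)
        plt.annotate(f"expect {expected}", pt)
    plt.gca().set_aspect("equal")
    plt.show()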

The answer to "how do you trust AI" is human in the loop... AND MOAR AI!!! ;-)


> Nintendo games even temporarily dropped music voices (typically preferring the second square wave voice, then the first) to play sound effects.

This even happened on the SNES. A few Square games like Chrono Trigger and Final Fantasy 6 have tracks that noticeably drop a music channel while sound effects are playing. It wasn't strictly necessary to use all the channels, but Square was very adamant about its music quality.
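The general trick looks something like the sketch below. It's a toy illustration of voice stealing, not any actual game's sound driver: the channel names are the NES APU voices and the preference order (second square wave, then the first) comes from the quoted article; everything else is made up for the example.

    # Toy sketch of "voice stealing": a sound effect temporarily borrows a music
    # channel and the music driver gets it back afterwards. Purely illustrative.
    SFX_STEAL_ORDER = ["pulse2", "pulse1"]  # prefer square 2, then square 1

    class Mixer:
        def __init__(self):
            self.stolen = set()  # channels currently on loan to sound effects

        def play_sfx(self, name, frames):
            # Take the first preferred channel that isn't already stolen.
            for ch in SFX_STEAL_ORDER:
                if ch not in self.stolen:
                    self.stolen.add(ch)
                    print(f"{name}: music muted on {ch} for {frames} frames")
                    return ch
            print(f"{name}: no channel free, sound effect dropped")
            return None

        def sfx_finished(self, ch):
            # Hand the channel back to the music driver.
            self.stolen.discard(ch)
            print(f"music resumes on {ch}")

    mixer = Mixer()
    jump = mixer.play_sfx("jump", frames=12)   # steals pulse2
    mixer.play_sfx("coin", frames=20)          # pulse2 busy, steals pulse1
    mixer.sfx_finished(jump)                   # pulse2 returns to the music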


Maybe it's the opposite. IBM spent years on the technology. Watson used neural networks, just not nearly as large. Perhaps they foresaw that it wouldn't scale or that it would plateau.


I don't know when the phrase "em dash" got popular. It was probably due to web development, because, unless you were into typesetting, nobody knew what "em" was. We always just called them dashes--two hyphens make a dash.


Typographical fun fact: An em-dash is approximately the width of the letter "M", and an en-dash is the width of a lowercase n!

The latter is barely used, but it's the right way to indicate date ranges like 2023–25.

The more you know!


I would go back to the advent of Desktop Publishing. The early Macintosh + LaserWriter really went a long way toward bringing esoteric terms like "font" to us commoners.

Some of us found out we were typography nerds and didn't know it until then.


I think the move to more test-driven development has made everyone a little bit overconfident. I've seen pull requests merged that pass all tests, but still end up bug-ridden. There's just no substitute for human eyes looking over the changes.


I think it just tokenizes everything and does pattern matching to find compositions it can exploit. It's not unlike compiler optimization.

