
Serious question: what are those things from Windows 95/98 I might miss?

Rose-tinted glasses perhaps, but I remember it as a very straightforward and consistent UI that provided great feedback, was snappy, and did everything I needed. Up to and including little hints for power users, like underlining the shortcut letters marked with & in menu labels.


I miss my search bar actually being a dumb grep of my indexed files. It's still frustrating to type three characters, see the result pop up on the second keystroke, and then have it transform into something else by the time I've processed the result.


Inevitably, Windows search fails to highlight what I'm looking for almost all of the time, and often doesn't find it at all. If I have an application installed, it picks the installer in the Downloads folder. If I don't have an app installed, it searches Bing for it. Sometimes it even searches Bing when I do have the application installed!

Microsoft seems not to believe that users want to use search primarily as an application launcher, which is strange because Mac, Linux, and mobile have all converged on it.


The only one I can think of, literally the only one, is grouped icons.

And even that's only because browsers ended up in a weird "windows but tabs but actually tabs are windows" state.

So yeah, I'd miss the UX of dragging tabs into their own separate windows.

But even that is something that still feels janky in most apps (Windows Terminal somehow makes this feel bad; even VS Code took a long time to make it feel okay), and I wouldn't really miss it that much if there were no tabs at all and every tab was forced into a separate window at all times with its own taskbar entry.


It's not like grouped icons were technically infeasible on Win95. And honestly, whether they're actually more useful is quite debatable. Personally, I don't even have a taskbar anymore.

The real stuff not in Win95 that everyone would miss is scalable interfaces/high DPI (not necessarily HiDPI as such, just anything above 640x480). And that one does require A LOT of resources and is still wobbly.


I'm not sure what you mean by "technically feasible", but it wasn't supported by Explorer.

You could have multiple windows, and you could have MDI windows, but you couldn't have shared task bar icons that expand on hover to let you choose which one to go to.

If you mean that someone could write a replacement shell that did that, then maybe, but at that point it's no longer really Windows 95.


Could you give a few examples? I'd lean towards adjusting tooling if you can.

My spelling is often horrendous and I know it - but almost every dev I know of prefers to copy and paste anything that might be misspelled just because it's easier than taking the risk.

Similarly - how does this get anywhere near causing a production outage?

I'd be tempted to view this as a blessing in disguise; this person sounds like they'll trip up more often than the rest, but if one individual can cause a production outage with spelling mistakes, something's gone awry with your processes elsewhere. You have an opportunity to fix whatever that is now.


Example:

A string value in a JSON config needed to be updated.

On one prod instance, he made a typo while updating the config by hand. The software's config validation caught it, the software stopped with an appropriate error message, and a few minutes later we were up and running again.

We introduced work reviews on prod instances (similar to code reviews) after that.

Later, he wrote a patch script to avoid making that mistake again.

In the JSON schema definition used in the script, the property name had a typo (how it came to be... no clue; copy-paste should have taken care of that).

The script was part of an MR; the reviewer missed the typo. We noticed it in staging.

We introduced tests for config editing scripts after that.
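
For what it's worth, a schema check is cheap to bolt onto such tests. A minimal sketch in Python, assuming the jsonschema package; the property name and file path below are made up:

    import json
    from jsonschema import validate  # pip install jsonschema

    # Hypothetical schema for the config; "additionalProperties": False
    # makes a typo'd property name fail loudly instead of being ignored.
    SCHEMA = {
        "type": "object",
        "properties": {"endpoint_url": {"type": "string"}},
        "required": ["endpoint_url"],
        "additionalProperties": False,
    }

    def test_config_is_valid():
        with open("config.json") as f:  # hypothetical path
            validate(json.load(f), SCHEMA)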

And so it went on and on... The problem is not that it happens and we then refine our processes. It is the frequency.


What I’m seeing here is that you don’t have mature mechanisms to assure the reliability of your services yet. The second paragraph suggests that a misconfiguration was able to make it into production that arguably should have been caught at an earlier stage of the deployment pipeline. Anyone can make these sorts of mistakes; the fact that a particular colleague is more prone to them really doesn’t matter all that much.

Fortify your delivery pipeline and the problem should resolve itself.


Sure, that's one way to look at it. My caveat would be: processes aren't ever perfect.


They are not, but think of it like learning to play a guitar: at first, the strings cut into your fingers, but then you build up enough calluses and playing it stops hurting. Or, consider a building code: every rule was written in blood, and new buildings get safer over time.


Is there a way of getting them to store a dozen or so TOTP secrets? And if so, how do you select which one you want to use?


For that use case, get an OnlyKey rather than a YubiKey.


And in case it helps further in the context of the article: traditional rendering pipelines for games don't render fuzzy Gaussian points, but triangles instead.

Having the model trained on how to construct triangles (rather than blobby points) means that we're closer to a "take photos of a scene, process them automatically, and walk around them in a game engine" style pipeline.


Any insights into why game engines prefer triangles rather than Gaussians for fast rendering?

Are triangles cheaper for the rasterizer, antialiasing, or something similar?


Cheaper for everything, ultimately.

A triangle by definition is guaranteed to be co-planar; three vertices must describe a single flat plane. This means every triangle has a single normal vector across it, which is useful for calculating angles to lighting or the camera.

It's also very easy to interpolate points on the surface of a triangle, which is good for texture mapping (and many other things).

It's also easy to work out if a line or volume intersects a triangle or not.

Because they're the simplest possible representation of a surface in 3D, the individual calculations per triangle are small (and more parallelisable as a result).
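
A rough sketch of the first two points in Python/NumPy (the function names are mine, not from any engine):

    import numpy as np

    def triangle_normal(a, b, c):
        # One cross product gives the single normal shared by the
        # whole (guaranteed-flat) triangle.
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)

    def interpolate(a, b, c, va, vb, vc, p):
        # Barycentric weights: express point p as a blend of the three
        # corners, then blend the per-vertex values (UVs, colours,
        # normals...) with the same weights.
        n = np.cross(b - a, c - a)
        area2 = np.dot(n, n)
        wa = np.dot(np.cross(c - b, p - b), n) / area2
        wb = np.dot(np.cross(a - c, p - c), n) / area2
        wc = 1.0 - wa - wb
        return wa * va + wb * vb + wc * vc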


Triangles are the simplest polygons, and simple is good for speed and correctness.

Older GPUs natively supported quadrilaterals (four-sided polygons), but these have fundamental problems because they're typically specified using the vertices at the four corners... which may not be co-planar! Similarly, interpolating texture coordinates smoothly across a quad is more complicated than with triangles.

Similarly, older GPUs had good support for "double-sided" polygons where both sides were rendered. It turned out that 99% of the time you only want one side, because you can only see the outside of a solid object. Rendering the inside back-face is a pointless waste of computing power. Culling it actually simplified rendering algorithms by removing some conditionals in the mathematics.

Eventually, support for anything but single-sided triangles was in practice emulated with a bunch of triangles anyway, so these days we just stopped pretending and use only triangles.
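
To give a feel for how cheap the single-sided test is, here's a sketch of the usual screen-space version in Python (the counter-clockwise-is-front convention is an assumption; engines differ):

    def is_back_facing(a, b, c):
        # 2D signed area of the projected triangle: a couple of
        # multiplies, no normals needed. Non-positive area means
        # we're looking at the back, so the triangle is culled.
        area2 = (b[0] - a[0]) * (c[1] - a[1]) \
              - (b[1] - a[1]) * (c[0] - a[0])
        return area2 <= 0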


As an aside, a few early 90s games did experiment with spheroid sprites to approximate 3D rendering, including the DOS game Ecstatica [1] and the (unfortunately named) SNES/Genesis game Ballz 3D [2].

[1] https://www.youtube.com/watch?v=nVNxnlgYOyk

[2] https://www.youtube.com/watch?v=JfhiGHM0AoE


> triangles cheaper for the rasterizer

Yes, using triangles simplifies a lot of the math, and GPUs were created to be really good at the math related to triangle rasterization (affine transformations).


Yes, cheaper. Quads are subject to becoming non-planar, leading to shading artifacts.

In fact, I believe that under the hood all 3D models are triangulated.


Yes. Triangles are cheap. Ridiculously cheap. For everything.


I'm on mobile; I scrolled to the bottom and clicked the image of the painting and could zoom in to my heart's content - did it ask you for an account?


You can zoom in a lot on the 2490 × 1328 pixels offered. When you hit the download button for the full version, you get nagged.

Edit: you can zoom in, and then it will offer up the painting in slices at a higher resolution. So in theory you could download those and stitch them together if you manage to hit an unscaled version.
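
Stitching is straightforward once you know the tile layout. A rough Python sketch; the URL pattern, grid size, and tile size below are entirely made up, so inspect the viewer's network requests for the real ones:

    import io
    import requests
    from PIL import Image

    TILE_URL = "https://example.org/tiles/{z}/{x}_{y}.jpg"  # hypothetical
    cols, rows, size, z = 8, 5, 512, 4                      # hypothetical

    canvas = Image.new("RGB", (cols * size, rows * size))
    for y in range(rows):
        for x in range(cols):
            r = requests.get(TILE_URL.format(z=z, x=x, y=y), timeout=30)
            tile = Image.open(io.BytesIO(r.content))
            canvas.paste(tile, (x * size, y * size))
    canvas.save("stitched.jpg")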


Fascinating reading:

> The majority of developers are unacquainted with features such as processing instructions and entity expansions that XML inherited from SGML. At best they know about <!DOCTYPE> from experience with HTML but they are not aware that a document type definition (DTD) can generate an HTTP request or load a file from the file system.

I was one of them!


Developers are even less aware that SGML has (and always had) quantities in the SGML declaration, allowing, among other things, the nesting/expansion level of entities to be restricted (and hence countering entity-expansion attacks without resorting to heuristics).

Regarding DOCTYPE and DTDs, browsers at best made use of those to switch into or out of "quirks mode" on seeing special hardcoded public identifiers, but ignored any declarations. WHATWG's cargo-cult "<!DOCTYPE html>" just tells an SGML parser that the "internal and external subset is empty", meaning no markup declarations are necessary to parse HTML, which is of course bogus when HTML makes abundant use of empty elements (aka void/self-closing elements in HTML parlance), tag omission, attribute shortforms, and other features that need per-element declarations for parsing. Btw, that's what defines the XML subset of SGML: XML can always be parsed without a DTD, unlike HTML or other vocabularies making use of the features stated above.

Keep in mind SGML is a markup language for text authoring, and it would be pretty lame for a markup language not to have text macros (entities). In fact, the lack of such a basic feature is frequently complained about in browsers. The problems came when people misused XML for service payloads or other generic data exchange. Note SOAP did forbid DTDs, and stacks checked for the presence of DTDs in payloads. That said, XML and XML Schema, with extensive types for money/decimals, dates, hashes, etc., are heavily used in e.g. ISO 20022 payments and other financial messages, and to this date no competitor has evolved with the same coverage and scope (with the possible exception of ASN.1, which is even older and certainly more baroque).


> Regarding DOCTYPE and DTDs, browsers at best made use of those to switch into or out of "quirks mode", on seeing special hardcoded public identifiers but ignored any declarations.

Not when processing XML MIME types. In modern browsers that mostly means SVG files, but I think XHTML is still possible.

(Modern) HTML is neither SGML nor XML, so it doesn't follow the rules of either.


"Modern" WHATWG HTML is still following SGML rules to the letter in its dealings with tag inference and attribute shortforms ([1]). Which isn't surprising when it's supposed to hold up backward compat. To say that "HTML is not SGML" is a mere political statement so as not be held accountable to SGML specs. But (the loose group of Chrome devs and other individuals financed by Google to write unversioned HTML spec prose that changes all the time, and that you're calling "modern HTML" even though it doesn't refer to a single markup language) WHATWG had actually better used SGML DTDs or other formal methods, since their loose grammar presentation and its inconsistent, redundant procedural specification in the same doc is precisely were they dropped the ball with respect to the explicitly enumerated elements on which to infer start- and end-element tags. This was already the case with what became W3C HTML 5.1 shortly after Ian Hickson's initial HTML 5 spec (which captured SGML very precisely) ([1]). But despite WHATWG's ignorance, even as recent as two or three years ago, backward compatibility was violated [2]. Interestingly, this controversity (hgroup content model) showed up in a discussion about HTML syntax checkers/language servers just the other day ([3]).

Where HTML did come to violate SGML was when CSS and JS were introduced, to prevent legacy browsers from displaying inline CSS or JS as content. The original sin was placing these into content rather than into attributes, or strictly into external resources, in the first place.

Regarding SVG and XHTML, note browsers basically ignore most DTD declarations in those.

[1]: XML Prague 2017 proceedings pp. 101 ff. available at <https://archive.xmlprague.cz/2017/files/xmlprague-2017-proce...>

[2]: <https://sgmljs.net/blog.html>

[3]: <https://lobste.rs/s/o9khjn/first_html_lsp_reports_syntax_err...>


> "Modern" WHATWG HTML is still following SGML rules to the letter...To say that "HTML is not SGML" is a mere political statement so as not be held accountable to SGML specs.

That is self-contradictory and makes no sense. If it's following SGML to the letter, then there is nobody to be held accountable for violating the SGML spec, and hence nobody to hide behind "political statements".

You can't have it both ways.

> Regarding SVG and XHTML, note browsers basically ignore most DTD declarations in those.

They listen to DTDs for entity references and default attribute values. I'd hardly call that ignoring.


Most of these exploits are so famous that common XML processors have disabled the underlying features.

So in practice you probably don't have to worry too much, as long as you don't enable optional features in your XML library. (There are probably exceptions.)
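
In Python, for example, the third-party defusedxml package keeps these features off unless you opt in. A minimal sketch (the payload is the classic file-reading entity the article describes):

    import defusedxml.ElementTree as ET
    from defusedxml import EntitiesForbidden

    # Classic XXE payload: the DTD declares an entity that reads a file.
    payload = (
        '<?xml version="1.0"?>'
        '<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
        '<foo>&xxe;</foo>'
    )

    try:
        ET.fromstring(payload)
    except EntitiesForbidden:
        print("entity declaration rejected")  # the safe outcome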


> I was one of them!

I'm still one of them!


Nope, it's not a coincidence - it's an interesting exploration of the history of the definition of a metre. Read the article.

As it says, at some point there was an attempt to standardise the length of a metre in terms of a pendulum's length, which related it directly to g through π.
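
The arithmetic is tidy: a "seconds pendulum" swings one way in one second, so its full period T is 2 s, and T = 2π√(L/g) gives g = 4π²L/T² = π²L. Define the metre as that pendulum's length and g comes out as π² ≈ 9.87 m/s², within 1% of the measured ~9.81. A quick check in Python:

    import math

    L, T = 1.0, 2.0                # metre as pendulum length, 2 s period
    g = 4 * math.pi**2 * L / T**2  # = pi**2 for these values
    print(g)                       # 9.8696..., vs the measured ~9.81 m/s^2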


Perhaps not all that offtopic - Hatetris is what happens when you subvert the normal rules and make the game play against you. Antimemetics stories are what happens when you subvert the rules of ideas and make them play against you.

I can imagine a common space of inspiration there.


Most recently, Elixir.

I never really lost my love for programming, but twenty years in, the n-th commercial project in the more common languages (plus a front end built on whatever combination of JS frameworks is the new flavour) really ground a lot of the original creative joy out of it for me. The interesting bits got too easy and the hard bits got more uninteresting.

Elixir is a breath of fresh air; it's functional with immutable data, so it requires thinking a bit differently, but it's accessible enough to start easily and pretty enough that it's not a soup of parentheses (looking at you, Lisps). It's practical and well supported enough to build a wide variety of useful things, and very good at concurrency.

It's what I really wanted Ruby to feel like.


The author mentions those at the bottom of the article, but two problems highlighted still remain:

* There's another intermediary concept (kernel density estimation) between the audience and the data

* They're still likely to misrepresent tight groupings and discontinuities, which will be smoothed out (see the sketch below)
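
A tiny sketch of that second problem with made-up data, using scipy's gaussian_kde and its default bandwidth:

    import numpy as np
    from scipy.stats import gaussian_kde

    # Made-up data: two needle-tight clusters, nothing in between.
    data = np.concatenate([np.full(500, 0.0), np.full(50, 5.0)])
    data += np.random.normal(0, 0.005, data.size)  # hair-width jitter

    kde = gaussian_kde(data)  # Scott's-rule bandwidth, ~0.4 here
    xs = np.linspace(-2, 7, 500)
    # The two spikes come back as broad bumps, and the empty gap
    # between them picks up density that isn't in the data at all.
    print(kde(xs).round(3))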


Histograms and box plots are just clunky kernel density estimates too.

