
I was a FeedDemon user. There are some videos of the experience, which was much closer to a Windows email client than Google Reader: https://www.youtube.com/watch?v=MIz5u9T94K0. Google Reader was late-stage RSS for me, but it brought the benefit of having all of the content download and aggregation done server-side, so the cost of adding new feeds was shared.

A common setup was to have a folder hierarchy similar to email. Blogs were in folders organized by topic using whatever approach you felt worked best. You'd then dip into parts of the hierarchy. There often wasn't an aggregated feed that you could use, but you could see a list of all items per blog. Each blog would then be highlighted or show a count when there was new content.

I said blog instead of feed because social networks had a focus on the single scrolling feed as a list of content aggregated from different authors. Some RSS clients embraced this to a degree, but it didn't start out that way. Twitter was the first social network I really used in 2007 to follow bloggers I subscribed to, and it took a while to adjust to this firehose of interspersed content. That wasn't an uncommon sentiment from devs.


What if you have deleted social media accounts? It's possible to state that you had them with whatever identifiers, but do you have to prove their existence in some way so they can check an archive (assuming it was Twitter)?

If I understand you correctly, there's no obligation to maintain access to deleted social media accounts or to archive them.

Are those "Donald Trump is a rancid, orange Turnip" posts coming back to haunt you? Me too.

I'd brave it out and stick to your guns.

After all, facts are facts.


The original article is about a UK university. Cashflow and revenue generation are very important topics for UK universities. They have copied the approaches of US universities, and in many cases have created overseas campuses when they have some name recognition. See https://en.wikipedia.org/wiki/International_branch_campus for examples.


Interesting link, thanks for sharing!


Did a JS polyfill ever go anywhere? There is a comment on https://groups.google.com/a/chromium.org/g/blink-dev/c/zIg2K... which suggests that it might be possible, but a lot has changed. I suspect any effort died when XSLT remained available after the first attempt to kill it.


The sites sometimes want to provide some special formatting on top of the RSS without modifying it. For example, you might point people to RSS readers they may not have installed, or provide other directions to end users. RSS feeds are used in places other than reading apps. I've seen people suggest that this transformation could be done server-side, but that would modify the RSS feed itself, which still needs to be consumed as a feed.
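To make that concrete: the usual client-side mechanism is an xml-stylesheet processing instruction at the top of the feed, which the browser's XSLT engine picks up without the feed itself changing. A script-driven approach using the browser's current XSLTProcessor API would look roughly like the TypeScript sketch below (a polyfill would have to replace XSLTProcessor itself with a JS or WASM implementation); the feed and stylesheet URLs are placeholders:

  // Sketch: render an RSS feed with an XSL stylesheet entirely in the
  // browser, leaving the feed untouched for normal reader apps.
  async function renderFeed(feedUrl: string, xslUrl: string): Promise<void> {
    const [feedText, xslText] = await Promise.all([
      fetch(feedUrl).then(r => r.text()),
      fetch(xslUrl).then(r => r.text()),
    ]);
    const parser = new DOMParser();
    const feedDoc = parser.parseFromString(feedText, "application/xml");
    const xslDoc = parser.parseFromString(xslText, "application/xml");

    const processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    const fragment = processor.transformToFragment(feedDoc, document);
    document.body.replaceChildren(fragment);
  }

  renderFeed("/feed.xml", "/feed-style.xsl"); // placeholder URLs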


This is probably one of the few things I think works better in an office environment. There was older equipment hanging around, with space to set it up in a corner so people could sit down and just go. When mobile came along, there would be a sustainable lending program for devices.

With more people being remote, this either doesn't happen, or is much more limited. Support teams have to repro issues or walk through scenarios across web, iOS, and Android. Sometimes they only have their own device. Better places will have some kind of program to get them refurb devices. Most times though people have to move the customer to someone who has an iPhone or whatever.


You can nerf network performance in the browser devtools or underprovision a VM relatively easily on these machines. People sometimes choose not to, and others are ignorant. Most of the time, it's just that they're dealing with too many vaguely defined things, which makes it difficult to prioritize seemingly less important ones.
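If a team wants the nerfing to happen by default rather than relying on memory, the same DevTools throttling can be driven programmatically. A hedged sketch using Puppeteer and the Chrome DevTools Protocol (I'm assuming a Puppeteer-based setup; the throughput numbers are arbitrary and just for illustration):

  // Sketch: load a page under throttled network and CPU, roughly what the
  // DevTools "Slow 4G" and CPU-throttling presets do. Numbers are arbitrary.
  import puppeteer from "puppeteer";

  async function loadThrottled(url: string): Promise<number> {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    const cdp = await page.createCDPSession();

    await cdp.send("Network.emulateNetworkConditions", {
      offline: false,
      latency: 400,                                 // added round-trip latency in ms
      downloadThroughput: (1.5 * 1024 * 1024) / 8,  // ~1.5 Mbit/s in bytes/s
      uploadThroughput: (750 * 1024) / 8,           // ~750 Kbit/s in bytes/s
    });
    await cdp.send("Emulation.setCPUThrottlingRate", { rate: 4 }); // 4x slowdown

    const start = Date.now();
    await page.goto(url, { waitUntil: "load" });
    const elapsed = Date.now() - start;

    await browser.close();
    return elapsed;
  }

  loadThrottled("https://example.com").then(ms => console.log(`load took ${ms}ms`));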

A number of times I've had to have a framing discussion with a dev around some customer complaint that eventually gets to me asking, "What kind of computer do your (grand)parents use? How might X perform there?" Other times, I've heard devs comment negatively after the holidays when they've tried their product on a family computer.


> Other times, I've heard devs comment negatively after the holidays when they've tried their product on a family computer.

I worked for a popular company and went to visit family during the winter holidays. I couldn't believe how many commercials there were for said company's hot consumer product (I haven't had cable or over-the-air television since well before streaming was a thing, so this was a new experience compared to the previous five years).

I concluded that if I had cable and didn't work for the company, I'd hate them due to the bajillion loud ads. My family didn't seem to notice. They tuned out all the commercials, as did a friend when I was at his place around a similar time.

All it takes is a change in perspective to see something in an entirely new light.


I’ve never had TV, and have used ad blockers as long as they’ve been a thing. (Until 1⅓ years ago I even lived in a rural area where the closest billboard of any description was 40km away, and the second-closest 100km away.) On very odd occasions, I would get exposed to a television, and what I find uncomfortable at the best of times (notably: how do they cut so frequently!?) becomes a wretched experience as soon as it gets to ads, which it does with mindboggling frequency. I’m confident that if I tried actually watching that frenetic, oversaturated, noisy mess on someone’s gigantic, far-too-bright TV, I would be sick to the stomach and head within a few minutes.


More to the point: colour and font rendering are typically "perception" questions and very hard to measure in a deployed system without introducing a significant out-of-band element.

Network performance can be trivially measured in your users, and most latency/performance/bandwidth issues can be identified clearly.
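As a concrete example of measuring it in users: the browser already exposes most of what you need through the Performance APIs, so real-user monitoring can be a few lines. A rough TypeScript sketch (the /perf-beacon endpoint is made up):

  // Sketch: collect real-user navigation timing and ship it to an analytics
  // endpoint. "/perf-beacon" is a placeholder URL.
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const nav = entry as PerformanceNavigationTiming;
      const metrics = {
        dns: nav.domainLookupEnd - nav.domainLookupStart,
        tcp: nav.connectEnd - nav.connectStart,
        ttfb: nav.responseStart - nav.requestStart,
        total: nav.duration,
        transferBytes: nav.transferSize,
      };
      navigator.sendBeacon("/perf-beacon", JSON.stringify(metrics));
    }
  });
  observer.observe({ type: "navigation", buffered: true });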


Chrome devtools allow you to simulate low network and CPU performance, but I'm not aware of any setting that gives you pixelated text and washed-out colors. Maybe that would make a useful plugin, if you can accurately reproduce what Microsoft ClearType does at 96dpi!


Simulating low DPI displays is built in to Safari's dev tools, but it's not of much use, considering the different font rendering between the platforms.


I'm not a FOSS advocate, but I think that's a bit strong. I think it's more a case that they recognized the need for a good user experience, but that recognition never hit a threshold that would move the needle and make change happen in the most popular FOSS. Darktable is probably one of the exceptions here.


I really like Darktable, and it's my go-to photo editor, but the user interface really isn't intuitive on first look compared to something like Lightroom. The design choice that editing modules should be ordered by their place in the pixel pipeline is logical and sometimes useful, but it ends up with a lot of the controls being in rather weird places. The customisable quick controls palette would help, if it weren't for the fact that simple things like cropping can't be added to it (at least, last time I investigated this - perhaps it's changed now?)


I could have been clearer. I wouldn't say it's the paragon of photo editing, but it's further along in terms of usability. I've seen some normal people who don't want to pay the Adobe tax move to it.

An investigation of FOSS development would highlight a bunch of problems that exist to a lesser extent in other software development. When money is on the table, there is no motivation to keep supporting behaviors that particular contributors favor, so feedback shifts things. When you're building stuff for "yourself", that feedback doesn't land the same way, even if the project owner has aspirations for better UX.


Darktable, to me, and multiple YouTubers who have looked at it...

... falls flat on its face in the first impression by looking like an unresponsive window, due to the disorientingly light gray color design choices. I also just tried it and of course it's not notarized, meaning that it's almost impossible for anyone to install on macOS, unless they know of the secret button in System Settings. Nope, they aren't there yet.


> I also just tried it and of course it's not notarized, meaning that it's almost impossible for anyone to install on macOS, unless they know of the secret button in System Settings.

I don't understand why you're blaming the Darktable team for that when it's Apple that makes it nearly impossible for anyone to install a program written by someone who doesn't pay them $100/year.


What's the use case for this? It seems to be for situations where you might have a SaaS product, but there is some data required from a customer system. You'd expose the customer data using this relay and integrate into the SaaS. Is that the gist of it? Integration would still likely involve you giving the customer some software to expose a limited API and handle auth, logging, etc.


They are an alternative to the Tailscale-operated DERP servers, which are cloud relays.

Even with the much-touted NAT punching capabilities of Tailscale, there are numerous instances where it cannot establish a true p2p connection. The last fallback is the quite slow DERP relay, and from experience it gets used very often.

If you have a peer in your tailscale network that has a good connection and that maybe you can even expose to the internet with a port forward on your router, you now have this relay setting that you can enable to avoid using the congested/shared DERP servers. So there is not really a new use-case for this. It's the same, just faster.


What I think wasn't entirely clear in the post is how it actually works and why that's better.

From what I can tell, the situation is this:

1. You have a host behind NAT

2. That NAT will not allow you to open ports via e.g. UPnP (because it's a corporate firewall or something, for example), so other tailscale nodes cannot connect to it

3. You have another host which has the same configuration, so neither host can open ports for the other to connect in

The solution is to run a peer relay, which seems to be another (or an existing) tailscale node which both of these hosts can connect to via UDP, so in this circumstance it could be a third node you're already running or a new one you configure on a separate network.

When the two NAT'ed hosts can't connect to each other, they can both opt to connect instead to this peer node allowing them to communicate with each other via the peer node.

Previously this was done via Tailscale's hosted DERP nodes; these nodes would help tailscale nodes find each other but could also proxy traffic in this hard-NAT circumstance. Now you can use your own node to do so, which means you can position it somewhere that is more efficient for these two nodes to connect to and where you have control over the network, the bandwidth, the traffic, etc.


Is there a way to determine if a particular connection is falling back to DERP today?

I have a pretty basic setup with tailscale set up on an Apple TV behind a bunch of UniFi devices, and occasionally tunnelled traffic is incredibly slow.

Wondering if it’s worth setting this up on my Plex server which is behind fewer devices and has a lot of unused network and cpu.


tailscale ping <node IP>

It will tell you how each ping has been answered until a direct connection is established.
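From memory, the output looks something like the lines below while traffic is still being relayed, then switches once a direct path is found (the hostname and addresses here are made up):

  pong from nas (100.64.0.2) via DERP(fra) in 54ms
  pong from nas (100.64.0.2) via DERP(fra) in 51ms
  pong from nas (100.64.0.2) via 203.0.113.7:41641 in 9ms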


Tailscale is a few things. It might be fair to say that it is mostly a software platform with a web frontend that allows orgs (and individual users alike) to easily create secure VPNs, so their various systems can have a secure, unfiltered virtual private network on which to communicate with each other even if they're individually scattered across the four corners of the Internet.

The usual (traditional) way to do VPN stuff is/was hub-and-spoke: Each system connected to a central hub, and through that hub each system had access to the other systems.

But the way that Tailscale operates is different than that: Ideally, each connected system forms a direct UDP/IP connection with every other system on the VPN. There is no hub. In this way: If node A has data to send to node F, then it can send it directly there without traversing through a central hub.

And that's pretty cool -- this peer-to-peer arrangement is gloriously efficient compared to hub-and-spoke. (It's efficient enough that a person can get quite a lot done with Tailscale for free, with no payment expected ever.)

But we don't live in an ideal world. We instead often live in a world of NAT and firewalls -- sometimes even implemented by the ISPs themselves -- that can make it impossible for two nodes to directly send UDP packets to each other. This results in unreachable nodes, which is not useful.

So Tailscale's workaround to that Internet problem is to provide Designated Encrypted Relays for Packets (DERP). DERP usually works, and end-to-end encryption is maintained.

DERP is also not at all new. It brings back some aspects of hub-and-spoke, but only for nodes that can't communicate directly; DERP behaves in a way akin to a hub, to help these crippled nodes by relaying traffic between them and the rest of the VPN's nodes.

But DERP is a Tailscale-hosted operation. And it can be pretty slow for some applications. And there was no way, previously, for an individual user to improve the performance of DERP: It simply was whatever it was -- with a collection of DERP servers chewing through bandwidth to provide connectivity for a world of badly-connected VPN nodes.

But today's announcement brings forth Tailscale Peer Relay.

> What's the use case for this?

The primary use case for this is simple: It is an alternative to DERP. A user can now provide their own relay service for their network's badly-connected peers to use. So now, rather than being limited to whatever bandwidth DERP has available, relaying can offer as much bandwidth as a user can afford to pay for and host themselves.

And if a user plans it right, then they can put their Peer Relay somewhere on the network where it can help minimize inter-node latency compared to DERP.

(It's not for everyone. Tailscale isn't for everyone, either -- not everyone needs a VPN at all. I'd never expect a random public customer to use it knowingly and directly.)


Yeah, Tailscale is really cool. The only thing I wish is that they didn't tie auth to either a big tech monopoly (Google, GitHub, etc.) or running your own IdP service. I would love to use Tailscale for some self-hosted stuff I have, but hesitate to start exposing something like an identity management tool because that's a high-value target. And of course, I don't really want to let Google et al. be in control of my VPN setup either.


That's a valid concern.

I've also used ZeroTier with good success.

They're a competitor that offers a VPN with a similar idealized P2P topology. Unlike Tailscale, ZT is not based on WireGuard (ZT predates WireGuard), but they do offer the option to use their own local auth, without reliance on, or potential issues with, yet another party.

ZT also allows a person to create and use their own relay (called a "moon"), if that's something useful: https://rayriffy.com/garden/zerotier-moon

(For my own little purposes I don't really have a preference between ZeroTier and Tailscale.)


Thanks for the tip! I'll check that out and see if it would work for my VPN needs, but it certainly sounds promising.


They support Passkeys. This is exactly how I continue using them after moving away from Google Workspaces.


Oh wow, I had totally missed this[0]! Is it possible to migrate an existing SSO account (with associated tailnet) to a passkey one?

[0]: https://tailscale.com/blog/passkeys

