But as long as you're using a registrar in your own country and a TLD managed by a legal entity in your own country, you do have a path of legal recourse against both parties.
It might not be successful, but you do have far better options than relying on a third party in a country far away.
Right, and we will be ditching our Google Cloud account this time, but as explained in the post this will come at either a security or a usability cost for our customers, which is why I did not ditch after the first suspension.
I can understand you not ditching after the first suspension, but the second suspension should have been the point where you made that call.
First time is a fluke, second time is a serious wake-up call, third time it's your fault.
Do you really want to reach the point where all your customers have an outage, and you have to rush implementing something else (OIDC or API keys) AND rush your customers to change their settings?
The second and third suspensions were a week apart. That wouldn't be enough time to shift customers to a new auth format, especially when most of the burden is on them.
Those are negligible compared to a 100% availability / uptime cost to your business incurred from being a serf to a feudal tyrant with no name or face that enjoys abusing you.
Using GCP, AWS, or Azure is like volunteering to use your own money to rent heavy construction equipment to construct your own jail cell and excavate your own grave.
But hey, at least you get to avoid the capex on the heavy construction equipment, and it's always¹ available!
¹ except for when human error takes it offline for 14 hours straight
You're not just dealing with a massive bureaucracy, you're dealing with a massive automated bureaucracy whose rules aren't explicit, whose algorithms are buggy, and which can destroy your business on a whim with no recourse, without even noticing.
You can set concurrency limits per function on AWS, so you can apply a hard limit to a function so that only a single invocation runs at a time. That should guarantee that data isn't lost without the producer noticing.
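A minimal sketch of what that looks like with boto3, assuming the Lambda API rather than anything from the thread; the function name is a placeholder and the actual API call is shown but not executed:

```python
# Sketch: cap an AWS Lambda function at one concurrent invocation by
# reserving concurrency for it. Setting ReservedConcurrentExecutions to 1
# means the service will never run two copies of the function at once.

def single_invocation_params(function_name: str) -> dict:
    """Build the arguments for Lambda's put_function_concurrency API."""
    return {
        "FunctionName": function_name,
        "ReservedConcurrentExecutions": 1,  # hard cap: one at a time
    }

if __name__ == "__main__":
    # "my-consumer" is a hypothetical function name.
    params = single_invocation_params("my-consumer")
    # import boto3
    # boto3.client("lambda").put_function_concurrency(**params)
```

With a cap of 1, invocations beyond the limit are throttled, so the producer sees a throttle error rather than silent data loss, which is the guarantee described above.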
I have no idea if this is for a specific language, a framework, a plugin system... or what. And I'm not able to say anything useful from the page or from the title of the post.
It'd be helpful to at least set the premise for those of us on the outside.
I am kind of awaiting some news about more breakthroughs in DWDM technology and clicked through thinking this might be related. All I could tell immediately was that it was code of some description. I really detest when higher level stuff steals terminology from lower level protocols. I feel like they could have found a better term than fiber.
Maybe, but "thread" is a widely accepted term even though it's closely related to "fiber" both in concurrency (coop/preempt multitask) and general day-to-day sense (textile).
In cases like this it helps to navigate to the homepage of the website. With one click on the header link, you find out that this is about Ruby and an HTTP server.
In my own experience it's often not the elected moderators that are the problem, but those with a gold badge in a specific tag. They're far too eager to close questions because they're the ones culling through that tag constantly - and then they close the question, quickly thinking "oh, it's that again."
But often it isn't; they just didn't spend enough time to see the nuance.
Nor do they see that even if _they_ understand that the linked question is the same thing, there is no way the asker can see the similarity from their own knowledge point of view (or why the linked duplicate is the same question).
... Okay, I want to walk back something I said in some other comments here. There is definitely a class of SO questions that get closed as duplicates inappropriately. I tend to forget about the first of the questions because it's not generally a suitable dupe target when it's used: it's a meta question, explaining how to fix your question, rather than actually answering it. But, as you might infer, that means your question should still be closed - it lacks debugging details.
I fought against this trend on meta: https://meta.stackoverflow.com/questions/426205 . Unfortunately, there's another incentive misalignment here: dupe-hammering the question allows users with a gold badge to act more quickly on questions that don't meet site standards but are likely to attract a quick answer that interferes with keeping the site clean.
The second one... honestly probably isn't the best version of the question, but it's attracted good answers and become "canonical". The problem is that thinking in terms of "variable variables" isn't necessarily the right way to think about the problem (dynamically modifying namespaces; or rather, the fact that Python's namespaces are reflected as objects that can in most cases be modified meaningfully) - but it does map pretty well to how a beginner would typically think about the problem. It just tends to overlap with other reasonable questions in a messy way.
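To illustrate the "variable variables" point: the beginner instinct is to create names like `value_0`, `value_1` dynamically, while the idiomatic approach keeps the dynamic part as a dictionary key. A small sketch (not taken from either linked question):

```python
# Instead of "variable variables" (dynamically creating names in a
# namespace), keep the varying part of the name as a dictionary key.
values = {}
for i in range(3):
    values[f"value_{i}"] = i * 10  # the dynamic "name" is just a key

# The reflected-namespace version this replaces would look like:
#     globals()[f"value_{i}"] = i * 10
# which works for module globals but not reliably for function locals,
# and is harder to read and debug.

print(values["value_2"])  # 20
```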
On Codidact, I've attempted to address the problem space more proactively, but I think I didn't complete the project I had in mind.
> How much traffic do the questions that get duped to something bring? Especially the (currently) 410 questions linked to the Java NPE question. You get the couple of FGITW answers on it and the answer is over there, and closed to keep more people from trying to answer it (I hope the dup hammer is helping)... but now it's a closed question with 0 score, 100 views after a year... and five answers (one of which was accepted)... and no one will ever find it.
That was in 2014.
---
There are some misaligned incentives. There are probably people who dup vote to try to boost their reputation for some reason.
The problem (as I saw it) was that the tools of moderation and curation had too much friction and limits placed on them.
As the number of questions grew faster than the people who would curate them did, and the tools to curate them were diminished... you've got the problem of "there are two tools to curate and moderate left. One is to close the question. The other is to be a jerk to try to disincentivize the person from doing that again." I wrote about the second bit... a few years ago. Rudeness – the moderation tool of last resort -- https://shagie.net/2016/09/16/rudeness-the-moderation-tool-o...
Things like making it harder to avoid seeing low-quality questions, or to close them, or to delete them...
> Thus rudeness and the attempt to drive an individual away because other moderation tools have run out or are ineffective. Rudeness is the moderation tool of last resort. When one sees the umpteenth “how do I draw a pyramid with *” in the first week of classes on a programming site – how does one make it go away when the moderation tools have been fully exhausted? Be rude and hope that the next person seeing it won’t post the umpteenth+1 one.
With respect to Stack Overflow, I believe they've exhausted the people capable of moderating without rudeness, and it's now employees trying to moderate the core group rather than the core group being empowered to moderate the site. Eventually, there will be no one left in the core group.
Other sites with a narrower focus (e.g. GitHub Discussions) are better able to handle well-focused questions and smaller user bases.
Because we're trying to build a searchable reference, such that if you try to look for an existing question, you a) find it; b) find the right question; c) find the best possible version of that question; d) can readily tell that you found what you want.
And because we are explicitly not trying to build a discussion forum, social media, "HN but specifically for programming questions", or anything else like that.
You might as well ask: why delete newly created pages on Wikipedia, or revert edits to existing pages?
> In my own experience it's often not the elected moderators that are the problem, but those with a gold badge in a specific tag. They're far too eager to close questions because they're the ones culling through that tag constantly - and then they close the question, quickly thinking "oh, it's that again."
> But often it isn't; they just didn't spend enough time to see the nuance.
As a gold badge holder (for Python and a few other things), I see this complaint constantly. It is without merit ~90% of the time. The simple fact is that the "nuance" seen by the person asking the question is just not relevant to us, because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone. This entails collecting useful answers together so that people with fundamentally the same question can all find them, instead of it depending on how lucky their search engine of choice is feeling today.
The meta site has historically been flooded with people trying to reopen blatant duplicates based on trivial distinctions, at the level of "no, I want to get the Nth item of a list, not a tuple". That isn't a direct quote, but it's not an exaggeration either. I wish it were.
We do make mistakes, in part because there's pressure to act quickly. It's much harder to keep the site clean when answers get posted where they shouldn't be. Closing questions prevents answers from coming in.
> there is no way the asker can understand what the similarity is from their knowledge point of view (or why the linked duplicate question is the same question).
I try to leave a comment to explain the connection when it isn't obvious. (Another common thing that happens is that the problem someone wants to solve involves an obvious two- or three-step procedure, and each step is a matter of fundamental technique that's already been explained countless times.) But overall, it isn't our goal to teach. We answer very simple questions, and very difficult questions; but we aren't designed to teach. Sometimes it's hard to ask a simple question, because you have to figure out what the question is first. It's unfortunate that people who need the question answered often don't have that skill. But if we have a high quality version of that question already, we can direct people there.
Sometimes the linked duplicate isn't the best choice. You can help by finding and promoting a better choice - on the meta site and in the chat rooms. You can also help by editing common duplicate targets - both questions and answers - so that it becomes more clear to people who would actually have the question, that they're in the right place (and so that the information in answers is more readily applicable to them).
> because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone
This is a strawman. Marking two different questions as duplicates of each other has nothing to do with a personalized answer, and answering both would absolutely be useful to everyone because a subset of visitors will look for answers to one question, and another subset will be looking for answers to the other question.
To emphasize the difference: Personalized answers would be about having a single question and giving different answers to different audiences. This is not at all the same as having two different _questions_.
> This is a strawman. Marking two different questions as duplicates of each other has nothing to do with a personalized answer, and answering both would absolutely be useful to everyone because a subset of visitors will look for answers to one question, and another subset will be looking for answers to the other question.
What you're missing: when a question is closed as a duplicate, the link to the duplicate target is automatically put at the top; furthermore, if there are no answers to the current question, logged-out users are automatically redirected to the target.
The goal of closing duplicates promptly is to prevent them from being answered and enable that redirect. As a result, people who search for the question and find a duplicate, actually find the target instead.
It's important here to keep in mind that the site's own search doesn't work very well, and external search doesn't understand the site's voting system. It happens all the time that poorly asked, hard-to-understand versions of a question nevertheless accidentally have better SEO. I know this because of years of experience trying to use external search to find a duplicate target for the N+1th iteration of the same basic question.
It is, in the common case, about personalized answers when people reject duplicates - because objectively the answers on the target answer their question and the OP is generally either refusing to accept this fact, refusing to accept that closing duplicates is part of our policy, or else is struggling to connect the answer to the question because of a failure to do the expected investigative work first (https://meta.stackoverflow.com/questions/261592).
> The goal of closing duplicates promptly is to prevent them from being answered and enable that redirect. As a result, people who search for the question and find a duplicate, actually find the target instead.
Why would you want to prevent answers to a question, just because another unrelated question exists? Remember that the whole thread is not about actual duplicates, but about unrelated questions falsely marked as duplicates.
> ... because objectively the answers on the target answer their question ...
> ... because of a failure to do the expected investigative work first ...
Almost everybody describing their experience with duplicates in this comment section tells the story of questions for which other questions have been found, linked from the supposedly-duplicate question, and described why the answers to that other question do NOT answer their own question.
The expected investigative work HAS been done; they explained why the other question is NOT a duplicate. The key point is that all of this has been ignored by the person closing the question.
> Why would you want to prevent answers to a question, just because another unrelated question exists? Remember that the whole thread is not about actual duplicates, but about unrelated questions falsely marked as duplicates.
Here, for reference, is the entire sentence which kicked off the subthread where you objected to what I was saying:
> It is without merit ~90% of the time. The simple fact is that the "nuance" seen by the person asking the question is just not relevant to us, because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone.
In other words: I am defending "preventing answers to the question" for the exact reason that it probably actually really is a duplicate, according to how we view duplicates. As a reminder, this is in terms of what future users of the site will find the most useful. It is not simply in terms of what the question author thinks.
And in my years-long experience seeing appeals, in a large majority of cases it really is a duplicate; it really is clearly a duplicate; and the only apparent reason the OP is objecting is because it takes additional effort to adapt the answers to the exact situation motivating the original question. And I absolutely have seen this sort of "effort" boil down to things like a need to rename the variables instead of just literally copying and pasting the code. Quite often.
> Almost everybody describing their experience with duplicates in this comment section tells the story of questions for which other questions have been found, linked from the supposedly-duplicate question, and described why the answers to that other question do NOT answer their own question.
No, they do not. They describe the experience of believing that the other question is different. They don't even mention the answers on the other question. And there is nowhere near enough detail in the description to evaluate the reasoning out of context.
This is, as I described in other comments, why there is a meta site.
And this is HN. The average result elsewhere on the Internet has been worse.
From the article: "- adjust timeout settings to prevent crashes". Include the details of why the timeout settings led to crashes: what were the inputs and cases that caused this?
This lets us decide whether the fix stays or goes the next time there's an issue in the same piece of code, or when your commit breaks something unrelated - the person fixing it needs to know _why_ you changed the code.
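A hypothetical commit message for a change like that might capture the context like so (all specifics here are invented for illustration, not taken from the article):

```
Increase RPC timeout from 5s to 30s to prevent worker crashes

Uploads over ~2 GB regularly exceeded the 5s limit, and the timeout
path raised an unhandled exception that took the whole worker down.
30s covers the slowest observed upload with headroom; the underlying
exception handling still needs a separate fix.
```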
There might be alternatives that are better designed for that use case these days; pass and KeePassXC are popular ones, depending on the interface you want (pass is made for the cli as the primary interface).
I'm not sure if having a broken navigation menu at the top because of Disconnect or uBlock is a good sign, but their product seems like a decent alternative.
I've been using it for about a year now and for personal stuff it's fine. Just some sheets and the occasional document, and there haven't been any issues with formatting and such that tend to pop up when moving between MS and others.
It runs with a single window too, where all docs and sheets are open, I like that.
You'll find them readily available on eBay, but there are also multiple companies that specialize in refurbishing servers (which usually lets you configure to your actual needs - though this will be slightly more expensive, in my experience).
Sellers who just move decommissioned servers from a data center to new users, without doing anything with the equipment in between, can offer decent deals if you're looking for something to put in a rack.
Be aware that rack servers are usually rather power hungry, so they might be expensive to run over time.
I found eBay to be strictly a USA thing, plus Canada at best for those who are OK with driving for half a day, or even two.
I have the app installed and the shipping costs to Eastern Europe where I live often surpass 50% of the price of the tech itself.
I'd love to reuse. There are a lot of us out there who are still oldschool-ish and can work miracles with older tech. But I am not about to spend the same money I'd spend on simply building a PC with an EATX case and the ability to shove 12 HDDs in there. I'd still end up spending more on the local market, mind you, but we're looking at 10-15% maximum, and I don't find that a worthy difference to wait 3 weeks for an older server, especially with a very high likelihood of also having to pay 20% of its value to customs.
Similar region here, and there are unaffiliated "outlet" stores with their own websites which you can search-engine for, at least for workstations: I've never tried with servers. Also the discounts might be more in the decent than jump-to-buy territory. If you live in Allegro land (i.e. the Visegrad countries) or can import, you could also look there.
I expect there would be more of a glut of this stuff where there were lots of server farms and web tech businesses. If the scale was much smaller in your area compared to the US, then of course fewer servers are discarded.
You'll find vendors in Germany, Spain, etc. within the EU. Not much I can say about customs - so you might want to check local recyclers. There's usually some sort of recycling program for old hardware that gets cleaned out and re-sold, but it'll all depend on your country's incentives.
Whether it's worth it will depend on what you're looking for.
Yeah, I came to the conclusion that I should familiarize myself with the local market as well. Not very easy but not that difficult either.
Still not at all important for me, not until I move in in my own place which is due in 2-3 years. But after that happens I'll definitely want a few servers in a closet.
It's all about how apparent the issue is if you're running Wireshark - it does not stand out, so you have to do a lot more work to discover what is actually happening. The request is also hidden in plain sight among other requests, and those requests are what you'd expect (you'd normally expect a motd request, so this isn't out of place).
Given that the way to circumvent the issue at hand is to delete a single local file, which is far simpler than finding the actual request and setting up Fiddler or Burp Suite, this worked well enough.
> It might not be successful, but you do have far better options than relying on a third party in a country far away.
It's always a varying grade, not either/or.