jorge-d's comments | Hacker News

If you can pay for your plane ticket online I'm sure you can do so with the ETIAS as well.


It's a nice looking map, I'm personally a big fan of https://app.electricitymaps.com/

I'm sure you could use the same data points to cover more zones (cf. their GitHub repo https://github.com/electricitymaps/electricitymaps-contrib/b...)


I am too!

We focus on the United States, so we can have the deepest coverage for each of the regions. In some cases that means we're the same as Electricity Maps. In other cases, you'll see our data is more real-time, without relying on estimations.

We have a lot more than generation mix data. For the wholesale energy markets, we also have all the pricing data[1]!

In case you're curious we also have an open source library: https://github.com/kmax12/gridstatus

We'd love to expand, but every new region is hard to support in the same depth as what we currently have in the US.

[1] https://www.gridstatus.io/datasets?filter=lmp


Ha yes, the EU, that famous authoritarian regime!


A few years ago most people would have said the same about fingerprint reading or face scanning, and yet we're living in a world where it is completely standard now.


> we're living in a world where it is completely standard now

No it isn't. Ew. If an employer asked to fingerprint me I'd tell them where to stick their scanner.

You know what is "completely standard" now? Governments and corporations leaking terabytes of private information, with barely a shred of accountability.

We need to be pulling this in the exact opposite direction, not normalizing it; and not adding retina scans to the list of insecure biometric data.

Sam Altman needs a swift reality slap.


> If an employer asked to fingerprint me I'd tell them where to stick their scanner.

There are some situations where this may be ok. Working for the police themselves as an example. ;)


>If an employer asked to fingerprint me I'd tell them where to stick their scanner.

They don't need to, because your employer probably asked you for a form of biometric ID.


I've never once had an employer ask me for such a thing.


Weird. Every job I've had has asked for my passport (UK) or work permit (US).


I'm in the US, and every employer has asked me for proof that I can legally work in the US. But none of the proof I provide involves biometric data (unless you count the photo on my driver's license as "biometric data"). But I'm a citizen, and I could easily imagine that the requirements might be more strict for noncitizens.


Keeping the data offline, on device. That's the big difference. I don't consent to using such data online. No, not for captchas or payment either. 2FA/MFA and a passport copy suffice for opening a bank account and authenticating with it. For payments, an IBAN transfer also suffices.

Altman is banking on the hype of those who are cynical about the AI/ML hype dystopia.


I still say the same about those things, and I don't willingly allow others to read my fingerprints or scan my face. Fortunately, I'm almost never asked -- so it's not exactly "standard".


Mass collection of fingerprints or face scans from people is not standard.


It is probably more standard than you realize? A quick search shows over 14 million people are enrolled in the airport program that requires iris scanning as well as fingerprints.


Are you talking about the US? 14 million out of a population of 340 million doesn't sound very standard to me.


Yes, US. And agreed, it isn't a majority, which is why I gave the number. Should have made that clearer.

Probably better to look at passports, if we are only talking about fingerprints. And that is about half of the US? Still not a majority, but I assume a lot of "standard" things aren't majorities either. I was pointing out that it is getting more widespread.


I know the US hates IDs, rightfully, but fingerprints and photo IDs are common in less free parts of the world. Wait until you hear about mandatory DNA sampling!


That isn't a success; it indicates a sick society. One that is profitable for Y combinator though.


Same issue here. I don't know what Salesforce is doing, but it's the second incident in a week.


Yeah I also managed to run the web process without any issue:

```
heroku run bash -a myapp
$ bundle exec puma -C config/puma.rb
[All fine]
```

It's definitely coming from their router. Opened a ticket 1h ago without reply so far, and it makes you wonder why they even bother to have a status page.


Same here, both Staging and Production are completely down. Yet their status page shows no error at all.


Well, Sidekiq is free to use. It's only the Pro version that he charges for, and the free version's code is open source.

I don't see the problem with that kind of business model; it still allows the community to thrive and offers enterprises a way to get premium support.

Plus it allows him to invest more time in maintaining the free version.


I have no problem paying for the Pro version, but one of its marketing pitches is "enhanced reliability", which is a wild marketing spin on "the free version will lose jobs in fairly common scenarios".

In Sidekiq without super_fetch (a paid feature), any jobs in progress when a worker crashes are lost forever. If a worker merely encounters an exception, the job will be put back on the queue and retried, but a crash means the job is lost.

Again, no problem paying for Pro, but I would prefer a little more transparency on how big a gap that is.
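
For anyone wondering why a crash is different from an exception, here's a rough sketch of the mechanics. This is simplified pseudo-Sidekiq, not its actual source; it assumes the default fetch does a blocking pop straight off the Redis queue list, and the `run` method is made up:

```ruby
require "redis"
require "json"

redis = Redis.new

# Enqueue: Sidekiq-style queues push a JSON payload onto a Redis list.
redis.lpush("queue:default", { "class" => "HardJob", "args" => [42] }.to_json)

# Fetch: a blocking pop removes the payload from Redis *before* it runs.
_queue, payload = redis.brpop("queue:default")
job = JSON.parse(payload)

# If the process is SIGKILLed or OOM-killed right here, Redis no longer has
# any record of the job -- it lives only in this process's memory, so it's gone.
run(job) # hypothetical method standing in for the job's perform
```

An exception raised inside perform is different, because the worker is still alive afterwards to push the job onto the retry set.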


I wish this was prominently documented. Most people new to Sidekiq have no idea that the job will be lost forever if you simply hard-kill the worker. I have seen a couple of instances where the team had Sidekiq Pro, but they had not enabled reliable fetch because they were unaware of this problem.


The free version acts exactly like Resque, the previous market leader in Ruby background jobs. If it was good enough reliability for GitHub and Shopify to use for years, it was good enough for Sidekiq OSS too.

Here's Resque literally using `lpop`, which is destructive and will lose jobs.

https://github.com/resque/resque/blob/7623b8dfbdd0a07eb04b19...


> If it was good enough reliability for GitHub and Shopify to use for years, it was good enough for Sidekiq OSS too.

Great point, and thanks for chiming in. I wonder if containerization has made this more painful (due to cgroups and OOMs). The comments here are basically some people saying it's never been a problem for them and some people saying they encounter it a lot (in containerized environments) and have had to add mitigations.

Either way, my observation is a lot of people not paying for Sidekiq Pro should. I hope you can agree with that.


When we used Sidekiq in production, not only did I never see crashes that lost us jobs, but there are also ways to protect yourself from that. I highly recommend writing your jobs to be idempotent.
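
To make the idempotence point concrete, a hedged sketch (the job, model, and method names are made up for illustration):

```ruby
class ChargeCustomerJob
  include Sidekiq::Job # Sidekiq::Worker on older versions

  def perform(payment_id)
    payment = Payment.find(payment_id) # hypothetical ActiveRecord model
    return if payment.charged?         # already done on an earlier attempt: no-op

    payment.charge!                    # the side effect happens at most once
  end
end
```

That way a retry, or the same job accidentally enqueued twice, doesn't double-apply the side effect.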


Idempotence doesn't solve this problem. The jobs are all idempotent. The problem is that jobs will never be retried if a crash occurs.

This doesn't happen at a high rate, but it happens more than zero times per week for us. We pay for Sidekiq Pro and have superfetch enabled so we are protected. If we didn't do so we'd need to create some additional infra to detect jobs that were never properly run and re-run them.


Or install an open-source gem[1] that recreates the functionality using the same Redis rpoplpush[2] command.

[1] https://gitlab.com/gitlab-org/ruby/gems/sidekiq-reliable-fet...

[2] https://redis.io/commands/rpoplpush/#pattern-reliable-queue
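
The linked pattern is simple enough to sketch. This is just the shape of the idea, not what super_fetch or the GitLab gem actually do internally; the queue/list names and the `run` method are made up:

```ruby
require "redis"

redis    = Redis.new
queue    = "queue:default"
inflight = "queue:default:inflight:worker-1" # a private list per worker process

# Atomically move the job from the shared queue into this worker's inflight list.
if (payload = redis.rpoplpush(queue, inflight))
  run(payload)                     # hypothetical: execute the job
  redis.lrem(inflight, 1, payload) # remove from inflight only after success
end

# Recovery (e.g. on boot): anything still sitting in an inflight list belonged
# to a process that died mid-job, so push it back onto the shared queue.
loop { redis.rpoplpush(inflight, queue) or break }
```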


Fair enough about idempotence.

I'm still confused about what you're saying though. You're saying that the language of "enhanced reliability" doesn't reflect losing 2 jobs out of about 350 million (50M/day over a week, from your other comment)?

And that if you didn't pay for the service, you'd have to add some checks to make up for this?

That all seems incredibly reasonable to me.


Crashes are under your control though. They're not caused by Sidekiq. And you could always add your own crash recovery logic, as you say. To me that makes it a reasonable candidate for a pro feature.

It’s hard to get this right though. No matter where the line gets drawn, free users will complain that they don’t get everything for free.


How are crashes under your control? Again they aren't talking about uncaught exceptions, but crashes. So maybe the server gets unplugged, the network disconnects, etc.


To me 'crash' means any unexpected termination, whether it's caused by an uncaught exception, OOM, or hardware/network issues.

I guess you can say that hardware issues on your host aren't under your control, but it's under your control to find a host that doesn't have these issues. And not even a full-on ACID database is going to be 100% reliable if you yank the power cord at the wrong moment.


I hope my tone doesn't come across as rude or too argumentative, but I think your understanding is a bit inaccurate.

> it's under your control to find a host that doesn't have these issues

All hosts will have these issues, the only question is how often. If you need 100% consistency, then you can't use the free Sidekiq. Personally, I've never needed Sidekiq pro (as these kinds of crashes are extremely rare). But this will depend on your scale and use case.

> And not even a full-on ACID database is going to be 100% reliable if you yank the power cord at the wrong moment

This is only true if there are bugs in the DB, or some underlying disk corruption happens. The whole point of an ACID database is that it's atomic, durable, and consistent, even in the worst-case scenario. If a power failure corrupted my SQL database I would feel very betrayed by the database.


It wouldn’t be corrupted, but in-flight transactions could fail to commit, just like queued jobs can be lost with sidekiq. The failure modes are similar.

I take your point that at a certain scale, hardware failure is inevitable, but if you’re running that many servers, you can afford sidekiq’s enterprise plan. It’s not something that will realistically happen if you’re just running like 20 instances on AWS. It’s perfectly reasonable to charge extra for something only large organizations with huge infrastructure budgets need.


For sure, I agree with you.

I would say that queued jobs being lost is different from an in-flight transaction being auto-rolled-back, but it's not a super important distinction. Like others have said, I think Sidekiq really nailed the free vs premium features and its success is evidence of that.


Jobs may crash due to VM issues or OOM problems. The more common cause of "orphans" is when the VM restarts and jobs can't finish during the shutdown period.


How often do your workers crash? I rely heavily on Sidekiq and don't think I see this very often, if ever.


We process around 50M sidekiq jobs a day across a few hundred workers on a heavily autoscaled infrastructure.

Over the past week there were 2 jobs that would have been lost if not for superfetch.

It's not a ton, but it's not zero. And when it comes to data durability the difference between zero and not zero is usually all that matters.

Edit for additional color: One of the most common crashes we'll see is OutOfMemory. We run in a containerized environment, and if a rogue job uses too much memory (or a deploy drastically changes our memory footprint) the container will be killed. In that scenario, the job is not placed back into the queue. SuperFetch is able to recover it, albeit with really loose guarantees around "when".


Let me get this straight, you're complaining about eight 9s of reliability?

50,000,000 * 7 = 350,000,000

2 / 350,000,000 = 0.000000005714286

1 - (2 / 350,000,000) = 0.999999994285714 ≈ 99.9999994%

> It's not a ton, but it's not zero. And when it comes to data durability the difference between zero and not zero is usually all that matters.

If your system isn't resilient to 2 in 350,000,000 jobs failing I think there is something wrong with your system.


This isn't about 2 in 350,000,000 jobs failing. It's about 2 jobs disappearing entirely.

It's not reliability we're talking about, it's durability. For reference, S3 has eleven 9s of durability.

Every major queuing system solves this problem. RabbitMQ uses unacknowledged messages, which are pinned to a TCP connection, so when that connection drops before acknowledging them they get picked up by another worker. SQS uses visibility timeouts, where if a message hasn't been successfully processed within a time frame it's made available to other workers. Sidekiq's free edition chooses not to solve it. And that's a fine stance for a free product, just one I wish was made clearer.
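
For comparison, the RabbitMQ behaviour described above looks roughly like this with the bunny gem (a sketch; the queue name and the `handle` method are made up):

```ruby
require "bunny"

conn = Bunny.new
conn.start

channel = conn.create_channel
channel.prefetch(1)                        # hand out one unacked message at a time
queue = channel.queue("jobs", durable: true)

queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
  handle(body)                             # hypothetical job handler
  channel.ack(delivery_info.delivery_tag)  # acknowledge only after success
  # If this process dies before the ack, the broker sees the TCP connection
  # drop and requeues the message for another consumer -- nothing is lost.
end
```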


If you want to focus on durability then I think your complaint makes even less sense. Somehow I doubt S3 is primarily backed by Redis.

I think it's fair to assume that something backed by Redis is not durable by default because that's not what Redis is known for, whereas the other options you listed are known for their resiliency and durability. I wouldn't view Sidekiq as a similar product to RabbitMQ and SQS.

Also, Sidekiq Pro uses more advanced Redis features to enable super_fetch, which lends weight to the assumption that Redis is not durable by default: https://www.bigbinary.com/blog/increase-reliability-of-backg....


It's not uncommon to lose jobs in Sidekiq if you rely on it heavily and have a lot of jobs running. If using the free version for mission-critical jobs, I usually run that task as a cron job to ensure that it will be retried if the job is lost.

I have in the past monitored how many jobs were lost and, although a small percentage, it was still a recurring thing.
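
As a sketch of that kind of safety net (the sidekiq-cron gem and every model/job name here are illustrative, not necessarily what the commenter runs): a scheduled reconciliation job periodically re-enqueues anything that should have been processed but wasn't, so a lost job eventually gets retried.

```ruby
# Schedule a reconciliation job every 15 minutes (assumes the sidekiq-cron gem is loaded).
Sidekiq::Cron::Job.create(
  name:  "reconcile lost exports",
  cron:  "*/15 * * * *",
  class: "ReconcileExportsJob"
)

class ReconcileExportsJob
  include Sidekiq::Job

  def perform
    # Hypothetical model: anything still pending well past its enqueue time is
    # assumed lost and re-enqueued. The export job itself must be idempotent.
    Export.where(state: "pending")
          .where("enqueued_at < ?", Time.now - 30 * 60)
          .find_each { |export| ExportJob.perform_async(export.id) }
  end
end
```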


In containerized environments it may happen more often, due to OOM kills, or if you use autoscalers and have long-running Sidekiq jobs whose runtime exceeds the configured grace period for shutting down a container during a downscale, in which case the process is eventually terminated without prejudice.

OOM kills are particularly pernicious, as they can get into a vicious cycle of retry-killed-retry loops. The individual job causing the OOM isn't that important (we will identify it, log it, and no-op it); it's the blast-radius effect on other Sidekiq threads (we use up to 20 threads on some of our workers) that hurts, so you want to be able to recover and re-run any jobs that are innocent victims of a misbehaving job.


Exactly why we refuse to use Sidekiq. “Hey, you have to pay to guarantee your jobs won’t just vanish”.

No thanks.


+1, been using Spectacle for years and it works perfectly.

However, I just saw that the project is no longer maintained as of... two days ago! I guess I'll have to find an alternative down the road.


Echoing what umberthreat34g said, check out Rectangle (https://github.com/rxhanson/Rectangle).


The reason it's not mainstream is most probably that the energy ratio between production input and output is terrible.

Also, technically, all hydrocarbons are made from carbon sucked from the air.


The energy ratio and cost being terrible doesn't matter for some PR use cases.

Imagine being able to sell Formula 1 this fuel so the whole industry can claim to be green and try to re-attract young crowds who are turned away by un-greenness?

The fuel could cost 100x as much, and it still wouldn't be a big issue.


"Imagine being able to sell Formula 1 this fuel so the whole industry can claim to be green and try and try to re-attract young crowds who are turned away by un-greenness?"

I loled. They could use ground unicorn for fuel, and I still wouldn't be interested.


I'm thinking that the demographic allegedly being turned away for "un-greenness" would be downright appalled at using ground up unicorn for fuel...


Formula 1 is going to move to “sustainable fuel” in 2026, so you're exactly right.


There are other ways besides biology in which hydrocarbons can be produced. Pure water placed between diamond anvils at high temperature and enormous pressure will spontaneously produce hydrocarbons. Titan's atmosphere contains hydrocarbons that didn't come from living things.

"Diamond dissolution and the production of methane and other carbon-bearing species in hydrothermal diamond-anvil cells" https://www.sciencedirect.com/science/article/abs/pii/S00167...


This

