
So: OP wants to grow, but at his own pace and in his own way. He values transparency and autonomy. He doesn't mention salary as being particularly important, but does want a good work/life balance.

I wonder if he's considered a job as a developer in the Dutch government?


Be aware of threat actors, too: you're giving them an easy data exfil route without the hassle and risk of them having to set up their own infrastructure.

Back in the day you could have stood up something like this and worried about abuse later. Unfortunately, now, a decent proportion of the early users of services like this tend to be people looking to misuse them.


What's a "data exfil route"?

I'm not who you asked, but essentially: when you write malware that infects someone's PC, that in itself doesn't really help you much. You usually want to get out the passwords and other data you've stolen.

This is where an exfil (exfiltration) route is needed. You could just send the data to a server you own, but you have to make sure that there are fallbacks once that one gets taken down. You also need to ensure that your exfiltration won't be noticed by a firewall and blocked.

Easily hosting a server locally on the infected PC that exposes data under a specific public address is (to my understanding) the holy grail of exfiltration: you just connect to it and it gives you the data, instead of having to worry much about hosting your own infrastructure.
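To make that concrete, here's a minimal sketch in Go (everything hypothetical - the port, the tunnel address): the malware just runs a stock file server locally and lets a tunnel client do the rest.

    // Minimal sketch: a throwaway file server on the infected machine.
    // A tunnel client mapping 127.0.0.1:8080 to a public
    // https://<random>.tunnl.gg address makes it reachable from anywhere,
    // with zero attacker-owned infrastructure.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.Handle("/", http.FileServer(http.Dir(".")))
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }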


Thanks!

Though the public address is going to be random here, so how will the hacker figure out which tunnl.gg subdomain to gobble up?


That's actually a fair defence against this kind of abuse. If the attacker has to get some information (the tunnel ID) out of the victim's machine before they can abuse this service, then it is less useful to them because getting the tunnel ID out is about as hard as just getting the actual data out.

However, if "No signup required for random subdomains" implies that stable subdomains can be obtained with a signup, then the bad guys are just going to sign up.


I've seen lots of weird tricks malware authors use; people are creative. My favorite is loading a text file from Dropbox, encoded with a modified base64 table, which points at the URL to exfiltrate to. When you report it to Dropbox, they typically ignore the report because the file just looks like random nonsense rather than anything actually malicious.

> Easily hosting a server locally on the infected PC that exposes data under a specific public address is (to my understanding) the holy grail of exfiltration: you just connect to it and it gives you the data, instead of having to worry much about hosting your own infrastructure.

A permanent SSH connection is not exactly discreet, though...


The real kicker is in point 1.13:

> website activity logs show the earliest request on the server for the URL https://obr.uk/docs/dlm_uploads/OBR_Economic_and_fiscal_outl.... This request was unsuccessful, as the document had not been uploaded yet. Between this time and 11:30, a total of 44 unsuccessful requests to this URL were made from seven unique IP addresses.

In other words, someone was guessing the correct staging URL before the OBR had even uploaded the file to the staging area. This suggests that the downloader knew that the OBR was going to make this mistake, and they were polling the server waiting for the file to appear.

The report acknowledges this at 2.11:

> In the course of reviewing last week’s events, it has become clear that the OBR publication process was essentially technically unchanged from EFOs in the recent past. This gives rise to the question as to whether the problem was a pre-existing one that had gone unnoticed.


> In other words, someone was guessing the correct staging URL before the OBR had even uploaded the file to the staging area. This suggests that the downloader knew that the OBR was going to make this mistake, and they were polling the server waiting for the file to appear.

The URLs are predictable. Hedge funds would want to get the file as soon as it was available - I imagine someone set up a cron job to try the URL every few minutes.
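Something like this, roughly - a hedged sketch, with a made-up filename since the real one is whatever pattern you guessed:

    // Toy poller: the URL is hypothetical and the interval arbitrary.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        url := "https://obr.uk/docs/dlm_uploads/EXAMPLE_EFO.pdf" // guessed pattern
        for {
            resp, err := http.Get(url)
            if err == nil {
                found := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if found {
                    fmt.Println("it's up - fetch it for real")
                    return
                }
            }
            time.Sleep(5 * time.Minute)
        }
    }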


I used to do this for BOE / Fed minutes, company earnings etc on the off chance they published it before the official release time.

2025-Q1-earnings.pdf - smash it every 5 seconds - rarely worked out; generally a few seconds' head start at best. By the time you pulled up the PDF and parsed the number out of it, the number was on the wires anyway. Very occasionally you got a better result, however.


This is so incompetent.

Given the market significance of the report, it's damn obvious that this would happen. They should have assumed that security via obscurity was simply not enough, and the OBR should have been taking active steps to ensure the data was only available at the correct time.

> Hedge funds would want to get the file as soon as it was available - I imagine someone set up a cron job to try the URL every few minutes.

It's not even just hedge funds that do this; individual traders do it frequently. The practice is commonplace because a small edge like this, with the right strategy, is all you need to make serious profits.


They weren't in any way attempting to rely on security by obscurity.

They didn't assume nobody would guess the URL.

They did take active steps to ensure the data was only available at the correct time.

But they didn't check that their access control was working, and it wasn't.


This setup was not initially approved, see 1.7 in the document:

> 1.7 Unlike all other IT systems and services, the OBR’s website is locally managed and outside the gov.uk network. This is the result of an exemption granted by the Cabinet Office in 2013. After initially rejecting an exemption request, the Cabinet Office judged that the OBR should be granted an exemption from gov.uk in order to meet the requirements of the Budget Responsibility and National Audit Act. The case for exemption that the OBR made at the time centred on the need for both real and perceived independence from the Treasury in the production and delivery of forecasts and other analysis, in particular in relation to the need to publish information at the right time.


Gov.uk does not use some random WordPress plugin to protect information of national significance; doco at https://docs.publishing.service.gov.uk/repos/whitehall/asset...


Part of this is a product of the UK's political culture where expenses for stuff like this are ruthlessly scrutinised from within and without.

The idea of the site hosting such an important document running independently on WordPress, maintained by a single external developer and a tiny in-house team, would seem really strange in many other countries.

Everyone is so terrified of headlines like "OBR spends £2m upgrading website" that you get stuff like this.


It's not an easy call. Sometimes, one or two dedicated and competent people can vastly outperform large and bureaucratic consulting firms, for a fraction of the price. And sometimes, somebody's cousin "who knows that internet stuff" is trousering inflated rates at the taxpayer's expense, while credentialed and competent professionals are shut out from old boys' networks. One rule does not fit all.


It would work if old boys' networks were not the de facto pool that the establishment hires from. The one time the UK government did go out and hire the best of the best from the private sector, regardless of which university they went to, we got GDS, and it worked very well - but that seems to have been an exception to usual practice.


> This suggests that the downloader knew that the OBR was going to make this mistake, and they were polling the server waiting for the file to appear.

I think most of the tech world heard about the Nobel Peace Prize award, so it doesn't seem that suspicious to me that somebody would just poll URLs.

Especially since, even before the Peace Prize, there had been issues with people polling US economic data.

My point is strictly that knowing to poll a URL is not evidence of insider activity.


How does the Nobel Peace Prize figure into this? I seem to be in the camp that didn't hear about the award - not surprising, as I don't follow it, but I also haven't worked out search terms that connect it with the OBR.


Somebody monitored the metadata on files on the Nobel site to figure out who the winner of the Peace Prize was prior to the official announcement, based on which candidate's page had been modified. They then used that to profit in betting markets.

It relates to the OBR because it's another scenario where people, just by polling a site, can figure out information that wasn't supposed to be released yet - and then use that information to profit.

Since a recent instance of polling was in the news, the idea of polling isn't really evidence of an insider trying to leak data rather than somebody just cargo-culting a technique. Plus, polling of financial data was already common.


Thank you for answering that person’s question so clearly. I was also in the dark and this really helped.

Because it was insider traded on Polymarket many hours before it was publicly announced.


The report also says a previous publication was accessed 30 minutes early.


Could this be a problem not with AI, but with our understanding of how modern economies work?

The assumption here is that employees are already tuned to be efficient, so if you help them complete tasks more quickly then productivity improves. A slightly cynical alternative hypothesis could be that employees are generally already massively over-provisioned, because an individual leader's organisational power is proportional to the number of people working under them.

If most workers are already spending most of their time doing busy-work to pad the day, then reducing the amount of time spent on actual work won't change the overall output levels.


You describe the "fake email jobs" theory of employment. Given that there are way fewer email jobs in China, does this imply that China will benefit more from AI? I think it might.


Are there fewer busy-work jobs in China? If so, why? It's an interesting assertion, but human nature tends to be universal.


It could be a side effect of China pursuing more markets, having more industry, and not financializing/profit-optimizing everything. Their economy isn't universally better but in a broad sense they seem more focused on tangible material results, less on rent-seeking.


You could argue there are more. Lots of loss-making SOEs in China.


less money, less adult daycare


As China’s population gets older and more middle class is this shifting to be more like America?

I really don’t know and am curious.


This is a part of it, indeed. Most people (and even a significant number of economists) assume that the economy is somehow supply-limited - it doesn't help that most econ 101 classes introduce markets as a way of managing scarcity - but in reality demand is the limit in 90-ish% of cases.

And when it's not, supply generally doesn't increase as much as it could, because suppliers expect to be demand-limited again at some point and don't want to invest in overcapacity.


Agreed. If you "create demand", it usually just means people are spending on the thing you provide, and consequently less on something else. Ultimately it goes back to a few basic needs, something like Maslow's hierarchy of needs.

And then there's followup needs, such as "if I need to get somewhere to have a social life, I have a need for transportation following from that". A long chain of such follow-up needs gives us agile consultants and what not, but one can usually follow it back to the source need by following the money.

Startup folks like to highlight how they "create value", they added something to the world that wasn't there before and they get to collect the cash for it.

But assuming that population growth will eventually stagnate, I find it hard not to ultimately see it all as a zero-sum game. Limited people with limited time and money - that's limited demand. What companies ultimately do is fight each other for it. And when the winners emerge and the dust settles, supply can go down to meet the demand.


It's not a zero-sum game. Think of it this way: an agronomist visits a farm and instructs the farmer to cut a certain plant for the animals at a certain height instead of whenever. The plant then provides more food for the animals exclusively due to that - no other input in the system. Now the animals are cheaper to feed, so more profit for the farmer and cheaper food for people.

How would this be zero sum?


It would be if demand was limited. Let's assume the people already have enough food, and the population is not growing - that was my premise. Through innovation, one farmer can grow more than all the others.

Since there already was enough food, the market is saturated, so it would effectively reduce the price of all food. This would change the ratio so that the farmer who grows more gets more money in total, and every other farmer gets a bit less.

As long as there is any sort of growth involved - more people, more appetite, whatever - it would be value creation. But without growth, it's not.

At least not in the economic sense. Saving the resources and effort that go into producing things is great for society, on paper. But with the economic system that got us this far, we have no real mechanism for distributing the gains. So we get oversupplying producers fighting over limited demand.

The world is several orders of magnitude more complex than that example, of course. But that's the basic idea.

That said, I'm not exactly an economist, and considering it's a bleak opinion to hold, I'd like to learn something based on which I could change it.


Late comment, but if technology brought down the price of food then people could spend less on food and more on other goods and services. Or the same on higher-quality food. You don't need an increasing population for that. The improvement in agriculture could mean some farmers would have to find other work. So you can have economic growth with a stagnant or falling population. And you can rather easily have economic growth on a per-capita basis with no overall GDP growth, as is common in Japan today.

About the farmer needing to change jobs, in the interview that is the subject of this thread Ilya Sutskever speaks with wonder about humans' ability to generalize their intelligence across different domains with very little training. Cheaper food prices could mean people eat out or order-in more and then some ex-farmers might enter restaurant or food preparation businesses. People would still be getting wealthier, even without the tailwind of a growing population.


Who will eat the extra meat if the population has stopped growing?


Varies depending on the field and company. Sounds like you may be speaking from your own experiences?

In medicine, we're already seeing productivity gains from AI charting leading to an expectation that providers will see more patients per hour.


> In medicine, we're already seeing productivity gains from AI charting leading to an expectation that providers will see more patients per hour.

And not, of course, an expectation of more minutes of contact per patient, which would be the better outcome optimization for both provider and patient. Gotta pump those numbers until everyone but the execs is an assembly-line worker in activity and pay.


I don't think that more minutes of contact is better for anybody.

As a patient, I want to spend as little time with a doctor as possible and still receive maximally useful treatment.

As a doctor, I would want to extract maximal comp from insurance, which I don't think is tied to time spent with the patient, but rather to the number of different treatments given.

Also, please note that in most of the western world medical personnel are currently massively overloaded, so reducing their overall workload would likely lead to better results per treatment given.


> leading to an expectation that providers will see more patients per hour

> reducing their overall workload

what?


It is the delusion of the Homo Economicus religion.

I think the problem is a strong-tie network of inefficiency so vast across economic activity that it will take a long time to erode and replace.

The reason it feels like it is moving slowly is the delusion that the economy is made up of a network of Homo Economicus agents who would instantaneously adopt the efficiencies of automated intelligence.

As opposed to the actual network of human beings, who care about their lives because of a finite existence and who don't have much to gain from economic activity changing at that speed.

That is different, though, from the David Graeber argument - a fun thought experiment that goes way too far and has little to do with reality.


Let's invert that thinking. Imagine you're the "security area director" referenced. You know that DJB's starting point is assumed bad faith on your part, and that because of that starting point DJB appears bound in all cases to assume that you're a malicious liar.

Given that starting point, you believe that anything other than complete capitulation to DJB is going to be rejected. How are you supposed to negotiate with DJB? Should you try?


To start with, you could not lie about what the results were.


Your response focuses entirely on the people involved, rather than the substance of the concerns raised by one party and upheld by six others. I don't care if one of the seven parties regularly drives busloads of orphans off a cliff; if the concerns have merit, they must be addressed. The job of the director is to capitulate to truth, no matter who voices it.

Any personal insults one of the parties lobs at others can be addressed separately from the concerns. An official must perform their duties without bias, even concerning somebody who thinks them the worst person in the world, and makes it known.

tl;dr: sometimes the rude, loud, angry constituent at the town hall meeting is right


I'm a huge Go proponent, but I don't know that I see much in Go's module system that would really prevent supply-chain attacks in practice. The Go maintainers point [1] at the strong dependency-pinning approach, the sumdb system and the module proxy as mitigations, and yes, those are good. However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: a project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.

The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project in this way would likely be noticed pretty quickly. But compromising something widely used but boring? I feel like attackers could get away with that for weeks.
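To be explicit about what I mean, here's a toy (and deliberately harmless) illustration - none of this is from any real package:

    // Importing this package is enough to run init(), before main() and
    // before any exported function is ever called. Even a blank import
    // (import _ "example.com/boring") triggers it.
    package boring

    import "os"

    func init() {
        // An attacker would put their payload here; this stand-in is benign.
        os.Getenv("HOME")
    }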

This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?

[1] https://go.dev/blog/supply-chain


A big thing is that Go does not install the latest version of transitive dependencies. Instead it uses minimal version selection (MVS); see https://go.dev/ref/mod#minimal-version-selection. I highly recommend reading the article by Russ Cox mentioned in the ref. This greatly decreases your chances of being hit by malware released after a package is taken over.
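A toy go.mod to make MVS concrete (module paths and versions invented):

    module example.com/app

    go 1.22

    require (
        example.com/b v1.2.0 // the version you asked for
        example.com/c v1.0.0 // c itself requires b v1.1.0
    )

    // MVS resolves b to v1.2.0: the lowest version satisfying every
    // requirement, never the newest tag on the server. A compromised
    // b v1.3.0 published today simply isn't selected until someone
    // explicitly upgrades to it.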

In Go, access to the OS and to process execution requires certain imports (os, os/exec), and imports must occur at the beginning of the file; this helps when scanning for malicious code. Compare this with JavaScript, where one can require("child_process") or import() at any time.

Personally, I started to vendor my dependencies using go mod vendor, and to diff after dependency updates. In the end, you are responsible for the effects of your dependencies.


In Go you know exactly what code you're building thanks to go.sum, and it's much easier to audit changed code after upgrading - just create vendor dirs before and after updating packages and diff them; send the diff to AI for basic screening if it's >100k LOC, and/or review manually. My projects are massive codebases with 1000s of deps and >200MB stripped binaries of literally just code, and this is perfectly feasible. (And yes, I do catch stuff occasionally, though nothing actively adversarial so far.)

I don’t believe I can do the same with Rust.


You absolutely can, both systems are practically identical in this respect.

> In Go you know exactly what code you’re building thanks to gosum

Cargo.lock

> just create vendor dirs before and after updating packages and diff them [...] I don’t believe I can do the same with Rust.

cargo vendor


cargo vendor


The Go standard library is a lot more comprehensive and usable than Node's, so you need fewer dependencies to begin with.


Aside from the other security features already mentioned, Go also doesn't execute code at compile time, by design.

There is no airtight technical solution, for any language, for preventing malicious dependencies making it into your application. You can have manual or automated review using heuristics, but things will still slip through. Malicious code doesn't necessarily look obvious, like decoding some base64 and piping it into bash; it can be an extremely subtle vulnerability, sprinkled in, that nobody will find until it's too late.

Re dependency cooldowns, I'm hoping Go will get support for this. There's a project called Athens for running your own Go module proxy - maybe it could be implemented there.


> However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: a project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.

You can't, really, aside from full-on code audits. By definition, if you trust a maintainer and they get compromised, you get compromised too.

Requiring GPG signing of releases (even just git commit signing) would help, but that's more work for people distributing their stuff, and inevitably someone will make an insecure-but-convenient way to automate it away from the developer.


If I understand TFA then the defendant is arguing that his message about owning a gun was made less glib by the verbatim inclusion of a tears-of-joy emoji plus a smiling-devil-horns emoji at the end.

That is... an unusual argument to make.


The recent Azure DDoS used 500k botnet IPs. These will have been widely distributed across subnets and countries, so your blocking approach would not have been an effective mitigation.

Identifying and dynamically blocking the 500k offending IPs would certainly be possible technically -- 500k /32s is not a hard filtering problem -- but I seriously question the operational ability of internet providers to perform such granular blocking in real-time against dynamic targets.
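In software terms, at least, half a million exact-match entries is trivial - a quick sketch under obvious assumptions (synthetic addresses, and ignoring that real routers match in TCAM/FIB hardware, which is the real constraint):

    // Feasibility sketch: 500k blocked /32s fit in a hash set costing a
    // few MB, with O(1) lookups per packet source address.
    package main

    import (
        "encoding/binary"
        "fmt"
        "net"
    )

    func main() {
        blocked := make(map[uint32]struct{}, 500_000)
        for i := uint32(0); i < 500_000; i++ {
            blocked[i*8191] = struct{}{} // synthetic botnet addresses
        }
        src := binary.BigEndian.Uint32(net.ParseIP("192.0.2.1").To4())
        _, drop := blocked[src]
        fmt.Println("drop packet?", drop)
    }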

I also have concerns that automated blocking protocols would be widely abused by bad actors who are able to engineer their way into the network at a carrier level (i.e. certain governments).


> 500k /32s is not a hard filtering problem

Is this really true? What device in the network are you loading that filter into? Is it even capable of handling the packet throughput of that many clients while also handling such a large block list?


But this is not one subnet. It is a large number of IPs distributed across a bunch of providers, handled possibly by dozens if not hundreds of routers along the way. Each of those routers won't have trouble blocking the dozen or two IPs currently involved in a DDoS attack.

But this would require a service like the DNSBLs/RBLs which email providers use. Mutually trusting big players would exchange lists of IPs currently involved in DDoS attacks and block them way downstream in their networks, a few hops from the originating machines. They could even notify the affected customers.

But this would require a lot of work to build, and a serious amount of care to operate correctly and efficiently. ISPs don't seem to have a monetary incentive to do that.


It also completely overlooks the fact that some of the traffic has spoofed source IP addresses, and that a bad actor could use automated blackholing to knock a legitimate site offline.


> a bad actor could use automated black holing to knock a legitimate site offline.

No, in my concept the host can only manage the traffic targeted at it and not at other hosts.


That already exists… it's part of Cloudflare's and other vendors' mitigation strategy. There's absolutely no chance ISPs are going to extend that functionality to random individuals on the internet.


Nah, that's delight in someone else's misfortune. This is delight that the misfortune wasn't yours, which is slightly different.


4 years of German and I still don't quite "get" it :^) TY!


We have a saying:

You know how you measure eternity?

When you finish learning German.


Katastrophenverursachererleichterung ("catastrophe-causer relief")


Katastrophenverursacherverlagerungserleichterung ("catastrophe-causer-shifting relief")


Even better


I don't think it's being looked at by the UK government through the lens of "right" or "wrong" but simply as a matter of the rule of law. If a course of action is illegal, they have to avoid it.


The concept of "law" becomes foggy when you're dealing with state-backed criminals. I'm confident that the US intelligence apparatus has properly identified the perps, what they were transporting, and the cooperation they got from their "government."


Just like the IC story about Iraqi uranium refining was a "slam dunk"?

That's not actually to impugn the US IC, exactly. It's more to call out that the IC can do their job thoroughly and correctly and the powers that be will misuse or misrepresent their work product for their own purposes. Unless you know otherwise, we have to consider (among other things) that the US IC has nothing showing these boats are implicated, but the admin proceeded anyway.

You're assuming a level of adherence to norms, best practices, and laws that the current administration has demonstrated they do not do. They're not even bothering to present weak evidence.


Remember that Saddam was not cooperating with UNMOVIC, and not denying that he was building nukes. It seems crazy that he would do this until you recognize that his power depended upon being seen as strong and defiant of "The Great Satan."

Yeah, it turned out that he wasn't building nukes, but he provably did have WMD (chemical weapons), and had used them.

I don't doubt that GWB wanted "to finish the job" that his father started, and may have influenced the IC into producing "evidence" to support his goals. Obama did the same thing with the "Russia Collusion" hoax.

Most civil servants are stand-up people who would never go along with anything illegal or unethical. The politicians are a different breed.


Most civil servants are stand-up people

I will agree with this from personal experience. I've worked with several gov'ts on various projects and found almost everyone to be simply interested in doing their job well.

The story of the Iraq War and how faulty intelligence played into it is very different from that view. You have George Tenet, head of the CIA, telling GWB that the intel was a slam dunk for Iraqi attempts to build nukes when there was no such intel. Colin Powell, the day before his presentation to the UN on the Iraqi nuke program, went to Langley and demanded to review the evidence himself. When shown the paltry shreds they'd collected, he blew up at Tenet, saying "this is all you've got?"

Cheney set up his own mini-intel operation in the White House, headed by Douglas Feith, to look at the "raw" intel and construct their own case, because the CIA analysts were unwilling to produce a National Intelligence Estimate saying the same. It was 100% a case of the admin claiming that the US IC supported their policies when it did not (and the IC wasn't free to publicly dispute it).

The integrity of the IC is not a reason to believe that any admin has IC work product justifying its actions... especially when they won't reveal that evidence.


The problem with revealing the evidence is the risk it poses to the "methods and sources." For this reason, it's highly unusual for CIA to release "raw" intel to anybody. They always want to "process" it. This feature can be exploited by politicians, as it was by President Obama and John Brennan against their political adversaries.

South American governments that refuse to stop the cartels are in effect supporting them. The cartels are powerful, and use any and every means to get what they want. The US recently offered to help Claudia Sheinbaum, and that offer was rejected. Nicolás Maduro is most likely supportive of the cartels because they pay him, and their actions are destructive to his enemies (namely us).

https://www.reuters.com/world/americas/trump-confirms-he-off...


Would you like to buy a bridge?


Having spent over 40 years working with the US IC, I'm very much aware of the extent of their capabilities.


So then you're undoubtedly aware the executed are just lowly mules and nobody of any significance was/is turned into fish food.


If they have the cooperation/encouragement of their government, how is this any different from a military attack? How should we respond to a military attack? Should we try to arrest and prosecute the attackers? If we adopt that attitude, we may just as well eliminate our entire military. What do you suppose would happen then?

