Hacker News | recipe19's comments

I've heard that a number of times, but the vast majority of people who get into CS do it because they want a high-paying tech job, for which most of what they'll learn at university is borderline useless (and to the extent that cutting-edge CS research happens, academia nowadays usually trails the industry).

The problem, arguably, is that we don't have reputable trade schools that would actually teach what the students need. But if that changes, I think some CS departments will be in for a rude awakening.


There is such a thing as a Software Engineering degree.

The fact that people keep buying the wrong product (Computer Science degrees) and universities keep selling it doesn't mean that there's something wrong with Computer Science.


Sort of, but if the objective is to get hired for a hardware design job, I think that even the font aside, the overall aesthetics of the PCB aren't great. There are several places where component text overlaps other markings, some components are slightly offset from others for no reason, the pattern of stitching vias is pretty chaotic... I think it's actually the software part of it that's most worthwhile.


These aren't annual pay packages. It's some "can't retire on that" base salary plus a promise of gradually vesting equity on a multi-year schedule. For public companies, you'll get that amount if you hang around for x years and there is no sudden decline in market price. For non-public companies (OpenAI), the equity is more pie-in-the-sky.
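
To illustrate the multi-year mechanics, here's a minimal sketch of a typical big-tech vesting schedule (4 years with a 1-year cliff; the numbers are generic assumptions, not any specific offer):

```python
def vested_fraction(months: int, total_months: int = 48, cliff: int = 12) -> float:
    """Fraction of the equity grant vested after `months` of tenure.

    Nothing vests before the cliff; after that, vesting is linear over
    the full schedule and caps at 100%.
    """
    if months < cliff:
        return 0.0
    return min(months / total_months, 1.0)
```

Leave at month 6 and you get nothing; stay the full 48 months (and hope the stock holds) to receive the headline amount.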


I'd expect a few million in annual pay; anyone could retire on that. Whether someone wants to is a different matter, but having anything beyond $1M available is definitely sufficient, especially if you're sub-40.


Most sources confirmed that the effective yearly pay would be on the order of $100M, unlike the normal tech comp breakdown.

OpenAI doesn't do the usual equity for employees; they famously do profit sharing instead.


> an outdated relic kept alive by the electoral college.

And yet, farmers are a vocal and critical political bloc in every other EU country, too.

Farming is just important. Not as much because it employs a large portion of the population, but because it keeps a large portion of the population alive. It is the original industry that's "too big to fail" - if you let it, you get famine.


Very well said. There is no alternative to having a successful farming industry.


I get the broader point, but the infosec framing here is weird. It's a naive and dangerous view that the defense efforts are only as strong as the weakest link. If you're building your security program that way, you're going to lose. The idea is to have multiple layers of defense because you can never really, consistently get 100% with any single layer: people will make mistakes, there will be systems you don't know about, etc.

In that respect, the attack and defense sides are not hugely different. The main difference is that many attackers are shielded from the consequences of their mistakes, whereas corporate defenders mostly aren't. But you also have the advantage of playing on your home turf, while the attackers are comparatively in the dark. If you squander that... yeah, things get rough.


Well, I think his example (locked door + open window) makes sense, and the multiple-LAYERS concept applies to the things an attacker has to get through to reach the jackpot. But doors and windows are on the same layer, and there the weakest link totally defines how strong the chain is. A similar example in the web world: your main login endpoint is very well protected, audited, and uses only strong authentication methods, and then you have a `/v1/legacy/external_backoffice` endpoint that is completely open, with no authentication, giving access to a forgotten machine on the same production LAN. That would be the weakest link. You might then have other internal layers to mitigate or stop an attacker who got access to that machine, and that would be the point of "multiple layers of defense".
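
To make the "same layer" point concrete, here is a toy Python sketch (no real framework; the token and internal host are made up for illustration) of a hardened front door next to a forgotten side entrance:

```python
API_TOKEN = "s3cret"  # placeholder credential, for illustration only

def handle_request(path: str, headers: dict) -> tuple:
    """Dispatch a request to one of two endpoints; returns (status, body)."""
    if path == "/v1/login":
        # The audited front door: strong auth strictly enforced.
        if headers.get("Authorization") != f"Bearer {API_TOKEN}":
            return 401, "unauthorized"
        return 200, "welcome"
    if path == "/v1/legacy/external_backoffice":
        # The open window: no auth check at all -- the weakest link
        # on this layer, regardless of how strong the login path is.
        return 200, "internal host 10.0.0.42 reachable"
    return 404, "not found"
```

An attacker never needs to touch the login endpoint; the perimeter is exactly as strong as the legacy route.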


Or a single logging jar that will execute some of its message contents, sitting inside all your DMZ layers in the application context.
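
A toy Python sketch of that anti-pattern (not log4j itself; the lookup table is invented): a logger that expands `${...}` directives found anywhere in the message, including in attacker-supplied input that was only meant to be recorded.

```python
import re

# Stand-in for log4j-style lookups (env, JNDI, etc.). In the real CVE,
# an attacker-controlled JNDI lookup could trigger remote class loading.
LOOKUPS = {"env:USER": "alice"}

def naive_log(message: str) -> str:
    # Expands ${...} directives anywhere in the message -- even when the
    # "message" is untrusted user input that merely passed through a logger.
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: LOOKUPS.get(m.group(1), "?"),
                  message)
```

The fix, as with log4j, is to treat logged input as inert data and never re-interpret it.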


Poor log4j...


> It's a naive and dangerous view that the defense efforts are only as strong as the weakest link.

Well, to be fair, you added some words that are not in the post:

> The output of a blue team is only as strong as its weakest link: a security system that consists of a strong component and a weak component [...] will be insecure (and in fact worse, because the strong component may convey a false sense of security).

You added "defense efforts". But that doesn't invalidate the claim in the article, in fact it builds upon it.

What Terence is saying is true, factually correct. It's a golden rule in security. That is why your "efforts" should focus on overlaying different methods, strategies and measures. You build layers upon layers, so that if one weak link gets broken there are other things in place to detect, limit and fix the damage. But it's still true that often the weakest link will be an "in".

Take the recent example of Cognizant helpdesk staff resetting passwords for their clients without any check whatsoever. The clients had "proper security", with VPNs, 2FA, and so on, but the recovery mechanism was outsourced to a helpdesk that turned out to be the weakest link. The attackers (allegedly) simply called, asked for credentials, and got them. That was the weakest link, and it got broken. According to the complaint, the attackers then gained access to internal systems and managed to gather enough data to call the helpdesk again and reset the 2FA for an "IT security" account (different from the first one). That worked as well. They say they detected the attackers within 3 hours and terminated their access, but that's "detection and mitigation", not "prevention". The attackers were already in, rummaging through their systems.

The fact that they had VPNs and 2FA gave them "a false sense of security", while their weakest link was "account recovery". (Terence is right). The fact that they had more internal layers, that detected the 2nd account access and removed it after ~3 hours is what you are saying (and you're right) that defense in depth also works.

So both are right.

In recent years the infosec world has moved from selling "prevention" to promoting "mitigation". Because it became apparent that there are some things you simply can't prevent. You then focus on mitigating the risk, limiting the surfaces, lowering trust wherever you can, treating everything as ephemeral, and so on.


I'm not a security person at all, but this comment reads against the best practices I've heard, like that the best defense is using open-source, well-tested protocols with an extremely small attack surface to minimize the space of possible exploits. Curious what I'm not understanding here.


Just because it’s open source doesn’t mean it’s well tested, or well pen tested, or whatever the applicable security aspect is.

It could also mean that attacks against it are high value (because of high distribution).

Point is, license isn’t a great security parameter in and of itself IMO.


This area of security always feels a bit weird because ideally, you should think about your assumptions being subverted.

For example, our development teams are using modern, stable libraries in current versions, have systems like Sonar and Snyk around, blocking pipelines for many of them, images are scanned before deployment.

I can assume this layer to be well-secured to the best of their ability. It is most likely difficult to find an exploit here.

But once I step a layer downwards, I have to ask myself: Alright, what happens IF a container gets popped and an attacker can run code in there? Some data will be exfiltrated and accessible, sure, but this application server should not be able to access more than the data it needs to access to function. The data of a different application should stay inaccessible.

As a physical example: a guest in a hotel room should only have access to their own fuse box at most, not the fuse box of their neighbours. A normal person (i.e., not a youtuber with big eyebrows) wouldn't mess with it anyway, but even if they start messing around, they should not be able to affect their neighbours.

And this continues: What if the database is not configured correctly to isolate access? We have, for example, isolated certain critical application databases into separate database clusters - lateral movement within a database cluster requires some configuration errors, but lateral movement onto a different database cluster requires a lot more effort. And we could go even further. Currently we have one production cluster, but we could isolate that into multiple production clusters which share zero trust between them. An even bigger set of boundaries an attacker has to overcome.
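
A minimal sketch of that separation, with hypothetical app and cluster names: lateral movement within a cluster only takes a configuration error, while crossing clusters has no shared trust to exploit at all.

```python
# Hypothetical mapping of applications to database clusters.
CLUSTERS = {
    "general": {"shop", "blog"},   # shared cluster: isolation is per-database config
    "critical": {"payments"},      # critical apps: physically separate cluster
}

def cluster_of(app: str) -> str:
    """Return the cluster an application's database lives on."""
    for name, members in CLUSTERS.items():
        if app in members:
            return name
    raise KeyError(app)

def same_cluster(a: str, b: str) -> bool:
    # Within one cluster, a misconfiguration can allow lateral movement;
    # across clusters, there is zero shared trust between the systems.
    return cluster_of(a) == cluster_of(b)
```

A popped "shop" container might reach "blog" through a config error, but "payments" is a different boundary entirely.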


But "defense in depth" is a security best practice. I'm not following exactly how the gp post is reading against any best practices.


Defense in depth is a security best practice because adding shit to a mess is more feasible than maintaining a simple stack. "There are always systems you don't know about" reflects an environment where one person doesn't maintain everything.


No, defense in depth is a best practice because you assume that each layer can fall. It is more practical to have many layers that are very secure than to have one layer that has to be perfectly secure.


I think you are confusing “security through obscurity” and “defense in depth”.

You can add layers of high-quality simple systems to increase your overall security exponentially; think of using a VPN behind Tor, etc.
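
A back-of-the-envelope illustration of why independent layers compound (the 10% figure is made up, and real layers are rarely fully independent):

```python
def breach_probability(p_per_layer: float, n_layers: int) -> float:
    # With independent layers, an attacker must bypass every one of them,
    # so the combined probability shrinks exponentially with layer count.
    return p_per_layer ** n_layers
```

With a 10% bypass chance per layer, three layers take the combined odds down to one in a thousand.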


Security person here. Open sourcing your entire stack is NOT best practices. The best defense is defense in depth, with some proprietary layers unknown to the attacker.


Who have you been listening to?


It should be possible to add layers without increasing attack surface.


I think it's just a poorly chosen analogy. When I read it, I understood "weakest link" to be the easiest path to penetrate the system, which will be harder if it requires penetrating multiple layers. But you're right that it's ambiguous and could be interpreted as a vulnerability in a single layer.


Isn't offense just another layer of defense? As they say, the best defense is a good offense.


They say this about sports, which is (usually) a zero-sum game: If I'm attacking, no matter how badly, my opponent cannot attack at all. Therefore, it is preferable to be attacking.

In cyber security, there is no reason the opponent cannot attack as well. So the fact that my red team is attacking is not a reason I don't need defense, because my opponent can also attack.


My post really was in the context of real-time strategy games. It's very, very possible to attack and defend at the same time no matter the skill of either side. Offense and defense aren't mutually exclusive, which is kinda the point of my post.


I hate to say this, but it seems like a pretty clear case of using the wrong tool for the job.

There's no conceivable reason to cut something this simple on a large-format CNC mill. It's literally just a couple of straight cuts. It's not going to be faster (not with standard endmills), not going to be easier, and it's not going to be cheaper unless you're making them by the thousands.

You can likely buy S4S lumber for less than an oversized sheet of furniture-grade 1" plywood.


If the machine is available and it's a hobby project, it's OK, but you're right that this is not a cheap or easy way to do it.

But as a mechanical engineer, the whole project reads like such a "software" approach to me, starting from turning the selected tool into a requirement. Then, rather than just powering through a one-off design in SolidWorks, the author spends a lot of time looking for automation tools as if shopping for a library, points out that this approach feels like coding in Rust, gives up, and even blogs about it.


Craftsmanship has been replaced by 3d printing.


Not at all; if anything, good craftsmanship is commanding higher prices from those who appreciate it as it becomes relatively scarce.

Some guy makes a wooden bed: https://www.youtube.com/watch?v=sL96mw1uCmA

Lino Tagliapietra still makes a packet at 90 just for orchestrating others making non functional glassware.


I guess I should add the qualifier: for the middle class.

I grew up in homes that were well built with well made appliances etc, but it wasn’t fancy luxury, it was just well built.

Now it’s either junk or unaffordable luxury priced (and maybe not even craftsman made!)


We grew up with overbuilt furniture, trailers, radio masts, gas fittings, plumbing, etc. that my father built. A whole lot of knitted jumpers, wool socks, bed covers, etc. from my mother and the aunts, with a freezer full of home-cooked meals scaled for shearing teams and put away for later.

Most of that carries on, and we've got good relations with a slew of people that are solid craftspeople who put their best work in galleries for the extra big bucks.

The good stuff is as accessible to us normies as good software is to the HN crowd .. it costs less or nothing if you're in the maker circles swapping food, yarns, and other goods.

FWIW, I was a silent partner and occasional assistant in a glass studio / wood shop for two decades, and that helps - that was some money (way back when you could get a lot of land for a lot less than today) and a lot of sweat equity modifying buildings, landscaping, etc.


You can make some plausible arguments against glass. It scratches more easily and doesn't shimmer as much. But synthetic sapphire is in the same league and costs a lot less.

The modern-day aesthetic of diamonds is just that they are expensive. They're not distinguished by utility, quality, or appearance from cheaper products. The ultimate status symbol, but also obviously a bit of an issue...


Huh? Chinese citizens are free to apply for jobs in the Western world, and most companies are happy to hire them. Also, while the Chinese intelligence apparatus undoubtedly has easier access to Chinese nationals, the vast majority of these workers are not part of a state-run syndicate to circumvent sanctions (or worse).

The NK thing is a fundamentally different scenario: you have people you're not allowed to hire lying to you and stealing identities to get hired. That's an obvious problem in itself, and the fact that it's orchestrated by the NK government to benefit the regime is only making it worse.

There are other parties that probably do the same, but NK is the industry leader, so to speak.


In principle. But in practice, the industry doesn't need nearly as many mathematicians as it does software engineers, and almost no one is getting into CS out of the love of math. CS coursework reflects that. Here are some important algorithms and data structures, here's how you write Python, good luck at big tech!


My CS program (at Purdue) was from the math department. We didn't even start designing real programs until the 4th semester (and that was in Forth or C).

At that time, if you wanted to do application programming, you took software engineering (OO Pascal and C++) or computer technology (Java) from either tech or engineering schools.


What you're describing is the domain of a very, very small number of hobbyists with very deep pockets (plus various govt-funded entities).

The vast majority of hobby astrophotography is done pretty much as the webpage describes it, with a single camera. You can even buy high-end Canon cameras with IR filters factory-removed specifically for astrophotography. It's big enough of a market that the camera manufacturer accommodates it.


> What you're describing is the domain of a very, very small number of hobbyists with very deep pockets

Sort of. The telescope used for the Dumbbell nebula captures featured in the article is worth around $1,000, and his mount is probably $500. A beginner cooled monochrome astrophotography camera is around $700, and if you want filters and a controller, another $500.
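
Tallying those figures (all approximate USD; the line items are just the ones named above):

```python
# Approximate entry costs for a cooled-mono setup, per the figures above.
costs = {
    "telescope": 1000,
    "mount": 500,
    "cooled mono camera": 700,
    "filters + controller": 500,
}
total = sum(costs.values())
print(total)  # all-in, in USD
```

Roughly $2,700 all-in: real money, but nothing like "very deep pockets".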

There are quite a few people in the world doing this, upwards of 100K:

https://app.astrobin.com/search

Various PixInsight videos have +100K views: https://youtu.be/XCotRiUIWtg?si=RpkU-sECLusPM1j-&utm_source=...

Intro to narrowband also has 100K+ views: https://youtu.be/0Fp2SlhlprU?si=oqWrATDDwhmMguIl&utm_source=...


Some even scratch off the Bayer pattern of old cameras.


You don't need very big pockets for that.

Today you can find very affordable monochromatic astrophotography cameras, and you can also modify cheap DSLRs or even compact cameras to remove their IR/UV/low-pass filters. You can even insert a different semi-permanent internal filter after that (like an IR or UV band-pass).

I've done a Nikon D70 DSLR and a Canon Ixus/Elph compact.

Some cameras are very easy, some very difficult, so better to check some tutorials before buying a camera. And there are companies that will do the conversion for you for a few hundred dollars (probably $300 or $400).


You can even do the conversion DIY.


Yep. I did both myself, as I was using old cameras that I had hanging around, and if I had sent them for conversion it would have cost more than the cameras themselves.

Conversions done in places like Kolari or Spencer run about $300-500 depending on the camera model.

If I were to buy a brand new A7 IV or something like that, I would of course ask one of those shops to do it for me.


And the entire earth observation industry, which doesn't look the same way but uses the same base tech stack.

