> Being liberal in what you accept comes at _huge_ costs to the entire ecosystem.
Why do you believe that?
Being liberal in what you accept doesn't mean you can't do input validation or you're forced to pass through unsupported parameters.
It's pretty obvious: you validate the input that is relevant to your own use case, you don't throw errors when you stumble upon input parameters you don't support, and you ignore the irrelevant fields.
The law is "be conservative in what you send, be liberal in what you accept". The first one is pretty obvious.
How do you add cost to the entire ecosystem by only using the fields you need to use?
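For what it's worth, a minimal sketch of that stance (all field names hypothetical): validate the fields you actually use, reject them if they're malformed, and silently skip the rest.

```python
# Tolerant input handling: strict about the fields this service depends on,
# indifferent to everything else. Field names are hypothetical.
def handle_request(payload: dict) -> dict:
    try:
        user_id = int(payload["user_id"])
        amount = float(payload["amount"])
    except (KeyError, TypeError, ValueError):
        # Malformed or missing input for the fields we do care about: reject.
        raise ValueError("user_id and amount are required and must be numeric")
    # Unknown fields are neither passed through nor treated as errors.
    return {"user_id": user_id, "amount": amount}

# An unrecognized "client_version" field is simply ignored:
print(handle_request({"user_id": "42", "amount": "9.99", "client_version": "1.3"}))
```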
The problem with Postel's law is that people apply it to interpreting Postel's law. They read it as encouraging you to accept any input and to try to continue in the face of nonsense. They accept malformed input & attempt to make sense of it, instead of rejecting it because the fields they care about are malformed. Then the users depend on that behavior, and it ossifies. The system becomes brittle & difficult to change.
I like to call it the "hardness principle". It makes your system take longer to break, but when it does it's more damaging than it would have been if you'd rejected malformed input in the first place.
> They accept malformed input & attempt to make sense of it, instead of rejecting it because the fields they care about are malformed.
I don't think that's true at all. The whole point of the law is that your interfaces should be robust, and still accept input that might be nonconforming in some way but is still possible to validate.
The principle still states that if you cannot validate input, you should not accept it.
The state of HTML parsing should convince you that if you follow Postel's law in one browser then every other browser has to follow it in the same way.
That's a truism in general. If you're liberal in what you accept, then the allowances you make effectively become part of your protocol specification; and if you hope for interoperability, then everyone has to follow the same protocol specification, which now has to include all of those unofficial allowances you (and other implementors) have paved the road to hell with. If that's not the case, then you don't really have compatible services, you just have services that coincidentally happen to work the same way sometimes, and fail other times in possibly spectacular ways.
I have always been a proponent of the exact opposite of Postel's law: if it's important for a service to be accommodating in what it accepts, then those accommodations should be explicit in the written spec. Services MUST NOT be liberal in what they accept; they should start from the position of accepting nothing at all, and then only begrudgingly accept the inputs the spec tells them they have to, and never more than that.
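A sketch of that spec-first stance, reusing the same hypothetical field names as the earlier example: anything the (imaginary) written spec does not name is grounds for rejection.

```python
# Spec-first validation sketch: ALLOWED_FIELDS plays the role of the
# written spec. Field names are hypothetical.
ALLOWED_FIELDS = {"user_id", "amount"}

def handle_request_strict(payload: dict) -> dict:
    unknown = set(payload) - ALLOWED_FIELDS
    if unknown:
        # No unofficial allowances for clients to come to depend on.
        raise ValueError(f"unknown fields rejected: {sorted(unknown)}")
    missing = ALLOWED_FIELDS - set(payload)
    if missing:
        raise ValueError(f"required fields missing: {sorted(missing)}")
    return {"user_id": int(payload["user_id"]), "amount": float(payload["amount"])}
```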
HTML eventually found its way there after wandering blindly in the wilderness for a decade and dragging all of us behind it kicking and screaming the entire time; but at least it got there in the end.
> The state of HTML parsing should convince you that if you follow Postel's law in one browser then every other browser has to follow it in the same way.
No. Your claim expresses a critical misunderstanding of the principle. It's desirable that a browser be robust enough to support broken but still perfectly parseable HTML. Otherwise it fails to be even usable when dealing with anything but perfectly compliant documents, of which, mind you, there are essentially none.
But just because a browser supports broken documents, that doesn't make them less broken. It just means that the severity of the issue is downgraded, and users of said browser have one less reason to migrate.
The reason the internet consists of 99% broken HTML is that all browsers accept that broken HTML.
If browsers had conformed to a rigid specification and only accepted valid input from the start, then people wouldn't have produced all that broken HTML and we wouldn't be in the mess we are in now.
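To make that dynamic concrete: even Python's standard-library HTML parser behaves like a browser here, walking straight through unclosed tags rather than rejecting them.

```python
# Browser-style leniency in miniature: unclosed <li> and <p> tags are
# parsed without complaint, so nothing ever pressures authors to fix them.
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("open:", tag)

TagLogger().feed("<ul><li>one<li>two<p>broken")  # no error is ever raised
```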
> These aren't "job losses", these are "firings". They aren't unfortunate accidents of external origin that happened to them, they are conscious internal decisions to let people go.
This. They also make it a point to send the message that this particular firing round is completely arbitrary, based on a vague hope that they can somehow automate their way out of the expected productivity hit, and that they are enforcing this cut in spite of stronger sales.
> Probably many reasons for this, but what I've seen often is that once the code base has been degraded, it's a slippery slope downhill after that.
Another factor, and perhaps the key factor, is that contrary to OP's extraordinary claim there is no such thing as objectively good code, or one single and true way of writing good code.
The crispest definition of "good code" is that it's not obviously bad code from a specific point of view. But points of view are also subjective.
Take for example domain-driven design. There are a myriad of books claiming it's an effective way to generate "good code". However, DDD has a strong object-oriented core, to the extent that it's nearly a purist OO approach. But here we are, seeing claims that the core must be functional.
If OP's strong opinion on "good code" is so clear and obvious, why are there such critical disagreements at such a fundamental level? Is everyone in the world wrong, and is OP the poor martyr cursed with being the only soul in the whole world who even knows what "good code" is?
Let's face it: the reason there is no such thing as "good code" is that opinionated people making claims such as OP's are actually passing off "good code" claims as proxies for their own subjective and unverified personal taste. In a room full of developers, if you throw a rock in a random direction you're bound to hit one or two of these messiahs, and no two of them will agree on what good code is.
Hearing people like OP comment on "good code" is like hearing people comment on how their regional cuisine is the true definition of "good food".
The original 2003 DDD book is very 2003 in that it is mired in object orientation to the point of frequently referencing object databases¹ as a state-of-the-art storage layer.
However, the underlying ideas are not strongly married to object orientation, and they fit quite nicely in a functional paradigm. In fact, ideas like the entity/value object distinction are rather functional in and of themselves, and well-suited to FCIS (functional core, imperative shell).
> The original 2003 DDD book is very 2003 in that it is mired in object orientation to the point of frequently referencing object databases¹ as a state-of-the-art storage layer.
Irrelevant, as a) that's just your own personal and very subjective opinion, b) DDD is extensively documented as the one true way to write "good code", which means that by posting your comment you are unwittingly proving the point.
> However, the underlying ideas are not strongly married to object orientation and they fit quite nicely in a functional paradigm.
"Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.
The criticism of anemic domain models, which are elevated to the status of anti-pattern, is more than enough to reject any claim that functional programming is compatible with DDD.
And that's perfectly fine. Not being DDD is not a flaw or a problem. It just means it's something other than DDD.
But the point that this proves is that there is no one true way of producing "good code". There is no single recipe. Anyone who makes this sort of claim is either very naive and clueless, or invested in enforcing personal tastes and opinions as laws of nature.
> "Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.
Yes, that is how terminology evolves past a rigid definition laid down in a different era of best-practice coding beliefs. I'll admit I had trouble mapping the DDD OO concepts from the original book(s) to the systems I work on now, but there are more recent resources that apply the spirit of DDD, Domain Separation, and Domain Modeling outside of OO contexts. You're right that there is no single recipe: take the good ideas and practices from DDD and apply them as appropriate.
And if the response is "that's not DDD", well you're fighting uphill against others that have co-opted the buzzword as well.
> Irrelevant, as a) that's just your own personal and very subjective opinion
Yes? And it's just your personal, subjective opinion that this is irrelevant. Most meaningful judgments are subjective. Get used to it.
> DDD is extensively documented as the one true way to write "good code"
Who said this? I've seen it described as a good way to write code, and as a way of avoiding problems that can crop up in other styles. But never as the only way to write good code.
> "Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.
No it doesn't. What? The only way I can make sense of what you're saying is if you're cynical toward the very concept of analyzing ideas, which is perhaps the most anti-intellectual stance I can imagine.
> The criticism on anemic domain models [...] is more than enough to reject any claim on how functional programming is compatible with DDD.
Why would an author's criticism of a certain style of OOP make a methodology they have written about incompatible with non-OOP paradigms? That's like saying that it's impossible to make strawberry ice cream because the person who invented ice cream hates strawberries.
> But the point that this proves is that there is no one true way of producing "good code".
There's no "one true way" to build a "good bridge," but that doesn't mean bridge design is all a matter of taste. Suspension bridges can carry a lot more than beam bridges; if you want to drive 18-wheelers across a wide river, a beam bridge will collapse, while a suspension bridge will probably be "good."
> They might be bitter, but evangelizing Amazon products is their most marketable skill.
I think you are talking out of ignorance and spite. Most of the services used by Amazon employees are internal services that may or may not be on par with the state of the art. Apparently a big chunk of Amazon doesn't even use AWS at all, and instead uses proto-cloud compute services that are a throwback to a 90s take on cloud computing.
> Apparently a big chunk of Amazon doesn't even use AWS at all, and instead uses proto-cloud compute services that are a throwback to a 90s take on cloud computing.
Is there more information on this somewhere? I had leadership telling me and a few others that we needed to replicate something on par with AWS for internal use (with about 10 devs and less than a year of timeline). I thought this sounded crazy, and it would be interesting if Amazon themselves didn't even have what was being asked of us.
> I thought this sounded crazy, and it would be interesting if Amazon themselves didn’t even have what was being asked of us.
Amazon has multiple incarnations of this. As legend has it, AWS was an offshoot of Amazon's internal cloud infrastructure, created to monetize it and amortize the investment in bare-metal infrastructure. They partitioned their networks for security reasons, and for a few years the two infrastructures evolved independently. Then AWS was a huge success and took on a life of its own. Only relatively recently did Amazon start to push to drop the internal infrastructure and put all their eggs in AWS in general, and serverless solutions in particular.
Well, there's not going to be much because it would violate NDAs, but nothing is 'elastic'.
Somewhere, someone has to buy a set amount of servers, based on a running capacity projection, and build those into usable machines. The basis of a datacenter is an inventory system, a DHCP server, a TFTP server, and a DNS server, which together manage the lifecycle of hardware servers. That's what everyone did at one point, and the best of them built themselves tooling.
What Amazon has is built on what was available at the time, both in tooling and in the existing systems they had to integrate with. You almost certainly don't have to build anything that complex. Additionally, you can get an off-the-shelf DCIM that integrates with your DHCP and DNS servers, and trigger Ansible runners in your boot sequences to handle the lifecycle steps. It's considerably easier to do now than it was 15 years ago.
While they don't use AWS specifically for a lot of stuff, the internal tooling can still build thousands of boxes an hour, though they don't really pay for UI work on that stuff.
You can put hosts in a fleet, tell it the various software sets you want installed, and click go, and you'll have a fleet when you come back. So don't think that what you're being asked to build is impossible, or that it isn't in use under every single major cloud or VPS provider.
The slightly harder part is deciding what you're going to give devs for a front end. Are you providing raw hosts, VMs, container fleets, all of it? How are you handling multi-zone or multi-region...? How are you billing or throttling resources between teams?
The beauty of this is that you get a lot of stuff for free these days. You can build out a fleet, provide a few build scripts that can be pulled into some CI/CD pipeline in your code forge of choice, and you don't really need to build a UI.
Provisioning tooling is hard, but it's a lot easier now than it was 15-20 years ago, and all the parts are there. I've built it several times on very small teams. I would have loved to have 10 devs to build something like that, but the reality is that you can get 80% of the way with a little glue code and a few open-source servers.
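For intuition, a toy sketch of the lifecycle described above, with entirely illustrative names and states (this is not any real DCIM's API):

```python
# Toy provisioning lifecycle: an inventory record is walked from racked
# hardware to an in-service fleet member. States mirror the comments above:
# inventory -> DHCP/TFTP network boot -> install -> DNS-published service.
from enum import Enum, auto

class HostState(Enum):
    RACKED = auto()        # known MAC in inventory, no OS yet
    PXE_BOOTING = auto()   # DHCP + TFTP hand the host its network-boot image
    INSTALLING = auto()    # OS and the requested software set are laid down
    IN_SERVICE = auto()    # DNS record published, host joins its fleet

def provision(host: dict, software_set: list[str]) -> dict:
    host["packages"] = software_set
    for state in HostState:          # Enum iteration follows definition order
        host["state"] = state
        print(f"{host['name']}: {state.name}")
    return host

provision({"name": "web-001", "mac": "aa:bb:cc:dd:ee:ff"}, ["nginx", "node_exporter"])
```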
> Not surprising given folks have been saying Amazon/AWS has been a bloated mess for a while now.
Who exactly do you think is saying this? Because from what I understand, so far Amazon has been decimating teams while overworking those who remain, and cutting projects by cancelling maintenance and feature work.
Like a lot of big tech companies, Amazon is a small number of teams with profitable products and a whole bunch of other things that don't make money. Events like this are when the teams not contributing to the bottom line are cleaned up.
Reminder: 'cleaned up' means lives ruined, sick people losing the ability to afford insurance (COBRA is insanely expensive, especially considering you just lost your job), homes lost, families forced to move and children losing their friends or being forced into new schools, and in some cases suicide.
> Like a lot of big tech companies Amazon is a small number of teams with profitable products and a whole bunch of other things that don’t make money.
I think that's a simplistic view of the issue. At Amazon, each team owns at best specific features embedded in products. Some projects, such as e-readers, are there as loss leaders to support cash cows such as its ebook market. By your simplistic account, Amazon would have cut zero employees from its books organization, as that business is booming and is a profit center. But that doesn't match reality.
Also note that you are making that unfounded claim while commenting on news that Amazon is going to focus its firing round on HR. Is HR a profit center now?
> Events like this are when the teams not contributing to the bottom line are cleaned up.
Except that's bullshit. Amazon decimated teams by firing new arrivals and by transferring projects out of the US into Europe and Asia. That has nothing to do with efficiency or performance.
What do you think "Kindle" is? Is it a specific device? Is it Kindle for Web? Is it the Android or iPhone apps? Is it Kindle for Windows or Kindle for Mac? Among these, can you count how many are paid?
> The cuts beginning this week may impact a variety of divisions within Amazon, including human resources, known as People Experience and Technology, devices and services and operations, among others, the people said.
They should be completely separate. If they were two independent companies, a low margin distribution and logistics company on one side and a high margin software services company on the other then nobody would suggest merging the two together.
> Which cloud company can casually find 100B cash in a year?
AWS. Because AWS reports close to $11B/quarter in operating income, which is over half of Amazon's total, and AWS owns the cloud computing market, on which the whole world runs.
> Amazon includes AWS. They’re not “separate companies.”
Actually, they are. Perhaps what is causing your confusion is that other parts of Amazon, such as Ring or Rivian, are also separate companies, whereas parts such as Alexa and Amazon Music aren't.
By your definition then every little part of “Amazon” is technically a separate “company” including every geo. For the purposes of the discussion at hand they’re all the same. Amazon PXT and finance is the same team as AWS PXT and finance.
> By your definition then every little part of “Amazon” is technically a separate “company” including every geo.
No. My definition is Amazon's actual organization chart as a holding company. AWS is an independent first-level branch whose leadership reports directly to Andy Jassy, who was AWS's CEO before replacing Bezos. A similar branch is Worldwide Consumer, which groups what you think Amazon actually is: the online store, Prime, books, devices, etc.
> It doesn't solve the problem: it's at the end of the day, when solar has ramped down, that the crisis happens. It's the duck curve, where it's still hot and air conditioning is still running hard.
Isn't that scenario a problem only when the output from solar is insufficient to meet the aggregate demand?
From a naive point of view, it looks like this issue would be easily mitigated if supply from solar were increased enough to allow energy to be stored during peak hours and fed back into the grid around sunset. Why is this scenario being ignored in a thread on how California is investing in battery energy storage?
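A toy version of that arithmetic, with entirely made-up hourly numbers and battery losses ignored: the surplus you can bank midday has to cover the evening deficit, and if it doesn't, more storage alone won't help because there is nothing to charge it with.

```python
# Hypothetical hourly profile, 12:00-18:00, in MW (so each slot is 1 MWh).
demand_mw = [30, 30, 35, 45, 60, 55, 40]   # A/C keeps running into the evening
solar_mw  = [50, 55, 50, 40, 20,  5,  0]   # output ramps down toward sunset

surplus = sum(max(s - d, 0) for s, d in zip(solar_mw, demand_mw))
deficit = sum(max(d - s, 0) for s, d in zip(solar_mw, demand_mw))
print(f"bankable midday surplus: {surplus} MWh, evening shortfall: {deficit} MWh")
# -> 60 MWh banked vs. 135 MWh needed: with this (made-up) profile you must
#    overbuild solar, not just batteries, to ride through the duck curve.
```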
> I tried this and it is never as smooth as described.
I think your comment shows some confusion that is either the result or the cause of some negative experiences.
Starting with GitHub: the primary reason it "just works" is that GitHub, like any SaaS offering, is taking care of basic things like managing servers, authorization, access control, etc.
Obviously, if you have to set up your own ssh server, things won't be as streamlined as clicking a button.
But that's obviously not the point of this post.
The point is that the work you need to do to set up a Git server is far less than you might expect, because you already have most of the pieces in place, and the ones you don't are low-hanging fruit.
This should not come as a surprise. Git was designed as a distributed version control system. Being able to easily set up a stand-alone repository was a design goal. This blog post covers providing access through ssh, but you can also create repositories at any mount point of your file system, including on USB pen drives.
And, yes, "it just works".
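For a sense of scale, here is a minimal sketch of the entire "server" setup, driven from Python purely for illustration. The host, user, and paths are hypothetical, and it assumes you already have working ssh access to git@example.com.

```python
# One-time setup of a remote Git repository over plain ssh.
import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# 1. Create a bare repository on the server. This is the entire "server".
sh("ssh", "git@example.com", "git init --bare /srv/git/project.git")

# 2. From an existing local repository, add it as a remote and push.
sh("git", "remote", "add", "origin", "git@example.com:/srv/git/project.git")
sh("git", "push", "-u", "origin", "main")

# The same idea works for any mounted path, e.g. a USB stick:
#   git init --bare /mnt/usb/project.git
```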
> The official Git documentation for example has its own guide that I failed to get to work. (it is vastly different from what OP is suggesting)
I'm sorry, but the inability to get through the how-to guide you cited has nothing to do with Git. The guide only does three things: create a user account, set up ssh access to that account, and create a Git repository. If you fail to create a user account and set up ssh, your problems are not related to Git. If you created a user account and successfully set up ssh access, all that is missing is cloning the repo/adding a remote. If you struggle with this step, your issues are not related to Git either.
> How do you concern yourself with strategy all day? Just sit down and think about it?
I'm not OP, but whenever your product is implemented by more than one team, you will also need to coordinate and set strategic goals, as well as accompany and steer each team toward where its infrastructure/tech stack/systems architecture needs to be.
If you do not offer guidance and set a technical direction, each engineer on each team will happily fill the void and do their best to scratch their own itch, and the whole company ends up with an unmaintainable big ball of mud.
Let's put it differently: what do you expect the output of a company to be if no one is responsible for things like directing the company's R&D effort, coordinating and specifying the company's tech roadmap, or even overseeing product development?