You sound like someone who hasn't worked very long in the industry. Being realistic about failure lets you build better products. Simplicity, maintainability, and most importantly replaceability allow the software to fail without taking the business down with it.
Lead dev here. A significant portion of the JDK and the JVM is written in Java. There is much that happens in the JVM where GC is a good thing. Classes are constantly being instantiated and then discarded. Having those discarded classes GC'd is a benefit indeed.
This is only because of the nature of dynamic linking. Have a statically linked executable and you should be fine. Not that it should be an issue to get old software to run anyway: you simply need to download the dependent library versions.
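For the curious, here is a quick way to see the difference (the binary names below are made up, just to illustrate): `ldd` lists the shared libraries a dynamically linked binary needs at runtime, while for a fully static build it has nothing to report.

    # hypothetical binaries, purely for illustration
    ldd ./dynamically-linked-app    # prints the .so files it needs at runtime
    ldd ./statically-linked-app     # prints "not a dynamic executable"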
Anyone who says this, I feel, hasn't worked very much with software. Not that you should need to do that yourself; that's up to whoever distributes the executable.
What you mean is: the only stable ABI in Linux is the Linux kernel’s itself.
Windows is the opposite: the only stable ABI is the dynamically linked user space ABI. So yes, it’s perfectly possible to have a stable dynamic ABI across a dynamically linked boundary.
That is completely irrelevant to the discussion. The point is that both systems have very stable ABIs. Windows has a somewhat higher-level stable ABI, but as a result it is a much wider surface to keep compatible, it breaks much more often, it requires a lot more hacks to keep it stable over time (program-specific hacks kept around for decades), etc.
This is the point of difference: the layer at which each is stable. NOT whether Linux is stable.
The nature of stability literally is the discussion; I replied to a post that blamed the lack of a Linux equivalent to Win32 on dynamic linking. That the stable ABI you get on Linux requires you to bundle literally every dependency you have as if distros don't exist... And then still have issues because the kernel ABIs for graphics are entirely GPU dependent...
The two approaches are definitively not the same (as you claim), and the significant shortcomings of Linux's approach are why Win32 is becoming the ABI devs target even on Linux.
>That the stable ABI you get on Linux requires you to bundle literally every dependency you have as if distros don't exist...
This is not true. If your target platform is a distribution then you can assume that distribution's guarantees hold true.
What you cannot do is target "GNU/Linux" broadly and assume every glibc-based system running on top of the Linux kernel has all the libraries you want to depend on.
Pick a platform and target it. That can be the kernel. That can be glibc+the kernel (hey look GNU/Linux really is useful terminology), that can be RHEL9 or it can be Debian xyz. But don't pick "Linux" then make assumptions about userspace.
Your example doesn't make sense. Linux distributions have always had this trade-off: binaries provided through the package manager work with the libraries provided by the same package manager. I have a bunch of older GOG or Humble Bundle Linux releases of games that still work fine on my system because (Windows-style) they carry around all of their libraries with them. Linking against Xorg doesn't make sense, and applications linked statically against libX11 will work perfectly fine even with Xwayland.
https://www.x.org/wiki/Releases/
https://www.gtk.org/docs/installations/linux/
I mean it will probably not be painless, and other applications you run might break*, but Xorg is relatively stable.
Libs are out there and free to get. Usually people are arguing that the convenience isn't there, not that it's not possible.
* if you don't sandbox this a bit with custom lib paths
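For what it's worth, a rough sketch of what that footnote looks like in practice (the binary and directory names are made up), assuming the old libraries are shipped alongside the game:

    # keep the game's original shared libraries in a private directory next to the binary,
    # and point the dynamic loader at them before the system-wide ones
    LD_LIBRARY_PATH="$PWD/libs:$LD_LIBRARY_PATH" ./old-game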
What I am disputing is how this comes off to a game developer; 5 years from now, heck, 2 years from now when their games require library surgery to keep running... that’s just an awful experience.
That is not what a developer would consider a stable ABI. They could look into Flatpak - but look at what’s trending on Hacker News today - a rant against Flatpak.
Win32 over Proton is the winner for them; all other proposed solutions are hilariously naive and optimistic about what game development requires. No game developer is ever going to individually package, and consistently repackage, their game for 20 distributions. That's never going to happen.
Well, you sure made an effort to exclaim how hard it would be. If a developer had an install guide with links to dependencies, or mirrors of those dependencies, it wouldn't be very hard; they should have that internally for their dev/testing anyway. Do Windows devs not track their dependencies? Relying only on Win32? Who's the naive one?
My memory might be wrong, but I had issues with native Linux games that had a level editor based on GTK and Python; I could not get them to run three years after launch. I do not claim it is impossible, just that I could not do it with a few hours of effort.
I don't understand your example; plenty of languages add new keywords without breaking backwards compatibility. It's removing a keyword that would cause such an issue.
I guess some languages get around this by having a distinction between functions and keyword functions, the latter not having the () braces in the syntax. But really, if you're defining functions as keywords, you should just put them in the standard library.
Has Python really, though? My company still has a bunch of 2.7 lying around that no one is touching.
I would like to flip your question on its head and ask: why does any language need a breaking change, ever? Might as well create a new language in that case.
While I can't say for sure, one of the reasons I seem to recall from the Python 3 transition was that the Python 2 design was pretty much a dead end. There were so many limitations and wrong design choices that it would have kept the language from moving forward. That does seem a little aggressive, but it does feel like Python picked up a lot of steam once Python 3 was viable (something that happened way earlier than many care to admit).
Our code base wasn't huge at the time, a few hundred thousand lines of code. Getting everything running was a week of work for three people. Sure, many had way more complicated code, and some depended on idiosyncrasies of Python 2 that made things more complex, but a lot of people acted like the Python developers had shot their dog. Mostly these people either simply didn't want to do the work, or their particular niche was made ever so slightly more complex... or they depended on unmaintained libraries, which is bad in its own way. Python 3 was always going to be able to do everything Python 2 could, yet a large group of people acted as if that wasn't the case.
Still, it was not the best transition ever devised; we had to wait until 3.2 to get the speed to a point where it wasn't an issue for all but the largest users.
The Python 3 upgrade process for many projects was incredibly painful. "Mercurial's journey to and reflections on Python 3" should be required reading for anybody with rose-tinted glasses about the migration.
There was, of course, a Hacker News thread discussing the article, and a fair few people decided to blame the Mercurial developers for handling the migration inelegantly. Because that's how you win over an audience of developers: reassuring them that if Python has a backwards-compatibility break, Python fans will go out of their way to try and blame you for writing bad code. And not, perhaps, the fact that Python was missing things like the u string prefix and % bytestring formatting until 3.3 (2012) and 3.5 (2015!!!) respectively.
If I sound peeved, I really loved Python in the 2.x days, and the way the 3.x transition was handled broke my heart and prevented me from using the language for pretty much an entire decade. There are lessons to be learned from the transition, but not if we ignore the real problems that the transition caused. More importantly, we need to recognize that Python is not the Python we know today because of how "well" the transition was handled, but because Numpy and Matplotlib swooped in and gave Python new niches to excel in at just the right time.
All well and good when you have an active dev team who knows the code. Have fun walking into a code base that has just been running for the last 5 years after all the consultants who created it have left.
> why does any language need a breaking change ever
That's easy: because it's impossible to design everything right from the start, and for many things it's also impossible to make them right later without breaking compatibility, while those improvements are valuable.
A new language for each breaking change also doesn't make sense when there is a lot of continuity.
It’s more a radius of curvature effect. People are born with sharp personalities and views on how the world should be.
Eventually, you run up against the weathering effects of reality, your sharp edges smooth to a gentler polish, and you become far more stable and content.
Meh… Idk, babies, toddlers, and children all have expectations, many of them unreasonable.
Not saying nurture doesn’t give them different ones or modify existing ones, but I find it highly suspect and in direct contradiction of my empirical experience to assert that children aren’t born with expectations.
Having three, I still think that the tabula rasa might be the proper way to look at it. The expectations of infants follow from their interactions with the world, and those are firstly the interactions with their parents and family. Outside of the very basic needs (food, drink, emotional comfort), you get the children you make. Not to say that all children react the same way to the same parenting, or have the same expectations, but you do mostly create the value system that governs their expectations.
Fair enough. I guess I was prompted to post a reply because of the claim that "the suicide rate is not that high". I had heard the 50k figure in another recent story and found it quite shocking.
The point was just that it's not high enough to explain "Why, beyond middle age, people get happier as they get older" or to support the claim "those who are happy enough not to off themselves in one given way or another will scew the statistics".
I'd like to specify that I never stated that people “off themselves” purely through suicide. Drugs, alcohol, suicide, and social isolation are all fair game when combating the struggles of humanity.
I suggest you take a second look at that article you reference. They haven't calculated the rate for this year so you have no evidence for your claim.
Straight from the article:
The Centers for Disease Control and Prevention, which posted the numbers, has not yet calculated a suicide rate for the year, but available data suggests suicides are more common in the U.S. than at any time since the dawn of World War II.
> Last year, according to the new data, the number jumped by more than 1,000, to 49,449 — about a 3% increase vs. the year before. The provisional data comes from U.S. death certificates and is considered almost complete, but it may change slightly as death information is reviewed in the months ahead.
The CDC hasn't calculated an exact final rate yet, but we already know the suicides are approximately 50k and that the US population is approximately 330M, which works out to roughly 15 per 100,000, so yes, I have a lot of evidence for my claim.
The data may change "slightly". You're making a mountain out of a molehill with the conceit that the rate may somehow magically become huge when the data is all counted.
He's right, include deaths of despair by overdoses and alcohol. It's way over the 'norm'.
68k in 1995. 158k in 2018.
Measuring a society's success by the suicide rate has sent the wrong signal. Even the authority whispers disbelief in its own populace into the headlines.
> He's right, include deaths of despair by overdoses and alcohol. It's way over the 'norm'.
> 68k in 1995. 158k in 2018.
Yet it's still not high enough to explain "Why, beyond middle age, people get happier as they get older" or to support the claim "those who are happy enough not to off themselves in one given way or another will scew the statistics".
Since the vast majority of people don't "off themselves" at any given age, being past middle age included, that's one hell of a broad survivor bias you're talking about. Even among the most suicide-prone age cohort in the U.S. (those 45 to 54), the rate is no higher than 20 per 100,000. Like I said, survivor bias where the survivors represent 99.98% of the group?
I'm not sure survivorship bias applies when the topic of discussion is, literally, all people who have survived beyond a certain age. How else are we to make age-based observations?
My experience is consulting falls into three categories, and this is specific to higher education.
1. We can't afford the expert we need forever so we consult temporarily. Like major IT infrastructure. Colleges and universities tend to not attract the massively high quality tech workers that private industry does.
2. We know what is wrong, but internal politics or external politics keep us from saying and doing things we need to do. In this instance, consultants come in to tell us what we already know, but lets managers avoid responsibility for damaging relationships/politics.
3. We got a grant. Let's spend it on consultants because the prof that wrote it left last semester and his "notes" are like the necronomicon.
I usually work with number 2 when I consult (there is a Midwest school that still refers to me as the grim reaper because of all the firings that happened after I completed my findings report). But I would argue that number 1 is more common, in my experience.
Accurate. I forgot that. And in the higher education realm, these are always related to discrimination of some kind by an employee or department that is a known issue that hadn't been addressed.
Nothing against the project or the talk itself. But it's kinda funny when a talk starts with “I bet many of you thought X was a solved problem; well, I'm here to tell you it isn't.”
Git was popular not because it's somehow revolutionary; it's popular because the previous options were so incredibly shit. Any alternative to Git is gonna have a hard time without that advantage.
I'm not disagreeing, though. I think you're right (for the "foreseeable future"). CVS, SVN, etc., really became serious obstacles once you moved to larger projects and more distributed teams etc. Hell, they caused problems even with small colocated teams.
I think it's clear that git (and its forerunners, esp. "BitKeeper") meets a solid "good enough" standard. Potential competitors from that era, including, in particular, "Mercurial", largely fall into categories of "trade-offs".
But, I am, personally, very happy to see the work that's been done with "Pijul", including completing a "theory of merging".
I don't think Pijul has much of a chance in even 10+ years of replacing git (and "GitHub", part of the success story of git). But, I do think it'll see some use, become a solid foundation for some projects, and, there's a reasonable chance it will influence or become the foundation of some "next git" and/or future versions of git.
With the caveat, of course, that forecasting anything on those kinds of timeframes is even more of a fool's errand now than it was ~15+ years ago (about when git was first developed).
> Potential competitors from that era, including, in particular, "Mercurial", largely fall into categories of "trade-offs".
Which "trade-offs" are you referring to?
Mercurial was, and still is, a solid DVCS. It's not often used today because it lost the popularity contest, due to several reasons[0], but technically it's as good as Git, and functionally it's even better. It has a much saner and friendlier UI, which is the main complaint about Git.
Git itself might not have been revolutionary, but the concept of distributed version control certainly was. Git wasn't the only tool from that era to adopt this model, but it's fair to say that it has won the popularity contest, and is the modern standard in most projects.
Mercurial, Darcs and Fossil are also interesting, and in some ways better than Git, but Git won because it had the persona of Linus behind it, one of the most popular and influential OSS projects using it as proving ground, and a successful and user friendly commercial service built directly around it, that included it even in its name. All of this was enough for Git to gain traction and pull ahead of other DVCSs, even though in the early days Mercurial and Bitbucket were also solid and popular choices.
I used and preferred Mercurial for a long time, but ultimately Git was more prevalent, and it felt like swimming against the current. I feel like that also happened with Docker (Swarm) and Kubernetes, where k8s is now the de facto container orchestration standard, much to my own chagrin.
I entirely disagree. Git was not an okayish solution to a problem that previously didn't have any okayish solutions. Git was trying to solve a new problem, and, as it turns out, it's a problem that a majority of git users don't have, and don't intend to have in the future.
I use SVN to this day, because I seriously believe that it's a better solution to the centralized version control problem than GIT. After SVN, centralized version control was basically a solved problem (or, at least, we had an okayish solution), so the next generation of tools (GIT, BZR, HG, FOSSIL) tried to solve a different problem, namely distributed version control.
But they made a complete mess of it (at least git did; I don't know the other distributed ones particularly well). A majority of git projects use a centralized workflow and are subsetting git's features to only the ones that straightforwardly correspond to things that svn can do as well, and can do more easily. And the cost was a much more complex/convoluted mental model that a majority of git users don't truly understand in full detail, which gets them in trouble when edge cases turn up. Hence this joke [1]. With things like [2], you're basically seeing git become a parody of itself.
Edit: I have heard this before, but it is never specific: in what way is it easier? I never needed the big selling points of SVN, binary files (not good enough for me at the time), access control, etc.
> Edit: I have heard this before but it is never specific, in what way is it easier?
In SVN, any subdirectory of any repo looks pretty much exactly as if it was the root of its own repo.
Given that basic idea, you can accomplish things like branches, tags, links, and sparse checkouts through operations on directories, so there's no need for any special treatment of these concepts in the software (or your brain).
If your main branch is `myproject/trunk`, and you want to make a tag, just `svn cp myproject/trunk myproject/tags/v0_1`, done. If you never touch the copy, the tag will always keep "pointing" to what it is you want it to point to.
Want to start a feature branch? Just `svn cp myproject/trunk myproject/branches/my_feature`. Then check out that subdirectory and apply commits there instead of to `trunk`. When you're done, `svn merge myproject/branches/my_feature myproject/trunk`, done.
Want a special subdirectory `theirproject` that exists under `myproject` and points to a particular commit of `theirproject`? Assuming both are in the same repo just `svn cp theirproject/trunk myproject/trunk/theirproject`, done. Want to move the pointer to the latest version of `theirproject`? Just `svn merge theirproject/trunk myproject/trunk/theirproject`, done.
You can check out the directory tree at the point where you're actually working on it, and not have to consume bandwidth transferring the rest of the repo, not even on the first checkout. This is super useful when repos get too large for their own good, or when you want a monorepo covering many different projects. In GIT, trying to do "sparse checkouts" to accomplish this has driven me crazy on numerous occasions; in SVN it's totally natural.
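As a concrete sketch of that last point (the server URLs and paths here are hypothetical): in SVN a partial checkout is simply a checkout of the subdirectory you care about, while git needs a partial clone plus sparse-checkout setup (git 2.25+).

    # SVN: check out only the subtree you work on; nothing else is transferred
    svn checkout https://svn.example.com/repo/myproject/trunk/server server

    # git: roughly the equivalent, via partial clone plus sparse-checkout
    git clone --filter=blob:none --sparse https://git.example.com/repo.git
    cd repo
    git sparse-checkout set server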
An SVN monorepo thus allows one to keep project structures a lot more fluid, adapting layouts as projects evolve. Permissioning is done at directory level too. You can easily work out (and adapt, as needs change) your project layouts to fit your permissioning needs, or apply fine-grained (sub-repo-level) permissioning that respects your project layouts. Permissions aren't a big deal in open source projects, but they sure are when managing proprietary code.
I could go on, but the above is just an attempt to jot down some specifics that quickly come to mind.
Git works well enough for projects that are essentially similar to the Linux kernel, i.e. it's designed to be used by programmers working primarily on text files. For projects that have non-technical people collaborating with programmers, or frequently changing binary assets, there is plenty of room for improvement over git.