In case you guys missed it: the popular fish shell is also now written in Rust. :)
https://github.com/fish-shell/fish-shell/releases/tag/4.0.0
Also nushell.
Beat me to it. A lot of the value in choosing a specific shell lies in its popularity, so I think you really need to have a specific reason to choose something outside of bash/zsh/fish.
> you really need to have a specific reason to choose something outside of bash/zsh/fish
The reason in question is that not that long ago, people said "you really need to have a specific reason to choose something outside of bash", and people choosing to go off the beaten path lead to zsh and fish becoming powerful and way more popular/well-supported than they were before.
"Popularity" probably has more to do with Apple moving to Zsh than anything else. Zsh has been more powerful than Bash for literally the entirety of the existence of both. It surely was back in like 1993 when I first looked at them. The "emacs of shells" might not be the worst summary.
Fish is a more recent addition, but I hate its `for` loop syntax, seemingly copied from BSD C Shell, which this Ion shell also seems to have copied (or maybe from Matlab or Julia?). It baffles me to impose a need for `end` statements in 2025. In Zsh, for a simple single command, I need only say `for i in *; echo $i` - about as concise as Python or Nim. On the minimalism aesthetic, Plan 9 rc was nicer, for its quoting rules if nothing else, even before POSIX really got going (technically POSIX came the year before Plan 9 rc).
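For anyone who hasn't seen the two styles side by side, this is the contrast (fish syntax as I recall it from recent releases; the zsh short form is a zsh extension, not POSIX):

```sh
# fish (and, per the above, Ion): the loop body must be closed with `end`
for i in *; echo $i; end

# zsh short form: a single command needs no do/done or end
for i in *; echo $i
```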
I think it's more insightful to look into the origins of the "choosing something outside bash" rule you mentioned. I think that comes from generic "stick to POSIX" minimalism, where Bash was just the most commonly installed attempt to do only (mostly) POSIX shell... maybe with a dash of "crotchety sysadmins not wanting to install new shells for users".
Speaking of which, the dash shell has been the default /bin/sh on Debian for a long while. So I think the rule has really always been about choosing something "outside POSIX shell", and its origins are simply portability: all those bashisms are still kind of a PITA.
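A concrete example of the kind of bashism that bites when a script assumed to be "sh" actually runs under dash:

```sh
# bashism: [[ ]] is not POSIX, so dash fails with "[[: not found"
if [[ $name == foo* ]]; then echo match; fi

# POSIX equivalent that works under dash, bash, and the rest
case $name in foo*) echo match ;; esac
```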
> "Popularity" probably has more to do with Apple moving to Zsh than anything else. Zsh has been more powerful than Bash for literally the entirety of the existence of both. It surely was back in like 1993 when I first looked at them. The "emacs of shells" might not be the worst summary.
It's my impression that Apple switched to zsh because it's permissively licensed, so they could replace the now-ancient last version of bash to use GPLv2 (instead of v3). Obviously it helped that they could replace it with something even more feature-rich, but I expect they would have taken the exact same functionality under a more permissive license.
Being adventurous can be part of your reason.
I recommended fish to some of my younger coworkers recently, only for somebody very senior to point out that they will be very confused when commands meant for bash, copy-pasted from the internet, don't work. He is right; I will hold off recommending fish. You have to know you are really ready for a new shell.
Someone should tell the very senior dev that nothing stops you from running “bash” if you need to paste scripts.
About the only common incompatibility in single-line commands is that fish uses (cmd) instead of $(cmd) for subshells. Anything longer than that you should probably be pasting into a file and executing from there anyway.
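To make that concrete (fish behaviour as I remember it; recent fish releases, 3.4 or so onwards if I remember right, also accept the $(...) form, which removes even this incompatibility):

```sh
# bash/zsh: command substitution
echo "today is $(date +%F)"

# fish: plain parentheses; an adjacent quoted string just concatenates
echo "today is "(date +%F)
```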
Replying to myself: I don’t get the downvotes here. One-liner Bash commands I stumble across almost always work as-is in Fish. A while back they added support for
FOO=bar cmd
to run cmd with the env var FOO set to bar, and that was the single biggest incompatibility I routinely stumbled across. Most commands you find in random docs tend to be that simple, and most work just as if you’d run them under Bash. But if it’s a large, complex command with if statements and for loops, etc., you’re better off pasting it into a file, then tweaking it to run under Fish or just running it directly via Bash.
Mmm, I don’t love this advice.
I think the error messages fish gives out in these cases (usually related to quotations) explain the problem pretty well.
I would probably recommend it like this: “I like using fish as my shell, if you want to try it out make sure you read the tutorial and generally understand that it’s not designed to be 100% bash/zsh compatible.”
I picked up fish as a junior level engineer as well, it wasn’t very hard to adapt.
I think that osh is valuable precisely because of that, since it's bash compatible. The project also has ysh, which is not bash compatible but fixes a lot of shell brokenness, including the #1 source of shell bugs: the need to quote almost 99% of variables and subshell invocations (and to not quote them in the rare case you actually want splatting).
https://oils.pub/osh.html
https://oils.pub/ysh.html
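For anyone unfamiliar with the quoting problem being referred to: in POSIX-style shells an unquoted expansion gets word-split and globbed, which is exactly the class of bug ysh is described as removing (see the links above; my understanding is that in ysh the unquoted form behaves like the quoted one).

```sh
f='my file.txt'
rm $f      # POSIX shells split this into two arguments: "my" and "file.txt"
rm "$f"    # quoted: a single argument, the file you actually meant
```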
I feel like there's a pretty big difference between recommending Zsh and a shell without compatible syntax. The latter assumes you'll spend so much time running ad hoc complex commands in your shell, without opting for a proper scripting language instead, that you'll offset the pains of translating any existing commands to the new shell syntax.
Fish is great. NuShell is amazing. But once I start doing such data pipelining, I'd much rather open a Jupyter notebook.
I don't know if this applies to RedoxOS users.
As for why you might use it on Linux, it seems like it's meant to be "friendly" like Fish, but with more emphasis on scripting rather than on interactive use. It looks like a very comfortable scripting language. Something that visually resembles Lua but also has all of the familiar shell constructs would be an excellent scripting language IMO. And that's what this seems to be.
So they should say that.
I’d like to see a real shell script written in ion very early in the readme or manual. Something that wgets a tarball, extracts some part of the filename, checks cpu usage, draws a simple progress bar to the terminal, checks a folder for old files, some script that implements prompt completions for some cli tool…
I really don’t care that a new shell is written in rust. I care to see examples of how it actually would be better than bash.
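To make the request concrete, here's roughly the shape of example I mean, sketched only from the constructs Ion is said to have (let bindings, $(...) expansions, for ... end blocks). I haven't run it against a real Ion install, so treat every detail as approximate rather than as actual Ion:

```sh
#!/usr/bin/env ion
# hypothetical sketch, not verified against real Ion
let url = "https://example.com/tool-1.2.3.tar.gz"
let tarball = $(basename $url)   # external coreutil; should give tool-1.2.3.tar.gz
wget $url
tar xzf $tarball
for f in *
    echo "unpacked: $f"
end
```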
https://gitlab.redox-os.org/redox-os/ion/-/blob/master/examp...
But I assume it's for redox, so you can't use it on a regular linux.
"It is written entirely in Rust, which greatly increases the overall quality and security of the shell."
Is this true? I don't know Rust so I'm probably missing context. Obvious kudos to OP for writing a shell.
You get memory safety. That's about it for security. Quality is in the eye of the beholder: maybe it's quality code, maybe it's junk, who knows. Rust isn't magic that forces code to be quality code or anything. That said, the code in the Redox system is generally good, so it's probably fine, but that's not because it's written in Rust, it's because of the developers.
Any GC'd language (Python, JS, Ruby, etc.) gives you the same memory safety guarantees that Rust gives you. Of course, GC'd languages tend to go a fair bit slower (sometimes drastically slower) than Rust. So really the memory safety bit of Rust is that it is enforced at development and compile time, instead of at runtime like in a GC'd language. You get the speed of other "systems" languages, like C and C++, AND memory safety.
> Rust isn't magic that forces code to be quality code or anything.
It is not, but the language and ecosystem are generally very well-designed and encourage you to "do the right thing," as it were. Many of the APIs you'd use in your day-to-day are designed to make it much harder to hold them wrong. On balance, outside of Haskell, it's probably the implementation language that fills me with the most confidence for any given particular piece of software.
I do not think that using dependencies like it is npm is "doing the right thing".
While I generally agree, the latest Android report suggests that rust developers get fewer reverts and code reviews are faster. This could mean that better developers tend to adopt rust or it could mean that rust really is a language where quality is easier to attain.
There’s some reason to believe that, given how easy it is to integrate testing into a Rust code base. In general the trait and type system also encourages better isolation and a bit higher quality, with things like const-by-default and APIs that can't misuse a data structure in some unsafe way. And it has a rich ecosystem where it's easy to integrate third-party dependencies, so you're not solving the same problem repeatedly like you tend to do in C++.
So there is some reason to believe Rust does actually encourage slightly higher quality code out of developers.
Or there are "reverts and code reviews are faster" because no one wants to actually read through the perl-level line-noise type annotations, and just lgtm.
> Or there are "reverts and code reviews are faster"
This seems like a slight misreading of the comment you're responding to. The claim is not that reverts are "faster", whatever that would mean; the claim is that the revert rate is lower.
Also, how would skimping on reviews lead to a lower revert rate? If anything, I'd imagine the normal assumption would be precisely the opposite - that skimping on reviews should lead to a higher revert rate, which is contrary to what the Android team found.
What type annotations? In Rust almost all the types are inferred outside of function and struct declarations. In terms of type verbosity (in the code) it is roughly on the same level as TypeScript and Python.
I'm precisely referring to function and struct definitions. It's 10x worse when you add in async. 20x if you add in macros.
It's write-only code, just like Perl, but nowhere near as productive. Minor refactors become a game of Jenga.
This is not really a serious issue for any practicing Rust programmers.
They all have Stockholm syndrome then.
What's more likely: every single Rust programmer is wrong, or you're just not seeing the forest for the trees?
Beauty is in the eye of the beholder, and so I don't mind saying rust is butt ugly.
That’s just, like, your opinion, man.
You also get ADTs and it's harder to write race conditions
There is a lot more to rust than just memory safety. A lot of concurrency errors are prevented too, for example.
Most of the performance penalty for the languages you mentioned is because they're dynamically typed and interpreted. The GC is a much smaller slice of the performance pie.
In native-compiled languages (Nim, D, etc), the penalty for GC can be astoundingly low. With a reference counting GC, you're essentially emulating "perfect" use of C++ unique_ptr. Nim and D are very much performance-competitive with C++ in more data-oriented scenarios that don't have hard real-time constraints, and that's with D having a stop-the-world mark-and-sweep GC.
The issue then becomes compatibility with other binary interfaces, especially C and C++ libraries.
Definitely true! Probably add Swift to that list as well. Apple has been pushing to use Swift in WebKit in addition to C++.
Actually, Nim2 and Swift both use automatic reference counting, which is very similar to using C++’s shared_ptr or Rust's Rc/Arc. If I couldn’t use Nim I’d probably go for Swift. Rust mostly gives me a headache. Nim, however, is fully open source and independent.
Though Nim2 does default to the RC + cycle collector memory management mode. You can turn off the cycle collector with mm:arc, or switch to atomic reference counting with mm:atomicArc. Perfect for most system applications or embedded!
IMHO, most large Rust projects will likely use Rc or Arc types or lots of clone calls. So performance-wise it’s not gonna be too different from Nim or Swift or even D, really.
> IMHO, most large Rust projects will likely use Rc or Arc types or lots of clone calls. So performance-wise it’s not gonna be too different from Nim or Swift or even D, really.
I do not think so. My personal experience is that you can go far in Rust without cloning/Rc/Arc while not opting for unsafe. It is good to have it as default and use Rc/Arc only when (and especially where) needed.
Being curious, I ran some basic grepping and wc on the Ion Shell project. About 2.19% of function declarations use Rc or Arc in their signature. That is pretty low.
Naively grepping for `&` (excluding `&&`), and assuming most of those are borrows, gives 1135 lines. `clone` occurs on 62 lines, for a ratio of about 5.4%. Including Rc and Arc along with the clones, you get about 10.3% relative to `&`/borrows. That's using lines mentioning Rc/Arc as a rough surrogate for actual usages.
For context, grepping for `ref object` vs `object` in my company's Nim project and its deps gives a rate of 2.92% ref objects vs value objects. Nim will pass value objects by pointer in many functions. That's actually much lower than I would've guessed.
Overall: 2.19% of Rust functions in Ion use Rc/Arc, vs 2.92% of my Nim project's types being ref objects rather than value objects. So it's not unreasonable to hold that they make similar use of reference counting vs value types.
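For anyone who wants to repeat this kind of naive count, something along these lines works (illustrative only; not necessarily the exact commands used above):

```sh
# function declarations that mention Rc< or Arc< vs all function declarations
grep -rE 'fn .*(Rc|Arc)<' --include='*.rs' . | wc -l
grep -rE 'fn '            --include='*.rs' . | wc -l

# lines with borrows (excluding &&) and lines calling clone
grep -rE '&' --include='*.rs' . | grep -v '&&' | wc -l
grep -rE '\.clone\(' --include='*.rs' . | wc -l
```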
> With a reference counting GC, you're essentially emulating "perfect" use of C++ unique_ptr.
Did you mean shared_ptr? With unique_ptr there's no reference-counting overhead at all. When the reference count is atomic (as it must be in the general case), it can have a significant and measurable impact on performance.
You might be right. Though with the way I design software, I'm rarely passing managed objects via moves to unrelated scopes. So usually the scope that calls the destructor is my original initializing scope. It's a very functional, pyramidal program style.
Agreed, but I didn't want to get too far into the details. Thanks for sharing some more details though!
It's true for new projects. For rewrites (such as a shell) it can mean a lot of regressions. The Rust replacements for coreutils are a good negative example: the new programs haven't reached feature parity, have added regressions, and in some cases have even had security vulnerabilities.
So for battle-proven software I wouldn't say so per se (if your goal is to replace it).
Nonetheless, if you add truly new Rust code to a new or existing codebase, and the interop isn't too much of a hassle, it should hold.
Yeah, that’s true — Microsoft’s report (https://www.microsoft.com/en-us/msrc/blog/2019/07/why-rust-f...) says the same thing, and Google’s recent post on Rust in Android (https://security.googleblog.com/2025/11/rust-in-android-move...) backs it up too.
We’ve been using Rust for about seven years now, and as long as you stay away from fancy unsafe tricks, you really can avoid most memory safety bugs.
Not necessarily. "Quality" and "Security" can be tricky subjects when it comes to a shell. Rust itself is pretty great, but its HN community is made of cringe and zealotry - don't let them dissuade you from trying the language :P
" It is still quite a ways from becoming stabilized, but we are getting very close " haha
I love how one of the screenshots appears to be using the ion window manager, I guess they're very aware of the name collision :D
I feel like an opportunity was missed by one letteR.
Why is this link to mirror instead of actual repo?
Github vs Gitlab I guess?
I might be ignorant, but it looks to me like a slightly Rust-ified Bash, not sure if there's any standout features here - if there are could somebody point those out?
I kind of hate to admit it, but in many ways Powershell is still king of the shell game (haha) - being object based (with autocomplete!) and having a proper JIT means it's fast enough to process pretty much anything with native shell scripts alone (certainly not true for Bash or Python!), which gives it a very different feel. Afaik there are other object-based shells, but none are fast enough to generally keep up with the disk, meaning you need to resort to tricks for heavy-duty processing.
Too bad Microsoft messed up the ergonomics, and using it feels like pulling teeth.
Have you tried nushell?
I personally have not used it, and the syntax looks nice enough - but it still looks interpreted, not compiled - I can't stress enough how important it is that with Powershell you can just straight up write PS and it'll still run at a decent speed.
In my experience, half the CLI utils used by bash scripters do things you could do in Bash itself; they're just much faster on account of being written in C, and you have to suffer through learning their quirks.
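A small illustration of that trade-off: both of these pull the second CSV column, but the pure-Bash loop runs byte by byte in the interpreter while cut is a tight C program (typically orders of magnitude faster on large files):

```sh
# pure Bash
while IFS=, read -r first second rest; do printf '%s\n' "$second"; done < data.csv

# the same job with a C utility
cut -d, -f2 data.csv
```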
An example I remember was when I needed to parse tens of gigs of JSON into a pandas dataframe (CSV on disk) - the Python and Bash versions ran at like 2-3 MB/s, while the Powershell version did 50-100 MB/s (which is still not great, but certainly good enough for what we did).
I'm clearly not as experienced as you with powershell, but I have used both nushell and powershell and like both. I definitely prefer nushell's syntax. I personally haven't come across anything that was significantly slower with nushell than with powershell. In my experience nushell's out-of-the-box functionality far outstrips powershell's, and its object model is much easier to reason about.
I see a bunch of folks recommending this, but I have to wonder where this game ends. Always one more new tweak to the local environment. Just one more dotfile, bro, I promise this time your environment will be perfect. Just one more little supply-chain-attack vulnerable component running with the same access as you, the user. But look, you can save 20 microseconds on your shell history search or whatever!
Is there some actual reason to use this? I got sold on `zsh` as it became the standard on the Mac and was packaged by all major distros, but honestly I'm still fine with just plain bash, though I miss the pretty prompts. What is one really getting out of nushell / ion / whatever new tweaked out shell comes out next week?
Why should the game end at all? Why shouldn't people continue developing better and better shells that people can use to interact with their computer, or maybe different ones for different use cases? Supply-chain attacks are bad and worth mitigating for any kind of software, not just your shell; but the possibility of such an attack doesn't mean it's inherently unwise to try out new pieces of software. Saving time on your shell history search is good, and declaring it unimportant merely because the amount of time saved sounds small is how we wind up, after many iterations, with software that is noticeably laggy to the end user. But the real value of new shells, I think, is the new features you didn't know you would find useful at the time.
How is this related to the discussion:
A: ...in many ways Powershell is stil king of the shell game [but]...
B: Have you tried nushell?
Anyway... nushell is more similar to Powershell (but AFAIK there is no JIT). My default is zsh (as you mentioned, because of the Mac), but I use nushell for a few things - it is pretty different from bash/zsh/ion/fish. It is more like a data pipeline.
PowerShell is incredibly well designed by Jeffrey Snover
everyone suggesting features and making comparisons: think ash not zsh
Too bad the README doesn't show sample commands, discuss the design, or list differences from existing shells. So... it's just a shell, in Rust? Like fish, but still WIP? And only for redox-os?
In which case, why not just go with nushell https://www.nushell.sh/
Well, there're samples... in the screenshots. For user info* you gotta check the linked manual, which also has a comparison page against POSIX shell [0]. It's basically a slightly different, no doubt improved, POSIX shell. On one hand that makes it easier to transition to; on the other hand it's incompatible enough to create headaches. (And the big competitor is Bash/Zsh, not sh.) Considering there're packages for Ubuntu, it is not Redox-only.
Nushell is certainly a better fit for the "modern shell" label but, even leaving aside the structured design, it is much more different.
*Kinda makes sense, as source repos are meant for devs; end users should be checking manuals/sites instead. We're just used to repos' READMEs being functionally similar to project homepages.
[0]: https://doc.redox-os.org/ion-manual/migrating.html
It's worth mentioning that this seems to be purpose-made to work with an entire custom Rust OS, Redox OS. I didn't know that at first glance since I'd never heard of it. It also answered my question of "what's wrong with NuShell?"
Have they considered nushell though? It works on both POSIX-like OSes and Windows so it's not platform specific. I'd like to know if it wasn't ready in time (then why not work together instead of starting a new shell, pun not intended) or if there are some technical reasons.
I don't see a single mention of nushell in their readme or mdBook which is a shame.
> if it wasn't ready in time
Ion predates Nushell by a few years.