This allows me to rapidly iterate on shell pipelines. The main goal is to minimize my development latency, but it also has positive effects on dependencies (avoiding redundant RPC calls). The classic way of doing this is storing something in temporary files:
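Something along these lines (a sketch of the usual temp-file dance; command names are placeholders):

$ slow-rpc-dump > /tmp/out.json        # run the expensive part once
$ jq . /tmp/out.json | grep foo | ...  # iterate on the cheap part of the pipeline
$ rm /tmp/out.json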
up(1) looks really cool, I think I'll add it to my toolbox.
It looks like up(1) and memo(1) have similar use cases (or goals). I'll give it a try to see if I can appreciate its ergonomics. I suspect memo(1) will remain my mainstay:
1. After executing a pipeline, I like to press the up arrow (heh) and edit. Surprisingly often I need to edit something that's *not* the last part, but somewhere in the middle. I find this cumbersome in default line editing mode, so I will often drop into my editor (^X^E) to edit the command.
2. up seems to create a shell script once you're done. Avoiding the creation of extra files was one of my goals for memo(1). I'm sure some smart zsh/bash integration could be written that simply hands the completed command back to the shell instead.
Another thing I built into memo(1) which I forgot to mention: automatic compression. memo(1) will use available (de)compressors (in order of preference: zstd, lz4, xz, gzip) to (de)compress stored contents. It's surprising how much disk space and IOPS can be saved this way due to redundancy.
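For illustration, the selection logic needn't be more than something like this (a sketch, not memo(1)'s actual code; the output path is a placeholder and the real tool-to-extension mapping is omitted):

# Pick the first available compressor, in order of preference.
for c in zstd lz4 xz gzip; do
    if command -v "$c" >/dev/null 2>&1; then
        compress=$c
        break
    fi
done
some-slow-command | "$compress" > /tmp/memo/"$USER"/output.compressed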
I currently only have two memoized commands:
$ for f in /tmp/memo/aktau/* ; do
ls -lh "$f" =(zstd -d < "$f")
done
-rw-r----- 1 aktau aktau 33K /tmp/memo/aktau/0742a9d8a34c37c0b5659f7a876833b6dad9ec689f8f5c6065d05f8a27d993c7bbcbfdc3a7337c3dba17886d6f6002e95a434e4629.zst
-rw------- 1 aktau aktau 335K /tmp/zshSQRwR9
-rw-r----- 1 aktau aktau 827 /tmp/memo/aktau/8373b3af893222f928447acd410779182882087c6f4e7a19605f5308174f523f8b3feecbc14e1295447f45b49d3f06da5da7e8d7a6.zst
-rw------- 1 aktau aktau 7.4K /tmp/zshlpMMdo
#!/usr/bin/env bash
#
# memo(1), memoizes the output of your command-line, so you can do:
#
# $ memo <some long running command> | ...
#
# Instead of
#
# $ <some long running command> > tmpfile
# $ cat tmpfile | ...
# $ rm tmpfile
To save output, sed can be used in the pipeline instead of tee. For example:
x=$(mktemp -u)
test -p "$x" || mkfifo "$x"
zstd -19 < "$x" > tmpfile.zst &
<long running command> | sed "w$x" | <rest of pipeline>
# You can even use it in the middle of a pipe if you know that the input is not
# extremely long. Just supply the -s switch:
#
# $ cat sitelist | memo -s parallel curl | grep "server:"
grep can be replaced with sed, with the search results sent to stderr:
< sitelist curl ... | sed '/server:/w/dev/stderr' | zstd -19 > tmpfile.zst
Or send the search results to stderr and to some other file as well; sed can write to multiple files at a time:
< sitelist curl ... | sed -e '/server:/w/dev/stderr' -e '/server:/wresults.txt' | zstd -19 > tmpfile.zst
If you provide a sample showing (a) the input text format and (b) the desired output format, then perhaps I can provide an example of how to do the text processing.
In general, I wonder if we're at the point where an LLM watching you interact with your computer for twenty minutes can improve your workflow, suggest tools, etc. I imagine so, because when I think to ask how to do something, I often get an answer that is very useful, so I've automated/fixed far more things than in the past.
I've been using bkt (https://github.com/dimo414/bkt) for subprocess caching. It has some nice features, like providing a TTL for cache expiration. In-pipeline memoization looks nice; I'm not sure bkt supports that.
I was not aware of bkt. Thanks for the link. It seems very similar to memo, and has more features:
- Explicit TTL
- Ability to include working directory et al. as context for the cache key.
There do appear to be downsides (from my PoV) as well:
- It's a rust program, so it needs to be compiled (memo is a bash/zsh script and runs as-is).
- There's no mention of transparent compression, either in the README or through simple source code search. I did find https://github.com/dimo414/bkt/issues/62 which mentions swappable backends. The fact that it uses some type of database instead of just the filesystem is not a positive for me, I prefer the state to be easy to introspect with common tools. I will often memo commands that output gigabytes of data, which is usually highly compressible. Transparent compression fixes that up. One could argue this could be avoided with a filesystem-level feature, like ZFS transparent compression. But I don't know how to detect that in a cross-FS fashion.
I've been using the Warp terminal for a couple of years, and recently they embedded AI into it. At first I was irritated and disabled it, but the AI agent is built in as an optional mode (Cmd-I to toggle). And I found myself using it more and more often for commands that I have no capacity or will to remember or to dig through the man pages for (from "figure out my IP address on the wifi interface" to "make ffmpeg do this or that"). It's fast and can iterate on its own errors, and now I can't resist using it regularly. It removes the need for "tools to memorize commands" entirely.
The default storage location for memo(1) output is /tmp/memo/${USER}. Most distributions have some automatic periodic cleanup of /tmp and/or wipe it on restart.
Separately from that:
- The invocation contains *memo* right in there, so you (the user) know that it might memoize.
- One uses memo(1) for commands that are generally slow. Rerunning a command that has a slow part and having it return in a millisecond when you weren't expecting it should make your spider-sense tingle.
In practice, this has never been a problem for me, and I've used this hacked together command for years.
Yes, I know. I should've picked a different example. But it's also realistic in a way. When I'm doing one-offs, I will sometimes take shortcuts like this. I know awk fairly well, and I know enough jq to know that invoking jq . pretty-prints the inbound JSON across multiple lines. While I know I could write a proper jq expression, the combo gets me there quicker. Similarly I'll sometimes do:
$ awk '...' | grep | ...
Because I'm too lazy to go back to the start of the awk invocation and add a match condition there. If I'm going to save it to a script, I'll clean it up. (And for jq, I gotta be honest that my starting point these days would probably be to show my contraption to an LLM and use its answer as a starting point, I don't use jq nearly enough to learn its language by memory.)
I see no way to name the memos in your examples, so how do you refer to them later?
Also, this seems a lot like an automated way to write shell scripts that you can pipe to and from. So why not use a shell script, which won't surprise anyone, instead of this, which might?
In this invocation, a hash (sha512) is taken of "my-complex-command --some-flag my-positional-arg-1", which is then stored in /tmp/memo/${USER}/{sha512hash}.zst (if you've got zstd installed, other compression extensions otherwise).
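In other words, roughly this (a sketch of the idea, not memo's exact code; sha512sum is coreutils, use `shasum -a 512` on macOS, and the word-splitting of $cmd is glossed over):

cmd='my-complex-command --some-flag my-positional-arg-1'
hash=$(printf '%s' "$cmd" | sha512sum | cut -d' ' -f1)
cache="/tmp/memo/$USER/$hash.zst"
if [ -e "$cache" ]; then
    zstd -dc "$cache"                  # cache hit: decompress straight to stdout
else
    $cmd | tee >(zstd -q > "$cache")   # cache miss: run it, stream to stdout, store a compressed copy
fi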
> trash a.txt b.png moves `a.txt` and `b.png` to the trash. Supports macOS and Linux.
The way you’re doing it trashes files sequentially, meaning you hear the trashing sound once per file and ⌘Z in the Finder will only restore the last one. You can improve that (I did it for years), but consider just using the `trash` command which ships with macOS. It doesn’t use the Finder, so no sound and no ⌘Z, but it’s fast, official, and still allows “Put Back”.
> jsonformat takes JSON at stdin and pretty-prints it to stdout.
Why prioritise node instead of jq? The latter is considerably less code and even comes preinstalled with macOS, now.
> uuid prints a v4 UUID. I use this about once a month.
Any reason to not simply use `uuidgen`, which ships with macOS and likely your Linux distro?
> The best part about sharing your config or knowledge is that someone will always light up your blind spots.
Yes! I will take this as a chance to thank everyone who has shared their knowledge on the Internet. You guys are so freaking awesome! You are always appreciated.
A big chunk of my whole life's learning came from all the forums that I used to scour, hour after hour! Because these awesome people were always sharing their knowledge, and someone else would add more. That's what made the Internet the Internet. And all of it is now almost on the brink of being lost, because of greedy corporations.
This habit also helped me with doom-scrolling. I sometimes do doomscroll, but I can catch it quickly and snap out of it. Because, my whole life, I have always jumped into the rabbit holes and actually read those big blog posts, the ones with the "a-ha" moments: "Oohh, I can use that", "Ahh, that's clever!"
When browsing doesn't give me that, my brain actually triggers: "What are you doing?"
Later, I got lazy, which I am still paying for. But I am going to get out of it.
Never stop jumping into those rabbit holes!! Well, obviously, not every rabbit hole is a good one, but you'll probably come out wiser.
That seems to be especially true on HN. Other forums have some of that as well, but on HN it seems nearly every single comment section is like 75% (random number) pointing out faults in the posted article.
Although I normally loathe pedantic assholes, I've found the ones on HN seem to be more tolerable because they typically know they'll have to back up what they're saying with facts (and ideally citations).
I've found that pedantic conversations here seem to actually have a greater potential for me to learn something from them than other forums/social platforms. On other platforms, I see someone providing a pedantic response and I'll just keep moving on, but on HN, I get curious to not only see who wins the nerd fight, but also that I might learn at least one thing along the way. I like that it's had an effect on how I engage with comment sections.
I have showdead on, and almost every single flagged post I've seen definitely deserves it. Every time it wasn't "deserved", the person simply took an overly aggressive tone for no real reason.
In short, I've never seen somebody flagged simply for having the wrong opinion. Even controversial opinions tend to stay unflagged, unless they're incredibly dangerous or unhinged.
I've seen a few dead posts where there was an innocent misunderstanding or wrong assumption. In those cases it would have been beneficial to keep the post visible and post a response, so that readers with similarly mistaken assumptions could see a correction. That's a small minority of dead posts, though. They can actually be vouched for, but of course that is unlikely to happen.
I agree that most dead posts would be a distraction and good to have been kept out.
It’s a blunt tool, but quite useful for posts. I read most dead posts I come across and I don’t think I ever saw one that was not obviously in violation of several guidelines.
OTOH I don’t like flagging stories because good ones get buried regularly. But then HN is not a great place for peaceful, nuanced discussion and these threads often descend into mindless flame wars, which would bury the stories even without flagging.
So, meh. I think flagging is a moderately good thing overall but it really lacks in subtlety.
Agreed, flagging for comments seems to function pretty well for the most part, and the vouch option provided a recourse for those that shouldn't have been killed.
On stories however, I think the flag system is pretty broken. I've seen so many stories that get flagged because people find them annoying (especially AI-related things) or people assume it will turn into a flame war, but it ends up burying important tech news. Even if the flags are reversed, the damage is usually done because the story fell off the front page (or further) and gets very little traction after that.
Just imagine this comment of yours got flagged. Was it something very valuable, and is the discussion now lacking something important? Surely not, but how would you feel? So what if you have some not-so-mild and not-so-"pleasant" opinion on something - why flag the comment? Just let people downvote it!
> I've found the ones on HN seem to be more tolerable because they typically know they'll have to back up what they're saying with facts (and ideally citations).
Can you back this up with data? ;-)
I see citations and links to sources about as rarely here as on reddit.
The difference I see is in the top 1% of comments, which exist here in the first place and are better on average (though that depends on which other forums or subreddits you compare with; /r/AskHistorians is pretty good for serious history answers, for example), but not in the rest of the comments. Also, fewer distractions, more staying on topic, and the joke replies are punished more often and are less frequent.
That's a sampling bias. You're not seeing the opinions of every single person who has viewed an article, just the opinions of those who have bothered to comment.
People who agree with an article will most likely just upvote. Hardly anyone ever bothers to comment to offer praise, so most comments that you end up seeing are criticisms.
True, true. One of my favorite things is watching the shorts on home improvement or 'hacks', and sure enough there are always multiple comments saying why it won't work and why it's not the right way. Just as entertaining as the video.
Also possible (even though I've seen the author's response about not knowing) is that the scripts were written before the native version was included. At that point, the muscle memory is just there. I know I have a few scripts like that myself.
Other examples where native features are better than these self-made scripts...
> vim [...] I select a region and then run :'<,'>!markdownquote
Just select the first column with ctrl-v, then "i> " then escape. That's 4 keys after the selection, instead of 20.
> u+ 2025 returns ñ, LATIN SMALL LETTER N WITH TILDE
`unicode` is widely available, has a good default search, and many options.
BTW, I wonder why "2025" matched "ñ".
unicode ñ
U+00F1 LATIN SMALL LETTER N WITH TILDE
UTF-8: c3 b1 UTF-16BE: 00f1 Decimal: &#241; Octal: \0361
> catbin foo is basically cat "$(which foo)"
Since the author is using zsh, `cat =foo` is shorter and more powerful. It's also much less error-prone with long commands, since zsh can smartly complete after =.
I use it often, e.g. `file =firefox` or `vim =myscript.sh`.
`trash` is good to know, thanks! I'd been doing: "tell app \"Finder\" to move {%s} to trash" where %s is a comma separated list of "the POSIX file <path-to-file>".
> Why prioritise node instead of jq? The latter is considerably less code and even comes preinstalled with macOS, now.
That was my thought. I use jq to pretty print json.
What I have found useful is j2p and p2j to convert between Python dict format and JSON format (and pretty-print the output). I also have j2p_clip and p2j_clip, which read from and then write to the system clipboard so I don't have to manually pipe in and out.
> Any reason to not simply use `uuidgen`, which ships with macOS and likely your Linux distro?
I also made a uuid, which just runs uuidgen, but then trims the \n. (And maybe copied to clipboard? It was at my old job, and I don't seem to have saved it to my personal computer.)
The trash command for macOS that's being talked about above is native in the OS now, since v14 according to its manpage, though I see it may have really been v15[1]
Instead of trash, reimplementing rm (to only really delete after some time or depending on resource usage, or to shred if you are paranoid and the goal is to really delete something) or using zfs makes much more sense.
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Instead of being rude to a fellow human making an inoffensive remark, you could’ve spent your words being kind and describing the scenario you claim exists. For all you know, maybe they did ask ChatGPT and were unconvinced by the answer.
As a side note, I don’t even understand how your swipe would make sense. If anything, needing ChatGPT is what demonstrates a lack of imagination (having the latter you don’t need the former).
I believe it would be possible to execute an AppleScript telling the Finder to delete the files in one go. It would theoretically be possible to construct and run the AppleScript directly from a shell script. It would be easier (but still not trivial) to write an AppleScript file that takes the file list as arguments and deletes them when called from the shell.
It’s not theoretical, and it is trivial. Like I said, I did exactly that for years. Specifically, I had a function in my `.zshrc` to expand all inputs to their full paths, verify and exclude invalid arguments, trash the rest in one swoop, then show me an error with the invalid arguments, if any.
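For reference, the core of such a function is pretty small. A minimal zsh sketch (not the parent's exact code; path validation and quoting of exotic filenames are omitted):

# Trash every argument in a single Finder call, so they can be restored together with ⌘Z.
trash() {
    local items='' f
    for f in "$@"; do
        items+="${items:+, }POSIX file \"${f:A}\""   # ${f:A}: zsh modifier for the absolute path
    done
    osascript -e "tell application \"Finder\" to delete {$items}"
}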
This is exactly the kind of stuff I'm most interested in finding on HN. How do other developers work, and how can I get better at my work from it?
What's always interesting to me is how many of these I'll see and initially think, "I don't really need that." Because I'm well aware of the effect (which I'm sure has a name - I suppose it's similar to induced demand) of "make $uncommon_task much cheaper" -> "$uncommon_task becomes the basis of an entirely new workflow/skill". So I'm going to try out most of them and see what sticks!
Also: really love the style of the post. It's very clear but also includes super valuable information about how often the author actually uses each script, to get a sense ahead of time for which ones are more likely to trigger the effect described above.
A final aside about my own workflows which betrays my origins... for some of these operations, and for others I occasionally need, I'll just open a browser dev tools window and use JS to do it, for example lowercasing a string :)
One of the very few things I like about macOS is that it rebinds the CUA key from Ctrl to Cmd, freeing up Ctrl for these Emacs-style text navigation keybinds. It's odd to me that seemingly zero Linux distros/DEs do this by default.
This is one of the things I miss the most about hacker conferences. The sharing of tools, scripts, tips and tricks. It was, and still is, just as fun as trading cards.
My favorite recent one was Handmade Seattle, but that one's kaput as of this year, and it seems everything else along similar lines is overseas and/or invite-only.
I love this kind of stuff too, but too many times over the years I've found myself in environments without some of these higher level and more niche tools (including my own dot files), or the tool ends up abandoned, and I struggle to remember how to use the basics/builtins. I've gotten a lot more conservative about adopting them because of that.
Pretty much my take as well. I imagine spending a few hours a month customizing your shell and text editor (hello vim/Emacs folks) to be more efficient and powerful is _great_ for developers who rarely leave their own workstation. But I spend much of my day logging into random hosts that don't have my custom shell scripts and aliases, so I'm actively careful not to fill my muscle memory with custom shortcuts and the like.
Of course, I _do_ have some custom shell scripts and aliases, but these are only for things I will ever do locally.
I'd love to see a cost benefit analysis of the author's approach vs yours, which includes the time it took the author to create the scripts, remember/learn to use them/reference them when forgetting syntax, plus time spent migrating whenever changing systems.
Not all time is created equal. I’ll happily invest more time than I’ll ever get back in refining a script or vim config or whatever, so that later, when I’m busy and don’t have time to muck around, I can stay in the flow and not be annoyed by distractions.
Sometimes it's a matter of sanity rather than time management. I once created a systemd service which goes to a company web page and downloads some files I sometimes need. The script was pretty hacky, and writing it took me a lot of time - probably more than clicking through that page manually would have taken in the long run. But the clicking is so annoying that I feel it was totally worth it.
If you write these sorts of things in Python, argparse is worth investigating: https://docs.python.org/3/library/argparse.html - it's pretty easy to use, makes it easy to separate the command line handling from the rest of the code, and, importantly, will generate a --help page for you. And if you want something it can't do, you can still always write the code yourself!
I don’t like Python in general, but even so I’ll say that argparse is indeed very nice. When I was writing ruby, I always felt that OptionParser¹ wasn’t as good. Swift has Argument Parser², officially from Apple, which is quite featureful. For shell, I have a couple of bespoke patterns I have been reusing in every script for many years.
Regarding other ports, I've also been pretty happy with https://github.com/nodeca/argparse, which works nicely from Typescript. Looks like it hasn't been updated for a while, but it's not like there's a great deal wrong with it.
https://github.com/p-ranav/argparse is a single-file argparse for Modern C++, which means it's typically straightforward, if baffling in places and a bit annoying to step through in the debugger.
The nice thing about the argparse ports is that provided they take their job seriously, your programs will all end up with a vaguely consistent command line UX in terms of longopt syntax, and, importantly, a --help page.
Many years ago I wrote a library I called “Ruby on Bales”¹ specifically due to my frustrations with the state of command-line argument parsing for Ruby scripts. I haven't touched it in a long while; maybe I should revisit it.
Why is this interesting to you? The whole point of doing all of this is to be more efficient in the long run. Of course there is an initial setup cost and learning curve, after which you will hopefully feel quite efficient with your development environment. You are making it sound like it is not worth the effort because you potentially have to spend time learning "it"? I do not believe it takes long to learn, but of course that can differ a lot from person to person. Your remarks seem like non-issues to me.
Because OP said they use some of them only a few times a year. That means they'll probably use it something like 150 times in their life. If it saves a minute each time, but it takes 5 hours to create and 5 hours to maintain over the years, then it's not really a win.
It's interesting because there's a significant chance one wastes more time tinkering around with custom scripts than saving in the long run. See https://xkcd.com/1205/
For example, the "saves 5 seconds on a task I do once a month" one from the blog post. Hopefully the author did not spend more than 5 minutes writing and maintaining that script, or they're losing time in the long run.
1. Even if it costs more time, it could also save annoyance, which is a benefit in itself.
2. By publishing the scripts, anyone else who comes across them can use them and save time without the initial cost. Similarly, making and sharing these can encourage others to share their own scripts, some of which the author could save time with.
In my experience, it's not "maybe" but "almost certainly", which is why I stopped doing this. Every time I get a new system I would have to set everything up again; it's not cross-platform, doesn't work when using someone else's computer, suddenly breaks for some reason or another, or you forget it exists...
The annoyance of all these factors far outweighs the benefits, in my experience. It's just that the scripts feel good at first, the annoyance doesn't come until later, and eventually you abandon them.
Not all time is created equally though, so I disagree with that xkcd.
If something is time sensitive it is worth spending a disproportionate amount of time to speed things up at some later time. For example if you’re debugging something live, in a live presentation, working on something with a tight deadline etc.
Also you don’t necessarily know how often you’ll do something anyways.
The title of the comic is “Is It Worth the Time?”.
To take a concrete example, if I spend 30 minutes on a task every six months, over 5 years that’s 5 hours of “work”. So the implication is that it’s not worth automating if it takes more than 5 hours to automate.
But if those are 5 hours of application downtime, it’s pretty clearly worth it even if I have to spend way more than 5 hours to reduce downtime.
Time saved also ain't the only factor here. I'll often automate something not because it actually saves a lot of time, but rather because it codifies an error-prone process and having it scripted out reduces the risk of human error by enough of a degree to be worth spending more time on it than I'd save.
I find that now with AI, you can make scripts very quickly, reducing the time to write them by a lot. There is still some time needed for prompting and testing but still.
One thing which is often ignored in these discussions is the experience you gain. The time you “wasted” on your previous scripts by taking longer to write them compounds in time saved in the future because you can now write more complex tasks faster.
The problem is, to really internalize that benefit, one would need to have an open mind to trying things out, and many folks seem to resist that. Oh well, more brain connections for me I suppose.
>YOU DON'T UNDERSTAND. I NEED TO BE CONSTANTLY OPTIMIZING MY UPTIME. THE SCIENCE DEMANDS IT. TIMEMAXXING. I CAN'T FREELY EXPLORE OR BRAINSTORM, IT'S NOT XKCD 1205 COMPLIANT. I MUST EVALUATE EVERY PROPOSED ACTIVITY AGAINST THE TIME-OPTIMIZATION-PIVOT-TABLE.
It's weird how the circle of life progresses for a developer or whatever.
- When I was a fresh engineer I used a pretty vanilla shell environment
- When I got a year or two of experience, I wrote tons of scripts and bash aliases and had a 1k+ line .bashrc the same as OP
- Now, as a more tenured engineer (15 years of experience), I basically just want a vanilla shell with zero distractions, aliases or scripts and use native UNIX implementations. If it's more complicated than that, I'll code it in Python or Go.
I think it's more likely to say that this comes from a place of laziness than some enlightened peak. (I say this as someone who does the same, and is lazy).
When I watch the work of coworkers or friends who have gone down these rabbit holes of customization, I always learn some interesting new tools - lately I've added atuin, fzf, and a few others to my Linux install.
I went through a similar cycle. Going back to simplicity wasn't about laziness for me; it was because I started working across a bunch more systems and didn't want to do my whole custom setup on all of them, especially ephemeral stuff like containers allocated on a cluster for a single job. So rather than using my fancy setup sometimes and fumbling through the defaults at other times, I just got used to operating more efficiently with the defaults.
You can apply your dotfiles to servers you SSH into rather easily. I'm not sure what your workflow is like but frameworks like zsh4humans have this built in, and there are tools like sshrc that handle it as well. Just automate the sync on SSH connection. This also applies to containers if you ssh into them.
Do you have experience with these tools? Some such as sshrc only apply temporarily per session and don't persist or affect other users. I keep plain 'ssh' separate from shell functions that apply dotfiles and use each where appropriate. You can also set up temporary application yourself pretty easily.
If, in the year 2025, you are still using a shared account called "root" (password: "password"), and it's not a hardware switch or something (and even they support user accounts these days), I'm sorry, but you need to do better. If you're the vendor, you need to do better, if you're the client, you need to make it an issue with the vendor and tell them they need to do better. I know, it's easy for me to say from the safety of my armchair at 127.0.0.1. I've got some friends in IT doing support that have some truly horrifying stories. But holy shit why does some stuff suck so fucking much still. Sorry, I'm not mad at you or calling you names, it's the state of the industry. If there were more pushback on broken busted ass shit where this would be a problem, I could sleep better at night, knowing that there's somebody else that isn't being tortured.
Sometimes we need to use service accounts, so while you do have your own account, all the interesting things happen in svc_foo, to which you cannot add your .files.
You said you were already using someone else's environment.
You can't later say that you don't.
Whether or not shell access makes sense depends on what you are doing, but a well written application server running in a cloud environment doesn't need any remote shell account.
It's just that approximately zero typical monolithic web applications meet that level of quality, and given that 90% of "developers" are clueless, they can often convince management that being stupid is OK.
They do get to work on someone else's server; they do not get a separate account on that server. The client would not be happy to have them mess around with the environment.
They specifically mentioned service accounts. If they’re given a user account to log in as, they still might have to get into and use the service account, and its environment, from there. If the whole purpose was to get into the service account, and the service account is already set up for remote debugging, then the client might prefer to skip the creation of the practically useless user account.
Could you help me understand what assumptions about the access method you have in place that make this seem unprofessional?
Let's assume they need access to the full service account environment for the work, which means they need to login or run commands as the service account.
This is a bit outside my domain, so this is a genuine question. I've worked on single user and embedded systems where this isn't possible, so I find the "unprofessional" statement very naive.
The defaults are unbearable. I prefer using chezmoi to feel at home anywhere. There's no reason I can't at least have my aliases.
I'd rather take the pain of writing scripts to automate this for multiple environments than suffer the death by a thousand cuts which are the defaults.
chezmoi is the right direction, but I don't want to have to install something on the other server, I should just be able to ssh to a new place and have everything already set up, via LocalCommand and Host * in my ~/.ssh/config
I gave it a try a few months ago, but it did not work for me. My main issue was that atuin broke my workflow with fzf (if I remember correctly, pressing ctrl+r to look up my shell history did not work well after installing atuin).
For anyone else reading this comment who was confused because this seems like the opposite of what you'd expect about Nix: Hacker News ate the asterisks and turned them into italics.
Besides many *nix computers I also have a wife, dog, children, chores, and shopping to be done. Unlike when I was a young engineer, I can no longer stay up all night fiddling with bash scripts and environments.
What do your wife, dog, children, chores, and shopping have to do with custom configuration and scripts? Just set up a Git repo online, put your files there, and take a couple of minutes to improve it incrementally when you encounter inconveniences. And just like that, you've made your life easier for a marginal effort.
Having a wife increases the opportunity cost of the time you spend maintaining the scripts, and also increases the cost of writing them (when the wife is nagging).
Some are desktops, some laptops, some servers. Different packages installed, different hardware. Three more variants.
Yes, I do have a script to set up my environment, but it already has a lot of conditional behavior to handle these five total variants. And I don't want to have to re-test the scripts and re-sync often.
I've heard this often, but I'm going on ~25 years of using Linux, and I would be lost without my dotfiles. They represent years of carefully crafting my environment to suit my preferences, and without them it would be like working on someone else's machine. Not impossible, just very cumbersome.
Admittedly, I've toned down the configs of some programs, as my usage of them has evolved or diminished, but many are still highly tailored to my preferences. For example, you can't really use Emacs without a considerable amount of tweaking. I mean, you technically could, but such programs are a blank slate made to be configured (and Emacs is awful OOB...). Similarly for zsh, which is my main shell, although I keep bash more vanilla. Practically the entire command-line environment and the choices you make about which programs to use can be considered configuration. If you use NixOS or Guix, then that extends to the entire system.
If you're willing to allow someone else to tell you how you should use your computer, then you might as well use macOS or Windows. :)
Yeah - been there, done that, too. I feel like the time I gain from having a shortcut is often less than what I would need to maintain it or to remember the real syntax when I'm on a machine where it's not available (which happens quite often in my case). I try to go with system defaults as much as possible nowadays.
I am going through a phase of working with younger engineers who have many dotfiles, and I just think "Oh, yeh, I remember having lots of dotfiles. What a hassle that was."
Nowadays I just try to be quite selective with my tooling and learn to change with it - "like water", so to speak.
(I say this with no shade to those who like maintaining their dotfiles - it takes all sorts :))
As a person who loves their computer, my ~/bin is full. I definitely (not that you said this) do not think "everything I do has to be possible on every computer I am ever shelled into".
Having been a person on a computer for decades, I have tuned how I want to do things that are incredibly common for me.
Though perhaps you're referring to work and not hobby/life.
> When I was a fresh engineer I used a pretty vanilla shell environment. When I got a year or two of experience, I wrote tons of scripts
Does this mean that you learned to code to earn a paycheck? I'm asking because I had written hundreds of scripts and Emacs Lisp functions to optimize my PC before I got my first job.
I can't say I relate at all (5 years of experience).
They'll have to pry my 1000-line .zshrc from my cold, dead hands.
For example, zsh-autosuggestions improves my quality of life so ridiculously much it's not even funny.
I moved away from a 1000-line .zshrc when I had to do stuff on Linux VMs/Docker containers and was lost a lot. But zsh-autosuggestions and fzf-tab are not going anywhere.
Prepare to swing back again. With nearly 30 years experience I find the shell to be the best integration point for so many things due to its ability to adapt to whatever is needed and its universal availability. My use of a vanilla shell has been reduced to scripting cases only.
On the other hand, the author seems to have a lot of experience as well.
Personally I tend to agree... there is a very small subset of things I find worth aliasing. I have a very small amount and probably only use half of them regularly. Frankly I wonder how my use case is so different.
In my case I'd start typing it in my browser and then just click something I've visited 100 times before. There is something to be said for reducing that redundant network call, but I don't think it makes much practical difference, and the mental mapping/discoverability of aliases isn't nothing.
The moment of true enlightenment is when you finally decide to once and for all memorize all the arguments and their order for those command line utilities that you use at an interval that's just at the edge of your memory: xargs, find, curl, rsync, etc.
That, plus knowing how to parse a man page to actually understand how to use a command (a skill that takes years to master), pretty much removes the need for most aliases and scripts.
I already have limited space for long term memory, bash commands are very far down the list of things I'd want to append to my long term storage.
I use ctrl-R with a fuzzy matching program, and let my terminal remember it for me.
And before it's asked: yes that means I'd have more trouble working in a different/someone else's environment. But as it barely ever happens for me, it's hardly an important enough scenario to optimize for.
Why would I even attempt to do that? Life is too short to try to remember something like that. Maybe 20 years ago, when internet access was not that common. Or maybe if you are a hacker, hacking other people's machines. Me? Just some dev trying to make some money to feed my family? I prefer to take a walk in the woods.
I just use the autocd zsh shell option for this. And I also use `hash -d` to define shortcuts for common directories. Then just “executing” something like `~gh/apache/kafka` will cd to the right place.
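For anyone who hasn't seen those two zsh features, the relevant .zshrc lines are just (the directory path is an example):

setopt autocd                    # typing a directory path alone cd's into it
hash -d gh=~/code/github.com     # defines the named directory ~gh
# now "executing" ~gh/apache/kafka drops you into ~/code/github.com/apache/kafka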
I use a dotfile with aliases and functions, mostly to document / remember commands I find useful. It's been a handy way to build a living document of the utils I use regularly, and is easy to migrate to each new workstation.
Given the nature of current operating systems and applications, do you think the idea of “one tool doing one job well” has been abandoned? If so, do you think a return to this model would help bring some innovation back to software development?
Rob Pike: Those days are dead and gone and the eulogy was delivered by Perl.
But was the eulogy written in Perl poetry? I see it everywhere, but I don't know who this JAPH guy is. It's a strange way of spelling Jeff, and it's odd that he types his name in all caps, but he has published a remarkable quantity of works and he's even more famous than the anonymous hacker known as 4chan.
Oh, I hate that paradigm. Well, maybe chmod, ls, rsync, and curl all do their OWN thing very well, but every time I use one of those tools I have to remember whether, say, more detailed output is -v, or maybe -vvv, or --verbose, or -x for some reason, because a maintainer felt like it at 2:32 in the morning 17 years ago... Some consistency would help, but... it's probably impossible; the flame war over -R meaning recursive or read-only would never end.
For the Infra Engineers out there who still manage fleets of pets, this is doubly true. You may not have access or be able to use all your shortcut scripts, so you had better know the raw commands on that unsupported RHEL6 host.
I prefer using kubectl over any other method, so I have plenty of functions to help with that. I'd never consider using Python or Go for this, although I do have plenty of Python and Go "scripts" on my path too.
If you come through the other side, you set up LocalCommand in your .ssh/config which copies your config to every server you ssh to, and get your setup everywhere.
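A sketch of what that can look like in ~/.ssh/config (file names are placeholders; note that the scp here opens a second connection, so it pairs best with ControlMaster):

Host *
    PermitLocalCommand yes
    # Runs locally after each connection is established; %n is the host as given on the command line.
    LocalCommand scp -q ~/.config/portable.rc %n:.portable.rc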
I've written on this before, but I have an extensive collection of "at" scripts. This started 25+ years ago when I dragged a PC tower running BSD to a friend's house, and their network differed from mine. So I wrote an @friend script which did a bunch of ifconfig foo.
Over time that's grown to an @foo script for every project I work on, every place I frequent that has some kind of specific setup. They are prefixed with an @ because that only rarely conflicts with anything, and tab-complete helps me remember the less frequently used ones.
The @project scripts set up the whole environment, alias the appropriate build tools and versions of those tools, prepare the correct IDE config if needed, drop me in the project's directory, etc. Some start a VPN connection because some of my clients only have git access over VPN, etc.
Because I've worked on many things over many years, most of these scripts also output some "help" output so I can remember how shit works for a given project.
Edit: a word on aliases, I frequently alias tools like maven or ansible to include config files that are specific to that project. That way I can have a .m2 folder for every project that doesn't get polluted by other projects, I don't have to remember to tell ansible which inventory file to use, etc. I'm lazy and my memory is for shit.
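To illustrate the alias part, a hypothetical @myproject script (assuming it is sourced, or starts a subshell, so the aliases and cd stick; -gs and -Dmaven.repo.local are Maven's own flags, -i is ansible-playbook's inventory flag, and the paths are made up):

export PROJECT_HOME=~/work/myproject
alias mvn='mvn -gs "$PROJECT_HOME/.m2/settings.xml" -Dmaven.repo.local="$PROJECT_HOME/.m2/repository"'
alias ansible-playbook='ansible-playbook -i "$PROJECT_HOME/inventory/hosts.ini"'
cd "$PROJECT_HOME"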
Slightly related, but mise, a tool you can use instead of e.g. make, has “on enter directory” hooks that can reconfigure your system quite a bit whenever you enter the project directory in the terminal. Initially I was horrified by this idea, but I have to admit it’s been quite nice to enter a directory and have everything set up just right, also for new people joining. It has built-in version management of just about every command-line tool you could imagine, so an entire team can be on a consistent setup of Python, Node, Go, etc.
I see other people mentioning env, and mise does this too, with additional support for extra env overrides via a dedicated file, for example a .mise.testing.toml config, and running something like:
MISE_ENV=testing bun run test
(“testing” in this example can be whatever you like)
I'm stealing the top comment here because you probably know what I'm asking.
I've always wanted a Linux directory hook that runs some action. Say I have a scripts dir filled with 10 different shell scripts. I could easily have a readme or something to remember what they all do.
What I want is some hook in a dir that every time I cd into that dir it runs the hook. Most of the time it would be a simple 'cat usage.txt' but sometimes it maybe 'source .venv/bin/activate'.
I know I can alias the cd and the hook together, but I don't want that.
direnv's intended use case is loading environment variables (you could use it to load your virtualenv), but it works by sourcing a script — and that script can be ‘cat usage.txt’.
Great tool.
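Concretely, the .envrc for the scenario above can be as small as this (a sketch; direnv must be hooked into your shell and you run `direnv allow` once in the directory):

# .envrc — evaluated by direnv every time you cd into this directory
cat usage.txt                # output is displayed in the terminal
source .venv/bin/activate    # exported VIRTUAL_ENV/PATH changes stay active while you're inside the directory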
If you use Emacs (and you should!), there’s a direnv mode. Emacs also has its own way to set configuration items within a directory (directory-local variables), and is smart enough to support two files, so that there can be one file checked into source control for all members of a project and another ignored for one’s personal config.
direnv does exactly what you describe (and a lot more); I pair it with a flake.nix. cd into the directory and the hook runs automatically. I use it in every single project/repository to set environment variables and install project-specific dependencies locked to specific versions.
As other comments say, direnv does that, but honestly you should look into mise-en-place (mise) which is really great, and also includes a "mini-direnv"
It is often useful to chain multiple sed commands and sometimes shuffle them around. In those cases I would need to keep changing the first sed. Sometimes I need to grep before I sed. Using cat, tail, and head makes things more modular in the long run, I feel. It’s the ethos of each command doing one small thing.
My fav script to unpack anything, found a few years ago somewhere
# ex - archive extractor
# usage: ex <file>
function ex() {
  if [ -f "$1" ] ; then
    case "$1" in
      *.tar.bz2) tar xjf "$1" ;;
      *.tar.gz)  tar xzf "$1" ;;
      *.tar.xz)  tar xf "$1" ;;
      *.bz2)     bunzip2 "$1" ;;
      *.rar)     unrar x "$1" ;;
      *.gz)      gunzip "$1" ;;
      *.tar)     tar xf "$1" ;;
      *.tbz2)    tar xjf "$1" ;;
      *.tgz)     tar xzf "$1" ;;
      *.zip)     unzip "$1" ;;
      *.Z)       uncompress "$1" ;;
      *.7z)      7z x "$1" ;;
      *)         echo "'$1' cannot be extracted via ex()" ;;
    esac
  else
    echo "'$1' is not a valid file"
  fi
}
For compression, I have one for .tar.gz, but it's not that popular on my system. I need something a bit easier than 'pack file file file archive.tar.gz'.
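Something like this tiny wrapper might scratch that itch (a sketch; the name and argument order are made up):

# pack archive.tar.gz file1 dir2 ...   -> bundle the listed paths into a .tar.gz
pack() {
    if [ "$#" -lt 2 ]; then
        echo "usage: pack <archive.tar.gz> <path>..." >&2
        return 1
    fi
    local out=$1
    shift
    tar czf "$out" "$@"
}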
I use an `anon` function to anonymize my Mac clipboard when I want to paste something to the public ChatGPT, company Slack, private notes, etc. I ran it through itself before pasting it here, for example.
I think they're the same except '.' is POSIX and 'source' is specific to bash and compatible shells. I personally just use source since it's easier to read and zsh and bash account for basically 100% of my shell usage.
Historical note: getting hold of these scripts by chatting to various developers was the motivation for the original 2004 "lifehacks" talk[1][2]. If you ever get into an online argument over what is a "life hack" and what isn't, feel free to use short scripts like these as the canonical example.
Otherwise, I am happy to be pulled into your discussion, Marshall McLuhan style[3] to adjudicate, for a very reasonable fee.
Note: fftime copies the audio and video data without re-encoding, which can be a little janky but often works fine, and can be much (100x) faster on large files. To re-encode, just remove "-c copy".
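For anyone who wants to recreate something like it, the core is roughly this (the argument order is my guess, not necessarily the article's; both -ss and -to are given as input options, which needs a reasonably recent ffmpeg):

# fftime input.mp4 00:01:30 00:02:45   -> cut the clip between the two timestamps without re-encoding
fftime() {
    local in=$1 start=$2 end=$3
    ffmpeg -ss "$start" -to "$end" -i "$in" -c copy "${in%.*}_clip.${in##*.}"
}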
I'm kicking myself for not thinking of the `nato` script.
I tend to try to not get too used to custom "helper" scripts because I become incapacitated when working in other systems. Nevertheless, I really appreciate all these scripts if nothing else than to see what patterns other programmers pick up.
My only addition is a small `tplate` script that creates HTML, C, C++, Makefile, etc. "template" files to start a project. Kind of like a "wizard setup". e.g.
$ tplate c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
}
And of course, three scripts `:q`, `:w` and `:wq` that get used surprisingly often:
Some cool things here but in general I like to learn and use the standard utilities for most of this. Main reason is I hop in and out of a lot of different systems and my personal aliases and scripts are not on most of them.
sed, awk, grep, and xargs along with standard utilities get you a long long way.
Same. I interact with too many machines, many of which are ephemeral and will have been reprovisioned the next time I have to interact with them.
I value out-of-the-box stuff that works most everywhere. I have a fairly lightweight zsh config I use locally, but it’s mostly just stuff like a status line that suits me, better history settings, etc. Stuff I won’t miss if it’s not there.
I totally agree with this, I end up working on many systems, and very few of them have all my creature comforts. At the same time, really good tools can stick around and become impactful enough to ship by default, or to be easily apt-get-able. I don't think a personal collection of scripts is the way, but maybe a well maintained package.
I keep meaning to generalize this (directory target, multiple sources, flags), but I get quite a bit of mileage out of this `unmv` script even as it is:
#!/bin/sh
if test "$#" != 2
then
echo 'Error: unmv must have exactly 2 arguments'
exit 1
fi
exec mv "$2" "$1"
I have mkcd exactly (I wonder how many of us do, it's so obvious).
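For anyone who doesn't have it yet, the whole thing is basically:

mkcd() { mkdir -p -- "$1" && cd -- "$1"; }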
I have almost the same, but differently named with scratch(day), copy(xc), markdown quote(blockquote), murder, waitfor, tryna, etc.
I used to use telegram-send with a custom notification sound a lot for notifications from long-running scripts when I walked away from the laptop.
I used to have one called timespeak that would speak the time to me every hour or half hour.
I have go_clone that clones a repo into GOPATH which I use for organising even non-go projects long after putting go projects in GOPATH stopped being needed.
I liked writing one-offs, and I don't think it's premature optimization because I kept getting faster at it.
> ocr my_image.png extracts text from an image and prints it to stdout. It only works on macOS
The Mac Shortcut at https://github.com/e-kotov/macos-shortcuts lets you select a particular area of the screen (as with Cmd-Shift-4) and copies the text out of that, allowing you to copy exactly the text you need from anywhere on your screen with one keyboard shortcut. Great for popups with unselectable text, and copying error messages from coworkers' screenshares.
Something I've long appreciated is a little Perl script to compute statistics on piped in numbers, I find it great for getting quick summaries from report CSVs.
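Not the parent's Perl, but if all you need is count/min/max/mean from a column of numbers, an awk one-liner gets you most of the way there (the cut field and file name are examples):

cut -d, -f3 report.csv | awk '{ s += $1; if (NR == 1 || $1 < min) min = $1; if (NR == 1 || $1 > max) max = $1 }
    END { if (NR) printf "n=%d min=%g max=%g mean=%g\n", NR, min, max, s / NR }'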
I find that I like working with the directory stack and having a shortened version of the directory stack in the title bar, e.g. by modifying the stock Debian .bashrc
# If this is an xterm set the title to the directory stack
case "$TERM" in
xterm*|rxvt*)
if [ -x ~/bin/shorten-ds.pl ]; then
PS1="\[\e]0;\$(dirs -v | ~/bin/shorten-ds.pl)\a\]$PS1"
else
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
fi
;;
*)
;;
esac
The script shorten-ds.pl takes e.g.
0 /var/log/apt
1 ~/Downloads
2 ~
and shortens it to:
0:apt 1:Downloads 2:~
#!/usr/bin/perl -w
use strict;
my @lines;
while (<>) {
chomp;
s%^ (\d+) %$1:%;
s%:.*/([^/]+)$%:$1%;
push @lines, $_
}
print join ' ', @lines;
That, coupled with functions that take 'u 2' as shorthand for 'pushd +2' and 'o 2' for 'popd +2', makes for easy manipulation of the directory stack:
u() {
if [[ $1 =~ ^[0-9]+$ ]]; then
pushd "+$1"
else
pushd "$@"
fi
}
o() {
if [[ $1 =~ ^[0-9]+$ ]]; then
popd "+$1"
else
popd "$@" # lazy way to cause an error
fi
}
I have three different ways to open a file with vim:
v: vim (or neovim, in my case)
vv: search/preview and open file by filename
vvv: search/preview and open file by its content
alias v='nvim'
alias vv='f=$(fzf --preview-window "right:50%" --preview "bat --color=always {1}"); test -n "$f" && v "$f"'
alias vvv='f=$(rg --line-number --no-heading . | fzf -d: -n 2.. --preview-window "right:50%:+{2}" --preview "bat --color=always --highlight-line {2} {1}"); test -n "$(echo "$f" | cut -d: -f1)" && v "+$(echo "$f" | cut -d: -f2)" "$(echo "$f" | cut -d: -f1)"'
Broadly, I very much love this approach to things and wish it were more "acceptable". It reminds me of the opposite of things like "the useless use of cat", which to me is one of the WORST meme-type things in this space.
Like, it's okay -- even good -- for the tools to bend to the user and not the other way around.
I used to do this, but unary kind of sucks after 3, so maybe others might like this better before their fingers get trained:
..() { # Usage: .. [N=1] -> cd up N levels
    local d="" i
    for ((i = 0; i < ${1:-1}; i++)); do
        d="$d/.."            # Build up a string & do 1 cd to preserve the dirstack
    done
    [[ -z $d ]] || cd ./$d
}
Of course, what I have actually been doing since the early 90s is realizing that a single "." with no args is normally illegal, and that people "cd" soooo much more often than they source script definitions. So I hijack that to save one "." in the first 3 cases and then take a number for the general case.
# dash allows non-AlphaNumeric alias but not function names; POSIX is silent.
cd1 () { if [ $# -eq 0 ]; then cd ..; else command . "$@"; fi; } # nice "cd .."
alias .=cd1
cdu() { # Usage: cdu [N=2] -> cd up N levels
local i=0 d="" # "." already does 1 level
while [ $i -lt ${1:-"2"} ]; do d=$d/..; i=$((i+1)); done
[ -z "$d" ] || cd ./$d; }
alias ..=cdu
alias ...='cd ../../..' # so, "."=1up, ".."=2up, "..."=3up, ".. N"=Nup
And as per the comment, this even works in lowly dash, but needs a slight workaround; bash can just define .() and ..() shell functions directly, as with zsh.
In fish, I have an abbreviation that automatically expands double dots into ../ so that you can just spam double dots and visually see how far you're going.
# Modified from
# https://github.com/fish-shell/fish-shell/issues/1891#issuecomment-451961517
function append-slash-to-double-dot -d 'expand .. to ../'
# Get commandline up to cursor
set -l cmd (commandline --cut-at-cursor)
# Match last line
switch $cmd[-1]
case '*.'
commandline --insert './'
case '*'
commandline --insert '.'
end
end
Good point, when working with keybindings, you'll inevitably end up overriding built-ins. I see it as a trade-off, between something I don't know of (and wouldn't use) and something I find useful. Works for me :)
Absolutely. From back in the day, the annoying one was GNU screen, which took over ctrl-a by default. I overrode that to be ctrl-^, which in bash is transpose (making "zx" into "xz"), which was rare enough that I was okay with losing it.
Does zsh support this out of the box? Because I definitely never had to set up any of these kinds of aliases but have been using this shorthand dot notation for years.
Nice! Tangentially related: I built a (MacOS only) tool called clippy to be a much better pbcopy. It was just added to homebrew core. Among other things, it auto-detects when you want files as references so they paste into GUI apps as uploads, not bytes.
clippy image.png # then paste into Slack, etc. as upload
clippy -r # copy most recent download
pasty # copy file in Finder, then paste actual file here
Adding the word "then" to your first comment would have helped me: (lacking context, I thought the comments explained what the command does, as is common convention)
clippy image.png # then paste into Slack, etc. as upload
Also:
pasty # paste actual file, after copying file in Finder
As a programmer, you sometimes want to make an alphabet lookup table. So, something like:
var alpha_lu = "abcdefghijklmnopqrstuvwxyz";
Typing it out by hand is error prone as it's not easy to see if you've swapped the order or missed a character.
I've needed the alphabet string or lookup rarely, but I have needed it before. Some applications could include making your own UUID function, making a small random naming scheme, associating small categorical numbers to letters, etc.
The author of the article mentioned they do web development, so it's not hard to imagine they've had to create a URL shortener, maybe more than once. So, for example, creating a small name could look like:
function small_name(len) {
let a = "abcdefghijklmnopqrstuvwxyz",
v = [];
for (let i=0; i<len; i++) {
v.push( a[ Math.floor( Math.random()*a.length ) ] );
}
return v.join("");
}
//...
small_name(5); // e.g. "pfsor"
Dealing with strings, dealing with hashes, random names, etc., one could imagine needing to do functions like this, or functions that are adjacent to these types of tasks, at least once a month.
Personally I only ever needed it once. I was re-implementing a javascript function doing some strange string processing, using characters in the input string to calculate indexes into an alphabet array to replace them with. Since I was using Python I just imported string.ascii_lowercase instead of manually typing the sequence, and when I showed the code to someone more experienced than me, I was told it was base64, so all my efforts were replaced with a single base64.b64decode() call.
If your native language uses a different alphabet, you might not have been taught "the alphabet song". For example, I speak/read passable Russian, but could not alphabetize a list in Russian.
For me it's when I call customer service or support on the phone, and either give them an account #, or confirm a temporary password that I have been verbally given.
Using 'copy' as a clipboard script tells me OP never lived through the DOS era I guess... Used to drive me mad switching between 'cp' in UNIX and 'copy' in DOS.
(Same with the whole slash vs backslash mess.)
> cpwd copies the current directory to the clipboard. Basically pwd | copy. I often use this when I’m in a directory and I want use that directory in another terminal tab; I copy it in one tab and cd to it in another. I use this once a day or so.
You can configure your shell to notify the terminal of directory changes, and then use your terminal’s “open new window” function (eg: ctrl+shift+n) to open a new window retaining the current directory.
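In zsh, the "notify the terminal" part is only a few lines in .zshrc, using the OSC 7 escape (a sketch; paths containing special characters should really be percent-encoded first):

# Tell the terminal about the current directory on every cd.
_osc7_cwd() { printf '\e]7;file://%s%s\a' "$HOST" "$PWD"; }
chpwd_functions+=(_osc7_cwd)
_osc7_cwd   # report the starting directory too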
As a bonus, I prepend my custom aliases or scripts with my user name and a hyphen (i.e. helicaltwine-). It helps me recall rarely used scripts when I need them and have forgotten the names.
I follow a similar but more terse pattern. I prepend them all with a comma, and I have yet to find any collisions. If you're using bash (and I assume posix sh as well), the comma character has no special meaning, so this is quite a nice use for it. I agree that it's nice to type ",<tab>" and see all my custom scripts appear.
The Gen AI tooling is exceptionally good at doing these sorts of things, and at way more than just "mkdir $1 && cd $1". For example:
I have used it to build an "escmd" tool for interacting with Elasticsearch. It makes the available commands much more discoverable, formats the output in tables, and gets rid of sending JSON to a curl command.
A variety of small tools that interact with Jira (list my tickets, show tickets that are tagged as needing ops interaction in the current release).
A tool to interact with our docker registry to list available tags and to modify tags, including colorizing them based on the sha hash of the image so it's obvious which ones are the same. We manage docker container deploys based on tags so if we "cptag stg prod" on a project, that releases the staging artifact to production, but we also tag them by build date and git commit hash, so we're often working with 5-7 tags.
Script to send a "Software has successfully been released" message via gmail from the command-line.
A program to "waituntil" a certain time to run a command: "waituntil 20:00 && run_release", with nice display of a countdown.
I have a problem with working on too many things at once and then committing unrelated things tagged with a particular Jira case. So I had it write me a commit program that lists my tickets, shows the changed files, and lets me select which ones go with that ticket.
All these are things I could have built before, but would have taken me hours each. With the GenAI, they take 5-15 minutes of my attention to build something like this. And Gen AI seems really, really great at building these small, independent tools.
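For illustration, a "waituntil" helper like the one described above can be sketched in a few lines of bash (my own rough guess, assuming GNU date, not the generated tool):
waituntil() {
  local target now remaining
  target=$(date -d "$1" +%s) || return    # e.g. `waituntil 20:00`
  now=$(date +%s)
  (( target <= now )) && target=$(( target + 86400 ))   # if already past, assume tomorrow
  while now=$(date +%s); (( now < target )); do
    remaining=$(( target - now ))
    printf '\rwaiting %02d:%02d:%02d ' $(( remaining / 3600 )) $(( remaining % 3600 / 60 )) $(( remaining % 60 ))
    sleep 1
  done
  printf '\n'
}
Then `waituntil 20:00 && run_release` works as described.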
Where vtt2txt is a python script — slightly too long to paste here — which strips out the subtitle formatting, leaving a (mostly) human readable transcript.
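The stripping itself doesn't need much; a rough shell equivalent (a guess at the approach, not the author's Python script) is:
# drop the WEBVTT header, cue numbers, and timing lines, then squeeze blank lines
grep -vE '^(WEBVTT|[0-9]+$|[0-9:.]+ --> )' talk.en.vtt | cat -s > talk.txt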
Love it! I'll absolutely be borrowing some of these :)
On every machine of mine I tend to accumulate a bunch of random little scripts along these lines in my ~/.local/bin, but I never seem to get around to actually putting them anywhere. Trying to knock that habit by putting any new such scripts in a “snippets” repo (https://fsl.yellowapple.us/snippets); ain't a whole lot in there yet, but hopefully that starts to change over time.
I started writing way more utility scripts when I found babashka. Magic of clojure, instant startup, easy to shell out to any other command, tons of useful built in stuff, developing with the REPL. It’s just a good time!!
I wrote it in a way that's too intertwined with my other shit to be shareable with people, but honestly you can copy-paste my comment to your friendly neighborhood LLM and you'll get something decent. Indeed it uses `env`.
I have a script/alias, named p, that allows me to task switch. It takes an issue and maybe an argument, and does a bunch of things if they make sense in context. It has grown over the years.
So 'p ISSUE-123':
* creates a folder issues/ISSUE-123 for work files, containing links to a backed-up folder and the project repository. The shell is cd'd to it.
* The repo might get a new branch with the issue name.
* An IDE might start containing the project.
* The browser's home button brings you to a page with all kinds of relevant links: the issue tracker, the CI, all kinds of test pages, etc.
* The open/save dialogs for every program get a shortcut named 'issue'.
* A note is made in a log that allows me to do time tracking at the end of the week.
* A commit message template with the issue is created.
My most important script has been to remap CapsLock as a kind of custom Meta key, that transforms (when pressed) the Space into Return, hjkl into arrows, io into PgUp/PgDn, and 1-9 into function keys. Now I have a 60% keyboard that takes 0 space on my desk. And I am reaaaally happy with this setup.
[that, plus LinkHint plugin for Firefox, and i3 for WM is my way to go for a better life]
Regarding the `timer` script, it appears to block the shell. A way to avoid this would be to spawn a subshell for the sleep command like this: `( sleep "$1" && notify ... ) &`
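Spelled out as a function, with notify-send standing in for the original `notify` helper (which isn't shown):
timer() {
  # background subshell: the prompt comes back immediately
  ( sleep "$1" && notify-send "timer" "${2:-time's up}" ) &
}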
I got a ccurl python script that extracts the cookies from my Firefox profile and then passes those on to curl, that way I can get webpages where I'm logged in.
This is one area where I've found success in vibe coding: making scripts for repetitive tasks that sit just above the complexity threshold where the trade-off between automating and doing it manually isn't clear. I have Copilot generate the code for me, and honestly I don't care too much about its quality or extensibility; the scripts are easy enough to read through that I don't feel like my job is being an AI PR reviewer.
> url "$my_url" parses a URL into its parts. I use this about once a month to pull data out of a URL, often because I don’t want to click a nasty tracking link.
This sounds pretty useful!
Coincidentally, I have recently learned that Daniel Stenberg et al (of cURL fame) wrote trurl[1], a libcurl-based CLI tool for URL parsing. Its `--json` option seems to yield similar results as TFA's url, if slightly less concise because of the JSON encoding. The advantage is that recent releases of common Linux distros seem to include trurl in their repos[2].
[1]: https://curl.se/trurl/
[2]: https://pkgs.org/search/?q=trurl
Obviously, to each their own, but to me, this is an overwhelming amount of commands to remember on top of all the ones they are composed of that you will likely need to know anyway — regardless if all the custom ones exist.
Like, I'd have to remember both `prettypath` and `sed`, and given that there's hardly any chance I'll not need `sed` in other situations, I now need to remember two commands instead of one.
On top of that `prettypath` only does s/:/\\n/ on my path, not on other strings, making its use extremely narrow. But generally doing search and replace in a string is incredibly useful, so I'd personally rather just use `sed` directly and become more comfortable with it. (Or `perl`, but the point is the same.)
As I said, that's obviously just my opinion, if loads of custom scripts/commands works for you, all the more power to you!
For someone who uses sed often enough, inventing prettypath won't make sense. However, if producing the correct sed command, be it by remembering the options, reading the manual, or digging through shell history, takes some amount of mental effort, your brain will happily stick "prettypath" into memory as long as doing so stays less mentally taxing than doing the original task from scratch.
The scripts from my junk drawer (https://github.com/peterwwillis/junkdrawer) I use every day are 'kd' and 'gw', which use the Unix dialog command to provide an easy terminal UI for Kubectl and Git Worktrees (respectively)... I probably save 15+ minutes a day just flitting around in those UIs. The rest of the scripts I use for random things; tasks in AWS/Git/etc I can never remember, Terraform module refactoring, Bitbucket/GitHub user management, Docker shortcuts, random password generation, mirroring websites with Wget, finding duplicate files, etc.
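For anyone wondering what a dialog(1)-driven picker like 'gw' looks like, here is a bare-bones sketch (not the linked script; assumes bash):
gw() {
  local items=() wt
  while read -r wt; do
    items+=("$wt" "")                     # dialog --menu wants tag/description pairs
  done < <(git worktree list --porcelain | awk '/^worktree /{print $2}')
  wt=$(dialog --stdout --menu "Pick a worktree" 20 72 12 "${items[@]}") || return
  cd "$wt"
}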
As a fun game, I suggest feeding the entire piece to an LLM and asking it to create those scripts. The differences between Claude, GPT-5 and Gemini are very interesting.
FYI, ctrl-d isn't a shortcut to exit the terminal. It signals EOF (end of file), which, when it reaches the shell, closes the shell's standard input. It generally closes any active interactive input: all REPLs, interactive input to sed, etc. When an interactive shell can no longer get input, it exits as soon as possible, and then its parent, the terminal window, also closes. More or less :)
I've started using snippets for code reviews, where I find myself making the same comments (for different colleagues) regularly. I have a keyboard shortcut opening a fuzzy search to find the entry in a single text file. That saves a lot of time.
As an aside, I find most of these commands very long. I tend to use very short aliases, ideally 2 characters. I'm assuming the author uses tab completion most of the time; if the prefixes don't overlap beyond 3 characters it's not that bad, and maybe the history is more readable.
17 years ago I wrote a short VBA macro that takes the highlighted range of cells, concatenates the values into a comma-separated list, then opens the list in Notepad for easy copying and further use. I can't begin to count the number of executions by myself and those I have shared it with.
-I replace-str
Replace occurrences of replace-str in the initial-arguments
with names read from standard input. Also, unquoted blanks
do not terminate input items; instead the separator is the
newline character. Implies -x and -L 1.
I had my hopes on this project RawDog using local smol sized LLMs but it hasn't been updated in a while. I feel like all this should be running easily in the background nowadays.
> `nato bar` returns Bravo Alfa Romeo. I use this most often when talking to customer service and need to read out a long alphanumeric string, which has only happened a couple of times in my whole life. But it’s sometimes useful!
Even more useful is just learning the ICAO Spelling Alphabet (aka NATO Phonetic Alphabet, of which it is neither). It takes like an afternoon and is useful in many situations, even if the receiver does not know it.
Some time ago I tried to tell my email address to someone in Japan over the phone who did not speak English very well. It turned out to be basically impossible. I realized later one could probably come up with a phonetic alphabet of English words most Japanese know!
Nice. I have a bash script similar to the one listed "removeexif" called prep_for_web which takes any image file (PNG, BMP, JPG, WebP), scrubs EXIF data, checks for transparency and then compresses it to either JPG using MozJPEG or to PNG using PNGQuant.
The "scripts" I use the most that I am most happy with are a set of Vagrant tools that manage initialising different kinds of application environments with an apt cache on the host. Also .ssh/config includes to make it as easy as possible to work with them from VSCode.
I set this stuff up so long ago I sort of forgot that I did it at all; it's like a standard feature. I have to remember I did it.
A subprocess (git) can't modify the working directory of the parent process (the shell). This is a common annoyance with file managers like yazi and ranger as well—you need an extra (usually manual!) installation step to add a shell integration for whichever shell you're using so the shell itself can change directory.
The best solution for automatically cd'ing into the repo is to wrap git clone in a shell function or alias. Unfortunately I don't think there's any way to make git clone print the path a repository was cloned to, so I had to do some hacky string processing that tries to handle the most common usage (ignore the "gh:" in the URL regex, my git config just expands it to "git@github.com:"):
https://github.com/Andriamanitra/dotfiles/blob/d1aecb8c37f09...
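A simplified sketch of that wrapper idea (it skips the edge cases the linked version handles, like an explicit target directory argument):
gcl() {
  git clone "$@" || return
  local repo=${!#}        # last argument: the URL
  repo=${repo##*[/:]}     # keep what follows the last '/' or ':'
  cd "${repo%.git}"       # drop a trailing .git if present
}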
This is really interesting, but I need the highlights reel. So I need a script to summarize Hacker News pages and/or arbitrary web pages. Maybe that's what I want for getting the juice out of Medium articles.
An important advantage of aliases was not mentioned: I see everything in one place and can easily build aliases on top of other aliases without much thinking.
Anyways, my favourite alias that I use all the time is this:
alias a='nvim ~/.zshrc && . ~/.zshrc'
It solves the "not loaded automatically" part, at least for the current terminal.
One of my biggest headaches is stripping a specific number of bytes from the head or tail of a binary file. I couldn't find any built-in tool for that, so I wrote one in C++.
If you use x0vnc (useful if you use a Linux machine both from the attached screen and over VNC, and in a bunch of other scenarios), copy and paste to and from the VNC client is not implemented, which is quite frustrating. Here are two scripts that do that for you; I now use this all day. https://github.com/francoisp/clipshare
I hope to see an operating system with these scripts as built-in, because they are so intuitive and helpful! Which OS will be the first to take this on?
The nato phonetic alphabet one cracked me up. My dude you don't need that, call center employees don't know it, just say S as in Sugar like ur grandma used to.
The NATO alphabet is made of commonly known words that are hard to misspell and hard to mishear (at least the initial phonemes). The person on the other end doesn't need to be able to recite the words, they just need to be able to hear "november" and recognize that it starts with N.
The nato phonetic alphabet is still useful even if the other party doesn't know it; I've used it a bunch of times on the phone to spell out my 10-letter last name. Saves quite a lot of time and energy for me vs saying "letter as in word" for each letter.
Exactly. The listening party doesn't need to have knowledge of the NATO alphabet to still benefit from it since they are just regular English words.
I once had someone sound out a serial number over a spotty phone connection years ago and they said "N as in NAIL". You know what sounds a lot like NAIL? MAIL.
And that is why we don't just arbitrarily make up phonetic alphabets.
I dunno, there's a pretty good chance that the alphabet people spent time and effort designing (to replace earlier efforts, with the goal of reducing ambiguity over noisy connections, where mistakes could cost lives) is better than what you improvise on the spot.
When I worked in customer service, I asked a teammate what I could do to spell back something the customer said, and she taught me that system, it helped me a lot.
I have a script called catfiles that I store in ~/.local/bin that recursively dumps every source file with an associated file header so I can paste the resulting blob into Gemini and ChatGPT in order to have a conversation about the changes I would like to make before I send off the resulting prompt to Gemini Code Assist.
Here's my script if anyone is interested, as I find it to be incredibly useful:
find . -type f \( -name "*.tf" -o -name "*.tfvars" -o -name "*.json" -o -name "*.hcl" -o -name "*.sh" -o -name "*.tpl" -o -name "*.yml" -o -name "*.yaml" -o -name "*.py" -o -name "*.md" \) -exec sh -c 'for f; do echo "### FILE: $f ###"; cat "$f"; echo; done' sh {} +
No offense, but a lot of those scripts are pretty hacky. They may work for the author, but I would not use them without reviewing them and adapting them to my workflow.
That's a fair point. I think the author intended the post to be a treasure trove of ideas for your own scripts, not as something to blindly include in your daily workflow.
It memoizes the command passed to it.
Manually clearing it (for example if I know the underlying data has changed):
In-pipeline memoization (includes the input in the hash of the lookup):
But I find this awkward, and it makes it harder than necessary to experiment with the expensive command itself. Both of those will run curl once.
NOTE: Currently environment variables are not taken into account when hashing.
If you pipe curl's output to it, you'll get a live playground where you can finesse the rest of your pipeline.
That's roughly 10x compression ratio.
Can you give a more complete example of how you would use this to speed up developing a pipeline?
I wonder if we have gotten to the point where we can feed an LLM our bash history and it could suggest improvements to our workflow.
If you do it, I'd love to hear your results.
Separately from that:
In practice, this has never been a problem for me, and I've used this hacked-together command for years.
Uhm, jq _is_ as powerful as awk (more, actually). You can use jq directly and skip awk.
(I know, old habits die hard, and learning functional programming languages is not easy.)
also, this seems a lot like an automated way to write shell scripts that you can pipe to and from. so why not use a shell script that won't surprise anyone instead of this, which might?
The way you’re doing it trashes files sequentially, meaning you hear the trashing sound once per file and ⌘Z in the Finder will only restore the last one. You can improve that (I did it for years) but consider just using the `trash` commands which ships with macOS. Doesn’t use the Finder, so no sound and no ⌘Z, but it’s fast, official, and still allows “Put Back”.
> jsonformat takes JSON at stdin and pretty-prints it to stdout.
Why prioritise node instead of jq? The latter is considerably less code and even comes preinstalled with macOS, now.
> uuid prints a v4 UUID. I use this about once a month.
Any reason to not simply use `uuidgen`, which ships with macOS and likely your Linux distro?
https://www.man7.org/linux/man-pages/man1/uuidgen.1.html
The best part about sharing your config or knowledge is that someone will always light up your blind spots.
Yes! I will take this as a chance to thank everyone who has shared their knowledge on the Internet. You guys are so freaking awesome! You are always appreciated.
A big chunk of my whole life's learning came from all the forums that I used to scour through, hour after hour! These awesome people were always sharing their knowledge, and someone was always adding more. That's what made the Internet the Internet. And almost all of it is now on the brink of being lost, because of greedy corporations.
This habit also helped me with doom-scrolling. I sometimes do doomscroll, but I can catch it quickly and snap out of it. Because, my whole life, I always jumped into the rabbit holes and actually read those big blog posts, the ones with the `A-ha` moments: "Oohh, I can use that", "Ahh, that's clever!"
When browsing doesn't give me that, my brain actually triggers: "What are you doing?"
Later, I got lazy, which I am still paying for. But I am going to get out of it.
Never stop jumping into those rabbit holes!! Well, obviously, not every rabbit hole is a good one, but you'll probably come out wiser.
I've found that pedantic conversations here seem to have greater potential for me to learn something than on other forums/social platforms. On other platforms, when I see someone providing a pedantic response I'll just keep moving on, but on HN I get curious, not only to see who wins the nerd fight, but also because I might learn at least one thing along the way. I like that it's had an effect on how I engage with comment sections.
https://news.ycombinator.com/item?id=45649771
In short, I've never seen somebody flagged simply for having the wrong opinion. Even controversial opinions tend to stay unflagged, unless they're incredibly dangerous or unhinged.
I agree that most dead posts would be a distraction and good to have been kept out.
OTOH I don’t like flagging stories because good ones get buried regularly. But then HN is not a great place for peaceful, nuanced discussion and these threads often descend into mindless flame wars, which would bury the stories even without flagging.
So, meh. I think flagging is a moderately good thing overall but it really lacks in subtlety.
On stories however, I think the flag system is pretty broken. I've seen so many stories that get flagged because people find them annoying (especially AI-related things) or people assume it will turn into a flame war, but it ends up burying important tech news. Even if the flags are reversed, the damage is usually done because the story fell off the front page (or further) and gets very little traction after that.
Can you back this up with data? ;-)
I see citations and links to sources about as rarely here as on reddit.
The difference I see is in the top 1% of comments, which exist here in the first place and are better on average (though that depends on what other forums or subreddits you compare it to; /r/AskHistorians is pretty good for serious history answers, for example), but not in the rest of the comments. Also, fewer distractions, more staying on topic, and the joke replies are punished more often and are less frequent.
- either critique is solid and I learn something
- or commenter is clueless which makes it entertaining
there is very seldom a “middle”
People who agree with an article will most likely just upvote. Hardly anyone ever bothers to comment to offer praise, so most comments that you end up seeing are criticisms.
https://meta.wikimedia.org/wiki/Cunningham%27s_Law
...aaand less directly (though referenced in the wikipedia article)...
https://xkcd.com/386/
> vim [...] I select a region and then run :'<,'>!markdownquote
Just select the first column with ctrl-v, then "I> " then escape. That's 4 keys after the selection, instead of 20.
> u+ 2025 returns ñ, LATIN SMALL LETTER N WITH TILDE
`unicode` is widely available, has a good default search, and many options. BTW, I wonder why "2025" matched "ñ".
> catbin foo is basically cat "$(which foo)"
Since the author is using zsh, `cat =foo` is shorter and more powerful. It's also much less error-prone with long commands, since zsh can smartly complete after =.
I use it often, e.g. `file =firefox` or `vim =myscript.sh`.
It's not installed by default on macOS or Ubuntu, for me.
https://github.com/nivekuil/rip
In powershell I just do
But as a function
That was my thought. I use jq to pretty print json.
What I have found useful is j2p and p2j to convert to/from python dict format to json format (and pretty print the output). I also have j2p_clip and p2j_clip, which read from and then write to the system clipboard so I don't have to manually pipe in and out.
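Something in that spirit fits in two aliases (a sketch; the originals presumably handle the clipboard variants and other edge cases):
# Python dict literal on stdin -> pretty-printed JSON on stdout
alias p2j='python3 -c "import ast,json,sys; print(json.dumps(ast.literal_eval(sys.stdin.read()), indent=2))"'
# JSON on stdin -> Python literal on stdout
alias j2p='python3 -c "import json,sys,pprint; pprint.pprint(json.load(sys.stdin))"'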
> Any reason to not simply use `uuidgen`, which ships with macOS and likely your Linux distro?
I also made a uuid, which just runs uuidgen, but then trims the \n. (And maybe copied to clipboard? It was at my old job, and I don't seem to have saved it to my personal computer.)
Does all the right things and works great.
There’s a similar tool that works well on Linux/BSDs that I’ve used for years, but I don’t have my FreeBSD desktop handy to check.
1: https://mjtsai.com/blog/2025/08/26/the-trash-command/
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Instead of being rude to a fellow human making an inoffensive remark, you could’ve spent your words being kind and describing the scenario you claim exists. For all you know, maybe they did ask ChatGPT and were unconvinced by the answer.
As a side note, I don’t even understand how your swipe would make sense. If anything, needing ChatGPT is what demonstrates a lack of imagination (having the latter you don’t need the former).
What's always interesting to me is how many of these I'll see and initially think, "I don't really need that." Because I'm well aware of the effect (which I'm sure has a name - I suppose it's similar to induced demand) of "make $uncommon_task much cheaper" -> "$uncommon_task becomes the basis of an entirely new workflow/skill". So I'm going to try out most of them and see what sticks!
Also: really love the style of the post. It's very clear but also includes super valuable information about how often the author actually uses each script, to get a sense ahead of time for which ones are more likely to trigger the effect described above.
A final aside about my own workflows which betrays my origins... for some of these operations and for others i occasionally need, I'll just open a browser dev tools window and use JS to do it, for example lowercasing a string :)
My favorite recent one was Handmade Seattle, but that one's kaput as of this year, and it seems everything else along similar lines is overseas and/or invite-only.
Of course, I _do_ have some custom shell scripts and aliases, but these are only for things I will ever do locally.
If you have to do that, the script needs improvement. Always add a `--help` which explains what it does and what arguments it takes.
¹ https://github.com/ruby/optparse
² https://github.com/apple/swift-argument-parser
https://github.com/p-ranav/argparse is a single-file argparse for Modern C++, which means it's typically straightforward, if baffling in places and a bit annoying to step through in the debugger.
The nice thing about the argparse ports is that provided they take their job seriously, your programs will all end up with a vaguely consistent command line UX in terms of longopt syntax, and, importantly, a --help page.
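Even without an argparse port, a hand-rolled help block in plain bash covers the basics; a minimal sketch (the names here are made up):
usage() {
  cat <<'EOF'
usage: frob [-v] FILE

Frobnicates FILE.

  -v           verbose output
  -h, --help   show this help and exit
EOF
}

case "${1:-}" in
  -h|--help) usage; exit 0 ;;
esac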
----
¹ https://github.com/YellowApple/bales
For example. The "saves 5 seconds task that I do once a month" from the blog post. Hopefully the author did not spend more than 5 minutes writing said script and maintaining it, or they're losing time in the long run.
1. even if it costs more time, it could also save more annoyance which could be a benefit
2. by publishing the scripts, anyone else who comes across them can use them and save time without the initial cost. similarly, making and sharing these can encourage others to share their own scripts, some of which the author could save time with
The annoyance of all these factors far outweighs the benefits, in my experience. It's just that the scripts feel good at first and the annoyance doesn't come until later and eventually you abandon them.
Sounds like something you could automate with a script :)
If something is time sensitive it is worth spending a disproportionate amount of time to speed things up at some later time. For example if you’re debugging something live, in a live presentation, working on something with a tight deadline etc.
Also you don’t necessarily know how often you’ll do something anyways.
The xkcd doesn't seem to be pushing an agenda, just providing a lookup table. Time spent vs time saved is factual.
To take a concrete example, if I spend 30 minutes on a task every six months, over 5 years that’s 5 hours of “work” hours. So the implication is that it’s not worth automating if it takes more than 5 hours to automate.
But if those are 5 hours of application downtime, it’s pretty clearly worth it even if I have to spend way more than 5 hours to reduce downtime.
- When I was a fresh engineer I used a pretty vanilla shell environment
- When I got a year or two of experience, I wrote tons of scripts and bash aliases and had a 1k+ line .bashrc the same as OP
- Now, as a more tenured engineer (15 years of experience), I basically just want a vanilla shell with zero distractions, aliases or scripts and use native UNIX implementations. If it's more complicated than that, I'll code it in Python or Go.
When I watch the work of coworkers or friends who have gone down these rabbit holes of customization, I always learn some interesting new tools to use - lately I've added atuin, fzf, and a few others to my linux install
The amount of shit you'll get for "applying your dotfiles" on a client machine or a production server is going to be legendary.
Same with containers, please don't install random dotfiles inside them. The whole point of a container is to be predictable.
If something is wrong with a server, we terminate it and spin up a new one. No need for anyone to log in.
In very rare cases it might be relevant to log in to a running server, but I haven’t done that in years.
Aren't you therefore optimizing for 1% of the cases, but sabotaging the 99%?
You said you were already using someone else's environment.
You can't later say that you don't.
Whether or not shell access makes sense depends on what you are doing, but a well written application server running in a cloud environment doesn't need any remote shell account.
It's just that approximately zero typical monolithic web applications meet that level of quality, and given that 90% of "developers" are clueless, they can often convince management that being stupid is OK.
Accounts are basically free. Not having accounts; that's expensive.
Let's assume they need access to the full service account environment for the work, which means they need to login or run commands as the service account.
This is a bit outside my domain, so this is a genuine question. I've worked on single user and embedded systems where this isn't possible, so I find the "unprofessional" statement very naive.
I'd rather take the pain of writing scripts to automate this for multiple environments than suffer the death by a thousand cuts which are the defaults.
https://github.com/atuinsh/atuin
Discussed 4 months ago:
Atuin – Magical Shell History https://news.ycombinator.com/item?id=44364186 - June 2025, 71 comments
The right way this would work is via a systemd service and then it should be instant.
Now I have many nix computers and I want them consistent and with only the most necessary packages installed.
(had to use a double backslash to render that correctly)
*may not be applicable to all wives, ymmv.
Some are desktops, some laptops, some servers. Different packages installed, different hardware. Three more variants.
Yes, I do have a script to set up my environment, but it already has a lot of conditional behavior to handle these five total variants. And I don't want to have to re-test the scripts and re-sync often.
Admittedly, I've toned down the configs of some programs, as my usage of them has evolved or diminished, but many are still highly tailored to my preferences. For example, you can't really use Emacs without a considerable amount of tweaking. I mean, you technically could, but such programs are a blank slate made to be configured (and Emacs is awful OOB...). Similarly for zsh, which is my main shell, although I keep bash more vanilla. Practically the entire command-line environment and the choices you make about which programs to use can be considered configuration. If you use NixOS or Guix, then that extends to the entire system.
If you're willing to allow someone else to tell you how you should use your computer, then you might as well use macOS or Windows. :)
Nowadays I just try to be quite selective with my tooling and learn to change with it - "like water", so to speak.
(I say this with no shade to those who like maintaining their dotfiles - it takes all sorts :))
- if you commit them to git, they last your entire career
- improving your setup is basically compound interest
- with a new laptop, my setup script might cause me 15 minutes of fixing a few things
- the more you do it, the less any individual hassle becomes, and the easier it looks to make changes – no more "i don't have time" mindset
as a person who loves their computer, my ~/bin is full. i definitely (not that you said this) do not think "everything i do has to be possible on every computer i am ever shelled into"
being a person on a computer for decades, i have tuned how i want to do things that are incredibly common for me
though perhaps you're referring to work and not hobby/life
Does this mean that you learned to code to earn a paycheck? I'm asking because I had written hundreds of scripts and Emacs Lisp functions to optimize my PC before I got my first job.
Personally I tend to agree... there is a very small subset of things I find worth aliasing. I have a very small amount and probably only use half of them regularly. Frankly I wonder how my use case is so different.
edit: In the case of the author, I guess he probably wants to live in the terminal full time. And perhaps offline. There is a lot of static data he's stored, like HTTP status codes: https://codeberg.org/EvanHahn/dotfiles/src/commit/843b9ee13d...
In my case I'd start typing it in my browser and then just click something I've visited 100 times before. There is something to be said for reducing that redundant network call, but I don't think it makes much practical difference, and the mental mapping/discoverability of aliases isn't nothing.
That, plus knowing how to parse a man file to actually understand how to use a command (a skill that takes years to master) pretty much removes the need for most aliases and scripts.
I use ctrl-R with a fuzzy matching program, and let my terminal remember it for me.
And before it's asked: yes that means I'd have more trouble working in a different/someone else's environment. But as it barely ever happens for me, it's hardly an important enough scenario to optimize for.
to this day, i still get tripped up when using a shell for the first time without those as they're muscle memory now.
Rob Pike: Those days are dead and gone and the eulogy was delivered by Perl.
Over time that's grown to an @foo script for every project I work on, every place I frequent that has some kind of specific setup. They are prefixed with an @ because that only rarely conflicts with anything, and tab-complete helps me remember the less frequently used ones.
The @project scripts setup the whole environment, alias the appropriate build tools and versions of those tools, prepare the correct IDE config if needed, drop me in the project's directory, etc. Some start a VPN connection because some of my clients only have git access over VPN etc.
Because I've worked on many things over many years, most of these scripts also output some "help" output so I can remember how shit works for a given project.
Here's an example:
Edit: a word on aliases: I frequently alias tools like maven or ansible to include config files that are specific to that project. That way I can have a .m2 folder for every project that doesn't get polluted by other projects, I don't have to remember to tell ansible which inventory file to use, etc. I'm lazy and my memory is for shit.
MISE_ENV=testing bun run test
(“testing” in this example can be whatever you like)
- direnv: https://direnv.net/ simple tool and integrates with nix
- devenv: https://devenv.sh/ built on nix and is pretty slick
I've always wanted a linux directory hook that runs some action. Say I have a scripts dir filled with 10 different shell scripts. I could easily have a readme or something to remember what they all do.
What I want is some hook in a dir that every time I cd into that dir it runs the hook. Most of the time it would be a simple 'cat usage.txt' but sometimes it maybe 'source .venv/bin/activate'.
I know I can alias the cd and the hook together, but I don't want that.
Its intended use case is loading environment variables (you could use this to load your virtualenv), but it works by sourcing a script — and that script can be ‘cat usage.txt.’
Great tool.
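Concretely, the per-directory hook asked for above can be a two-line .envrc, assuming direnv is already hooked into the shell (e.g. eval "$(direnv hook bash)") and the directory has been `direnv allow`ed:
# .envrc: run by direnv every time you cd into this directory
cat usage.txt
export PATH="$PWD/.venv/bin:$PATH"   # rough equivalent of activating the venv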
If you use Emacs (and you should!), there’s a direnv mode. Emacs also has its own way to set configuration items within a directory (directory-local variables), and is smart enough to support two files, so that there can be one file checked into source control for all members of a project and another ignored for one’s personal config.
Direnv is awesome! Note, though, that it does not depend on Nix, just a Unix-like OS and a supported shell: https://direnv.net/#prerequisites
So, you created a square wheel, instead of a NASA wheel.
Otherwise, I am happy to be pulled into your discussion, Marshall McLuhan style[3] to adjudicate, for a very reasonable fee.
[1] https://craphound.com/lifehacksetcon04.txt
[2] https://archive.org/details/Notcon2004DannyOBrienLifehacks
[3] https://www.openculture.com/2017/05/woody-allen-gets-marshal...
The flags are for maximum compatibility (e.g. without them, some MP4s don't play in WhatsApp, or Discord on mobile, or whatever.)
ffmp4 foo.webm -> foo_sd.mp4
fftime foo.mp4 01:30 01:45 -> foo_cut.mp4
Note, fftime copies the audio and video data without re-encoding, which can be a little janky, but often works fine, and can be much (100x) faster on large files. To re-encode just remove "-c copy"
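Wrappers along those lines are short; here's a sketch with commonly used compatibility flags (the commenter's exact flags aren't shown, so treat these as guesses):
ffmp4() {  # convert anything to a widely compatible H.264/AAC mp4
  ffmpeg -i "$1" -c:v libx264 -pix_fmt yuv420p -c:a aac -movflags +faststart "${1%.*}_sd.mp4"
}

fftime() { # cut between two timestamps without re-encoding
  ffmpeg -ss "$2" -to "$3" -i "$1" -c copy "${1%.*}_cut.mp4"
}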
I tend to try to not get too used to custom "helper" scripts because I become incapacitated when working in other systems. Nevertheless, I really appreciate all these scripts if nothing else than to see what patterns other programmers pick up.
My only addition is a small `tplate` script that creates HTML, C, C++, Makefile, etc. "template" files to start a project. Kind of like a "wizard setup". e.g.
And of course, three scripts `:q`, `:w` and `:wq` that get used surprisingly often.
sed, awk, grep, and xargs along with standard utilities get you a long long way.
I value out-of-the-box stuff that works most everywhere. I have a fairly lightweight zsh config I use locally, but it's mostly just stuff like a status line that suits me, better history settings, etc. Stuff I won't miss if it's not there.
I have almost the same, but differently named with scratch(day), copy(xc), markdown quote(blockquote), murder, waitfor, tryna, etc.
I used to use telegram-send with a custom notification sound a lot for notifications from long-running scripts if I walked away from the laptop.
I used to have one called timespeak that would speak the time to me every hour or half hour.
I have go_clone that clones a repo into GOPATH which I use for organising even non-go projects long after putting go projects in GOPATH stopped being needed.
I liked writing one-offs, and I don't think it's premature optimization because I kept getting faster at it.
Mine is called "md" and it has "-p" on the mkdir. "mkdir -p $1 && cd $1"
Edit: looks like it’s a zsh thing
The Mac Shortcut at https://github.com/e-kotov/macos-shortcuts lets you select a particular area of the screen (as with Cmd-Shift-4) and copies the text out of that, allowing you to copy exactly the text you need from anywhere on your screen with one keyboard shortcut. Great for popups with unselectable text, and copying error messages from coworkers' screenshares.
Ex:
Useful for times when I don't want to type a long train of dot slashes (e.g. cd ../../..). Also useful when using Zoxide, and I tab-complete into a directory tree path where parent directories are not in Zoxide history.
Added tab complete for speed.
> dc is the oldest surviving Unix language program.
https://en.wikipedia.org/wiki/Dc_%28computer_program%29
I used "dc" (the calculator) just earlier this week. Kids these days? :)
So if you're in your projects folder and want to keep working on your latest project, I just type "cdn" to go there.
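One way to write a "cd to newest subdirectory" helper like that (a sketch; relies on `ls -t` sorting and assumes directory names without newlines):
cdn() {
  cd "$(ls -td -- */ | head -n 1)"
}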
One thing I have found that's worth it is periodically running an aggregation on one's history and purging old ones that I don't use.
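The aggregation part can be a one-liner (bash history format assumed; zsh with extended history needs a different awk field):
# most frequently used commands: candidates for new aliases, or for pruning
history | awk '{print $2}' | sort | uniq -c | sort -rn | head -20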
Like, it's okay -- even good -- for the tools to bend to the user and not the other way around.
alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../..'
alias .....='cd ../../../..'
alias ......='cd ../../../../..'
alias .......='cd ../../../../../..'
Other single-key bindings I use often are:
KP* executes 'ls'
KP- executes 'cd -'
KP+ executes 'make -j `nproc`'
up 2, up 3 etc.
I also aliased - to run cd -
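An `up N` helper like the one mentioned a couple of comments up is only a few lines (a sketch):
up() {
  local path=""
  for (( i = 0; i < ${1:-1}; i++ )); do
    path="../$path"
  done
  cd "${path:-.}"
}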
It occurred to me that it would be more useful to me in Emacs, and that might make a fun little exercise.
And that's how I discovered `M-x nato-region` was already a thing.
I genuinely wonder, why would anyone want to use this, often?
I've needed the alphabet string or lookup rarely, but I have needed it before. Some applications could include making your own UUID function, making a small random naming scheme, associating small categorical numbers to letters, etc.
I'm curious to hear some examples (feel like I'm missing out)
I have used it to build an "escmd" tool for interacting with Elasticsearch. It makes the available commands much more discoverable, the output it formats in tables, and gets rid of sending JSON to a curl command.
A variety of small tools that interact with Jira (list my tickets, show tickets that are tagged as needing ops interaction in the current release).
A tool to interact with our docker registry to list available tags and to modify tags, including colorizing them based on the sha hash of the image so it's obvious which ones are the same. We manage docker container deploys based on tags so if we "cptag stg prod" on a project, that releases the staging artifact to production, but we also tag them by build date and git commit hash, so we're often working with 5-7 tags.
Script to send a "Software has successfully been released" message via gmail from the command-line.
A program to "waituntil" a certain time to run a command: "waituntil 20:00 && run_release", with nice display of a countdown.
I have a problem with working on too many things at once and then committing unrelated things tagged with a particular Jira case. So I had it write me a commit program that lists my tickets, shows the changed files, and lets me select which ones go with that ticket.
All these are things I could have built before, but would have taken me hours each. With the GenAI, they take 5-15 minutes of my attention to build something like this. And Gen AI seems really, really great at building these small, independent tools.
I think it needs yt-dlp installed — and reasonably up to date, since YouTube keeps breaking yt-dlp... but the updates keep fixing it :)
I use this as a bookmarklet to grab the front page of the new york times (print edition). (You can also go back to any date up to like 2011)
I think they go out at like 4 am. So, day-of, note that it will fail if you're in that window before publishing.
I use this a lot in all of my scripts. Basically whenever any of my script prints a path, it passes it through `posh`.
And you can type `rn -rf *` to see all timezones recursively. :)
Also re: alphabet
If you want the exact alphabet behaviour as the OP:
and alias debian="docker run -it --rm -v $(pwd):/mnt/host -w /mnt/host --name debug-debian debian"
alias df='duf'
alias ls='eza'
alias ll='eza -l'
alias cat='bat'
alias cap='bat -p'
alias man='tldr'
alias top='glances'
alias grep='rg'
alias ps='procs'
alias cd='z'
alias g='gitui'
alias gs='git st'
alias gp='git pull'
alias gu='git add . && git commit -m "Update" && git push'
alias check='shellcheck'
alias v='nvim'
alias len='wc -l'
alias uuid='uuidgen'
alias src='source ~/.zshrc'
This can be replaced with
sed -n $1p\;$1q
Test it versus
head -$1|tail -1
https://snhps.com
They're not all necessarily the most efficient/proper way to accomplish a task, but they're nice to have on hand and be able to quickly share.
Admittedly, their usefulness has been diminished a bit since the rise of LLMs, but they still come in handy from time to time.
Here are some super simple ones I didn't see that I use almost every day:
cl="clear"
g="git"
h="history"
ll="ls -al"
path='echo -e ${PATH//:/\\n}'
lv="live-server"
And for common navigation:
dl="cd ~/Downloads"
dt="cd ~/Desktop"
That and exit (CTRL-d). A guy I used to work with just mentioned it casually and somehow it just seared itself into my brain.
Folks interested in scripting like this might like this tool I'm working on https://github.com/amterp/rad
Rad is built specifically for writing CLI scripts and is perfect for these sorts of small to medium scripts, takes a declarative approach to script arguments, and has first-class shell command integration. I basically don't write scripts in anything else anymore.
https://github.com/AbanteAI/rawdog
jsonformat -> jq
running -> pgrep
[1] https://github.com/mozilla/mozjpeg
[2] https://pngquant.org
A password or token generator, simple or complicated random text.
Scripts to list, view and delete mail messages inside POP3 servers
n, to start Nautilus from terminal in the current directory.
lastpdf, to open the last file I printed as PDF.
lastdownload, to view the names of the n most recent files in the Downloads directory.
And many more but those are the ones that I use often and I remember without looking at ~/bin
`alias clip="base64 | xargs -0 printf '\e]52;c;%s\007'"`
It just sends it to the client’s terminal clipboard.
`cat thing.txt | clip`
First X bytes: dd bs=X count=1
1. stripping first X bytes: dd bs=1 skip=X
2. stripping last X bytes: truncate -s -X
https://evanhahn.com/why-alias-is-my-last-resort-for-aliases...
There's also some very niche stuff that I won't use but found funny
Which often just confuses things further.
Me: My name is "Farb" F-A-R-B. B as in Baker.
Them: Farb-Baker, got it.
"S-T-E-V-E @ gmail.com, S as in sun, T as in taste, ..." "Got it, fpeve."
It worked with me and I guess it must have usually worked for him in most of his customer interactions.
this fella doesn't know what "toggle" means. in this context, it means "turn off if it's currently on, or turn on if it's currently off."
this should be named `wifi cycle` instead. "cycle" is a good word for turning something off then on again.
naming things is hard, but it's not so hard that you can't use the right word. :)
https://news.ycombinator.com/item?id=42057431
https://news.ycombinator.com/item?id=31928736
^ (jump to the beginning)
ctrl+v (block selection)
j (move cursor down)
shift+i (bulk insert?)
type ><space>
ESC
python3 -m http.server 1337
Then I turned it into an alias, called it "serveit" and tweeted about it. And now I see it as a bash script, made a little bit more robust in case python is not installed :)
https://gist.github.com/jgbrwn/7dd4b262c544f750cb0291161b2ec...
(it actually avoids having to do a one-liner like: for h in {1..5}; do dig +short A "mail${h}.domain.com" @1.1.1.1; done)
Hmm speaking of which I need to add in support for using a specific DNS server
E.g. cat --copy