Posted by sdovan1 2 days ago
${HOME} is where your dotfiles are.
https://erock-git-dotfiles.pgs.sh/tree/main/item/dotfiles.sh...
This installs into temp dirs and cleans it all up when you disconnect.
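The general shape of the trick is something like this (a rough sketch, not the linked script's exact mechanics; the hostname and file list are made up):

    # copy a few dotfiles into a throwaway dir on the remote, point HOME at it,
    # and delete the dir when the interactive shell exits
    tmp=$(ssh somehost 'mktemp -d /tmp/dotfiles.XXXXXX')
    scp -q ~/.bashrc ~/.vimrc "somehost:$tmp/"
    # a real script would also trap HUP so cleanup still runs if the connection drops
    ssh -t somehost "HOME=$tmp bash --rcfile $tmp/.bashrc -i; rm -rf $tmp"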
Personally, my old-man solution to this problem is different: always roll with defaults even if you don't like them, and don't use aliases. Not for everyone, but I can ssh into any random box and not be flailing about.
Even with OP's neat solution, it's not really going to work when you have to go through a jump box, or connect over a serial line or some enterprise audit-logging ssh wrapper, etc.
On the other hand, your comment has me wondering if ssh-agent could be abused to drag your config along through jump hosts and enterprise nonsense, the way it forwards keys.
e.g. I type an alias, the ssh client expands it on my local machine and sends the complex command to the remote. Is this possible?
I suppose a special shell could make it work.
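For simple aliases, a plain local wrapper that does the lookup before handing the line to ssh would get partway there. A bash-only sketch (the rsh name and hostname are made up, and quoting will bite on anything non-trivial):

    # expand a local alias, then ship the resulting command line to the remote.
    # BASH_ALIASES is bash's map of alias names to their definitions.
    rsh() {
        local host=$1 cmd=$2
        shift 2
        ssh "$host" "${BASH_ALIASES[$cmd]:-$cmd} $*"
    }

    # with  alias ll='ls -alF'  defined locally:
    rsh somehost ll /var/log    # runs "ls -alF /var/log" on the remote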
Because the processes that use them run on the remote machines.
> I type an alias, the ssh client expands it on my local machine and sends the complex command to the remote.
This is not how SSH works. It merely takes your keystrokes and sends them to the remote machine, where bash/whatever reads and processes them.
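Easy to check for yourself (hostname made up; the alias only exists in your local interactive shell):

    alias ll='ls -alF'
    ll                  # fine: the local shell expands it
    ssh somehost ll     # fails with something like "bash: ll: command not found"
                        # because the remote shell did the interpreting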
Of course, you can have it work the way you imagine; it's just that it'd require a very special shell on your local machine, and a whole RAT client on the remote machine, which your special shell would have to be intimately aware of. E.g. TAB-completion of files would involve asking the remote machine to send the dir contents to your shell, and if your alias includes a process substitution... where should that process run?
Yes, but does the process have to read from a dotfile on the filesystem, instead of data fetched over an ssh connection?
> your alias includes a process substitution
Very valid point. How about a special shell that only performs syscalls and process substitution on the remote, runs everything else on the local client, and communicates over ssh?
I understand this will make the client "fat", but it's way more portable.
Well, no. But if you didn't write that program (e.g. bash or vim), you're stuck with its actual logic, which is "read a file from the filesystem". You can, of course, do something like mounting your local home directory onto the remote's filesystem (hopefully read-only)... But at the end of the day, there are still two separate machines, and you have to bridge the divide somehow, and it'll never be completely pretty, I'm afraid.
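If you do go the mounting route, one low-tech way is to forward your local sshd into the remote session and mount your home directory back over it, read-only. Rough sketch, assuming sshfs exists on the remote and an sshd is running locally (port and paths made up):

    # expose the local sshd to the remote as port 10022
    ssh -R 10022:localhost:22 somehost
    # ...then, on the remote:
    mkdir -p ~/localhome
    sshfs -p 10022 -o ro me@localhost:/home/me ~/localhome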
> How about a special shell that only performs syscalls and process substitution on the remote.
Again, as I said, lots of RATs exist, not all of them malicious. But to make the "everything else runs on the local client" part work, you need to write what will essentially end up being a "purely remote-only shell": all the parts of bash that handle parsing, user interaction and internal state tracking, but without the actual process management. Perhaps it's a good idea, actually; but untangling the mess of the bash source is not going to be easy.
The current solution of "have a completely normal, standard shell run on the remote and stretch the terminal connection to it over the network" is Good Enough for most people. Which is not surprising, given that that's the environment in which UNIX and its shell were originally implemented.
Working on it! :)
Remote machines usually don’t need to know your keystrokes or handle your line editing, either. There’s a lot of latency to cut out, local customization to preserve, and protocol simplification to be had.
It’s not bad software, but it’s also not mature. I’m currently on a phone and on vacation, so this is the extent of my review. Maybe I’ll circle back around with some PRs next week.