I have a handful of machines that I need to access over ssh from time to time. This is fine, since ssh is a very handy tool, but I like to tweak my local configuration, and it's annoying for those tweaks not to be available on the remote machines.

One approach is to put your configuration in a centralised location, which I do, and fetch it onto every machine, which I don't do, because it's extra hassle to keep in sync, especially when only one or two machines are used regularly.

Wouldn't it just be better if all your local configuration were automatically made available?

To this end, I wrote a script called homely-ssh, which makes the home directory of my local machine available to the remote machine with sshfs.

The script

#!/bin/sh

set -e

ADDR="$1"
shift

# Every ssh invocation shares one multiplexed connection through this
# control socket
sshcmd(){
    ssh -o ControlPath="$HOME/.ssh/controlmasters/%r@%h:%p" "$@"
}

# The control socket directory must exist before the master starts
mkdir -p "$HOME/.ssh/controlmasters"

# Scratch directory for the port-reporting fifo used below
td="$(mktemp -d)"

# Create master connection: -M master mode, -f background after
# authentication, -T no tty, -N no remote command
echo Starting master connection >&2
sshcmd -f -M -T -N "$ADDR" "$@"
echo Started master connection >&2

SSH has a mode where it can multiplex multiple sessions over the same connection. It mainly exists so that you can open several ssh sessions to the same machine without setting up a new connection each time; I need something like it for the sshfs mount, though not for exactly that reason.
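
For reference, the same multiplexing can be configured declaratively in ~/.ssh/config rather than with per-invocation -o flags; a sketch (the Host pattern is illustrative):

# ~/.ssh/config
Host somehost
    ControlMaster auto
    ControlPath ~/.ssh/controlmasters/%r@%h:%p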

echo Starting local sftp server >&2
mkfifo "$td/sftp"
# Serve sftp-server on an ephemeral local port, reporting the chosen
# port through the fifo
ephemeral-launch --host 127.0.0.1 --port-file "$td/sftp" \
    /usr/lib/sftp-server -d ~ &
SFTP_PORT="$(cat "$td/sftp")"
echo "Started sftp service on $SFTP_PORT" >&2

sshfs usually works by making its own ssh connection to the target machine and starting the sftp server at the far end. However, because I want everything to go over the same connection I'm using to run a shell on the remote machine, I have to start the sftp server myself.
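
To make the difference concrete, here is roughly how the two modes compare (directport is the sshfs option the script uses later; the hostname and port number are illustrative):

# Normal operation: sshfs spawns its own ssh and runs the remote sftp
# subsystem over it
sshfs somehost: ~/mnt

# With directport: skip ssh entirely and speak sftp to a TCP port
# that is already listening
sshfs -o directport=2022 127.0.0.1: ~/mnt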

The sftp server binary isn't port-smart: it just reads commands and input from its stdin and writes output to its stdout. So to serve it over the network you need to bind its stdin and stdout to a socket. The usual way to do this is to run it through inetd, but I need to be able to start multiple sftp servers for multiple different connections, so I have a wrapper script called ephemeral-launch, which binds an ephemeral port to serve the sftp server on, and reports the port that was chosen by writing it to the file specified with --port-file.

I use a fifo(7) rather than a normal file, though, as I can then start the service in the background and read from the fifo in my main process. The write blocks until my main process has read the port number, ensuring I don't attempt to use the sftp service until it is ready to accept requests.
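
ephemeral-launch itself isn't shown in this post; here is a minimal sketch of the idea, assuming socat is available (the real script may work differently, and closing the probe socket before socat rebinds the port is slightly racy):

#!/bin/sh
# Sketch of ephemeral-launch (illustrative, not the real script):
# bind an ephemeral port, report it via --port-file, then serve the
# given command's stdin/stdout on that port.

set -e

HOST=127.0.0.1
PORTFILE=
while [ $# -gt 0 ]; do
    case "$1" in
        --host) HOST="$2"; shift 2 ;;
        --port-file) PORTFILE="$2"; shift 2 ;;
        *) break ;;
    esac
done

# Ask the kernel for an unused ephemeral port (python3 used only as a
# convenient way to bind port 0 and see what was picked)
PORT="$(python3 -c 'import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))
print(s.getsockname()[1])')"

# Writing to the fifo blocks until the caller has read the port number
echo "$PORT" > "$PORTFILE"

# Bind the command's stdin/stdout to the port; socat serves a single
# connection and exits when it closes
exec socat TCP4-LISTEN:"$PORT",bind="$HOST",reuseaddr EXEC:"$*"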

echo Forwarding sftp port to remote >&2
REMOTEPORT="$(sshcmd -O forward -R 127.0.0.1:0:127.0.0.1:"$SFTP_PORT" "$ADDR" "$@")"
echo "Proxied sftp port to remote port $REMOTEPORT" >&2

When ssh is running in control socket mode, you can use -O forward to add port forwarding rules to the existing connection. If 0 is given as the port to bind, ssh reports the port that was actually allocated on stdout.

The -R 127.0.0.1:0:127.0.0.1:"$SFTP_PORT" argument asks for a new ephemeral port to be bound on the remote end, forwarding connections back to the local sftp server's port.
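
For reference, the argument follows ssh's -R [bind_address:]port:host:hostport form; annotated:

#  remote bind address : remote port : local host : local port
-R 127.0.0.1:0:127.0.0.1:"$SFTP_PORT"
#  bind the remote loopback on a port sshd chooses (0), forwarding
#  each incoming connection to the local sftp server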

echo Starting remote sshfs >&2
sshcmd -n -f "$ADDR" "sshfs -f -o directport=$REMOTEPORT 127.0.0.1: ~/mnt" &

Slightly oddly, the remote end of the ssh connection isn't told which ports have been forwarded to it, so the script has to pass the port along itself when it spawns the sshfs mount on the remote end. It uses sshfs's -f option to keep the process which manages the mount point in the foreground, so that if I terminate the session the mount goes away cleanly, rather than breaking because the connection was closed.

I ought to be able to use ssh's -f option to make the connection go into the background after it connects to the target, but for some reason this wasn't working, so I start it with -n to stop it reading input from my terminal and background it with &.

This is not ideal, as -f handles the backgrounding of ssh connections better. But backgrounding it in the shell script should be fine: the main reason for -f is to allow backgrounding when the ssh session may still need to ask for a password, and since I've already authenticated and have the control socket keeping the connection open, I shouldn't be asked for one.

echo Starting shell >&2
sshcmd -o PermitLocalCommand=yes "$ADDR" "$@"

Now I run the ssh command to start a shell on the target. I discovered that if you enable PermitLocalCommand, you can press Enter followed by ~C to get a command prompt for running local commands, so I've enabled that.
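
For illustration, pressing Enter then ~C during the session drops to ssh's own command line (the commands shown here are examples):

ssh> -h                      # basic help
ssh> !ls                     # run a local command; needs PermitLocalCommand=yes
ssh> -R 8080:127.0.0.1:80    # port forwards can also be added on the fly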

echo Terminating session >&2
sshcmd -O exit "$ADDR" "$@"

After the shell session has exited, we need to terminate the connection, so I tell it to exit with -O exit. This kills the sshfs and master socket sessions.

Future enhancements

I want to use this to automatically have my local config available on the remote end. This ought to be possible by setting environment variables, since most programs allow configuration that way, but ssh doesn't automatically pass environment variables over to the remote end, so it will require parsing the command line beforehand and inserting statements that set the environment variables before starting the shell.
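
A minimal sketch of the idea, assuming the remote mount lives at ~/mnt and that the programs in question honour XDG_CONFIG_HOME (the variables chosen here are illustrative):

# Instead of running the plain shell on the remote end, point the
# relevant environment variables at the mounted home first.
# -t forces a tty, since ssh won't allocate one when given a command.
sshcmd -t "$ADDR" "env XDG_CONFIG_HOME=\$HOME/mnt/.config \$SHELL -l"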

I don't like spawning the sshfs in a separate session; I'd prefer it to be part of the master session, since that would tie the sshfs mount's lifetime to that of the connection. However, this is less easy to do from shell: the fifo trick only works if the subprocess opens the file just before reading or writing it, whereas I'd have to change the standard input of the master socket service, which requires opening it beforehand.

Either way I will have to rewrite the script in a more powerful language, since I'm hitting the limits of what is easy to do in shell.