Recent changes in my health have made it such that some days I’m couch-bound upstairs, while my office (and primary work computer) is in the basement. This has led to some interesting solutions for my daily work.
I can mostly do my day job at Oracle from my couch on my work laptop. The only thing I have to get to my downstairs office for is recording voiceovers for the videos we’ve been creating lately. My mornings are typically better for me health-wise, so I write scripts upstairs in the afternoon, and then record in the mornings with my expensive podcasting setup. So that’s covered. But especially early in the morning I work on my personal stuff, which involves a lot of web development.
For the purposes of this post, “remote” generally refers to a Mac Studio (Agony) in my basement office, and “local” refers to whatever laptop I’m working on at the time. Most of these solutions work with actual remote machines, but some of them are specific to working on machines on the same network. This should become clear as I go.
Websites for projects like nvUltra, Bunch, Dimspirations, and my personal blog (this one) are all Jekyll-based and set up on my Studio in my office. Rather than duplicate my setup and cause disparities between copies of my site, I’ve set up a “remote” development setup that I think is worth writing up, if for no other reason than a reference point next time I set up a new machine.
None of this is specific to Jekyll, so don’t give up reading at this point.
On my couch I have three laptops available. My work laptop, my older Intel MacBook Pro1, and my M4 MBP. My setup allows me to do web development on any of them without breaking the sites hosted on my Mac Studio. Here’s how I keep them all in sync and develop these websites smoothly from the couch.
First, I have a “drafts” folder for each site stored in Dropbox. Those folders are symlinked into the Jekyll directory of each site, so I can create drafts for posts locally in each directory (using a Rake task) and they’re automatically available on all machines. That works great for a folder full of text files, but it’s a mess when dealing with Git repositories.
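The drafts setup is just a Dropbox folder symlinked into each site; here’s a sketch with hypothetical paths:

```shell
# Hypothetical paths: a Dropbox-synced drafts folder for one site,
# symlinked into the Jekyll source so every machine sees the same drafts.
DRAFTS="$HOME/Dropbox/drafts/mysite"
SITE="$HOME/Sites/mysite"
mkdir -p "$DRAFTS" "$SITE"
# -sfn: symbolic link, replace an existing link, don't follow it
ln -sfn "$DRAFTS" "$SITE/_drafts"
```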
Local Git server
I run a Git server on my Synology for private repos. All local copies of the websites are clones of Git repositories on that server. This means that if I create things like images, archives, or other non-text assets on my local machine, I can commit and push them to the Git server and pull them on remote machines.
I sometimes use an SSH session to go into remote directories and git pull, but I also have a script called pullremote that takes an SSH alias as an argument. Each machine has an identical directory structure for the websites, so the function just takes the current directory on my local machine and runs git pull over SSH in the same directory on the remote machine. So once I’ve created an asset and want it available on the host, I just run pullremote a from my development machine to pull the repository on Agony (my Studio)2.
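A minimal sketch of what pullremote does, assuming identical repo paths on every machine:

```shell
# Run `git pull` in this same directory on a remote machine.
# Usage: pullremote <ssh-alias>, e.g. `pullremote a` for Agony.
# Assumes the repo lives at the same absolute path on the remote.
pullremote() {
  local host="$1"
  ssh "$host" "cd '$PWD' && git pull"
}
```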
Creating and syncing assets
When I’m creating images like screenshots, I use Cleanshot X to save the screenshot, then I rename it in Finder or Terminal. I have a Hazel task that watches my desktop for images with %% in their name and a script that responds to naming conventions like screenshot-name@2x%%r1600hco.png. That processes the image with “resize to 1600px” (r1600), “create a 1x version” (h for “halve”), “convert to JPEG” (c), and “optimize all results” (o). The optimize command runs whatever optimization tool works best for that image type (pngcrush, imageoptim, etc.).
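The %% suffix decodes roughly like this. This is an illustrative parser only, not the actual Hazel script, which does the real image processing:

```shell
# Decode a %%-suffixed name like screenshot-name@2x%%r1600hco.png.
# Illustration only; the real script drives sips, pngcrush, etc.
parse_flags() {
  local name="$1"
  local flags="${name##*%%}"   # -> r1600hco.png
  flags="${flags%%.*}"         # -> r1600hco
  [[ $flags =~ r([0-9]+) ]] && echo "resize to ${BASH_REMATCH[1]}px"
  [[ $flags == *h* ]] && echo "halve for a 1x version"
  [[ $flags == *c* ]] && echo "convert to JPEG"
  [[ $flags == *o* ]] && echo "optimize all results"
}
```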
I have all of these tools and their preferences synced between machines using Dropbox and symlinks.
Once the assets are created, I drag the resulting images (created by the aforementioned setup, or by a Retrobatch droplet that Hazel runs based on a Finder tag, generating a bunch of image sizes and additional formats like webp) to a Dropzone target for whichever site they belong to. That copies the assets to the appropriate local folder and puts an appropriate Liquid tag for that image set or download in my clipboard for use in draft posts/pages. Then I use git and pullremote a to sync to the main setup.
iTerm and tmux
iTerm has a tmux mode that makes working on Agony easy. It’s not a fully-functional tmux setup, but it means that I can have a shared tmux configuration that functions with native windows and panes. I can run an alias like jeka on any machine to open up a tab with multiple panes that are actually shared remote sessions. I use that to do things like spin up a jekyll serve command on the main machine and have all of its output and Rake tasks available on whatever machine I’m working on, and I can close one machine and open another and the remote session is seamlessly available in native windows with the same session.
Using tmux also means that closing a machine (thus disconnecting the SSH session) doesn’t interrupt the remote tasks. Native windows mean I can navigate them just like local windows, switching panes and managing windows with the same keyboard shortcuts as local windows, instead of having to use tmux leader-based commands in one tab and ⌘]/[ commands in another, and scrolling, copying and shell integration work much more smoothly than in tmux. (My Fish prompt has a bright yellow tag in it that alerts me when the current window is an SSH session and which machine it’s connected to so I don’t lose track.)
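The jeka alias boils down to a single SSH command into tmux’s iTerm2 control mode (the session and host names here are stand-ins):

```shell
# -t allocates a TTY; -CC tells tmux to speak iTerm2's control mode,
# so tmux windows/panes appear as native iTerm windows and panes.
# `new-session -A -s jekyll` attaches to the "jekyll" session, creating
# it if needed, so every machine joins the same shared session.
alias jeka="ssh -t agony 'tmux -CC new-session -A -s jekyll'"
```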
Port forwarding
OK, so a jekyll serve command runs a WEBrick server on localhost, which means the port 4000 server I’ve set up on Agony is only available on that machine. So I run SSH tunnels on my laptops that forward port 4000 and other ports (like the LiveReload server) from Agony to the local machine. These forwards are easy to set up with SSH Config Editor, and all of my laptops are set up this way. I set them up one time in my SSH config and I don’t have to think about it again — I can point my browser on any machine to localhost:4000 and get the site being served from Agony.
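In ssh_config terms, those forwards look something like this (the hostname is an assumption; 35729 is the standard LiveReload port):

```
Host agony
  HostName agony.local
  # Forward Agony's Jekyll and LiveReload ports to this machine,
  # so localhost:4000 here shows the site being served there.
  LocalForward 4000 localhost:4000
  LocalForward 35729 localhost:35729
```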
VS Code Remote Explorer
In Visual Studio Code, my current editor of choice, there’s a Remote Explorer extension that allows me to tunnel to a remote machine and then edit files as if they were local. So I’m tunneled in to Agony, but editing its files as if they were on my local machine. It’s brilliant. I’m working on Agony’s files, previewing my work in my browser, all as if it were local to the machine I’m working on. I can load all of VS Code’s extensions in the remote session, too, and run commands from the built-in terminal. I love it. (And working with GitHub Copilot in VS Code is a next-level coding experience.)
My Rake deploy task automatically does a git commit with all changes and a message summarizing them, pushing it to the main git server. Then when I pick up development on any other machine, I just have to do a git pull to get any files I edited remotely.
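Stripped of the Rake wrapper, the Git half of that deploy task amounts to something like this (the branch and remote names are assumptions, and the real task also builds and deploys the site itself):

```shell
# Commit everything with a quick summary and push to the house Git server.
deploy_commit() {
  git add -A
  local summary
  # e.g. " 3 files changed, 40 insertions(+)"
  summary="$(git diff --cached --stat | tail -n 1)"
  git commit -m "Deploy: ${summary:-no changes}"
  git push origin main
}
```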
Vim setups
Of course there are times I just need to load up Vim on Agony to fix a little spelling error in a page or quickly edit a config file. I use DotBot to sync all of my config files, including shortcuts and Vim plugins such as Markdown editing tools.
DotBot makes it easy to do the initial symlinking of config directories, then updating setups is just a couple of Git commands away. And vim works great over SSH, so while it’s not my main editor, it’s a great tool for quick fixes. I mean, if you know a whole bunch of keyboard sequences…
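A DotBot install.conf.yaml for that kind of setup looks roughly like this (example paths, not my actual config):

```yaml
# Symlink config directories into place; safe to re-run after a git pull.
- defaults:
    link:
      relink: true
      create: true

- link:
    ~/.vimrc: vimrc
    ~/.vim: vim
    ~/.config/fish: fish
```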
MAMP stacks and local DNS
Lastly, there’s a remote Apache/Nginx setup. Some functionality on my sites is CGI/Ajax-based, and I have special image handling that relies on Apache .htaccess files, and none of that works with Jekyll’s built-in server. I need a full Apache or Nginx server to see those. So I run MAMP Pro or the new IndigoStack on Agony.
MAMP has a product called NAMO that runs a local DNS server for you. That could be done with some config file editing, but NAMO makes serving up the hosts configured on Agony to any machine on my network an easy task. I set up a development/staging host like dev.bunch in MAMP, add Agony’s local IP as the first DNS server in my System Settings->Network setup on my laptops, and suddenly dev.bunch works on any machine on my network (including my iPhone/iPad) for testing.
NAMO passes any non-local addresses through whatever fallback DNS servers you configure in it. I use network “locations” to easily switch between the NAMO setup and my regular DNS settings (which forward through my Synology DNS cache and router VPN setup).
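If you’d rather not touch a Mac’s primary DNS at all, macOS can also scope a DNS server to a single dev TLD with a resolver file (a per-domain alternative to the location switching; the IP here is hypothetical):

```
# /etc/resolver/bunch
# macOS sends lookups for *.bunch (like dev.bunch) to this server
# and leaves all other DNS settings alone.
nameserver 10.0.1.20
```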
Screen sharing
I should mention that Sequoia’s High Performance mode for Screen Sharing is the coolest VNC setup I’ve ever seen. If you have a remote machine and haven’t tried it (and your bandwidth allows), check it out. When I need to do something on Agony that I haven’t automated with any of the above, it makes working with the remote machine so, so nice.
It’s especially handy when I want to reboot my basement machine more gracefully than shutdown -r now. I can load it up, quit apps cleanly, and then run a macOS reboot instead, preserving all my open apps and windows and generally doing things more nicely.
Actual remote work
When I want to access all of this while I’m on the road (hopefully I’ll eventually be able to do that again), I love Tailscale for creating a personal VPN that makes all of my home machines accessible via IP address. If you haven’t tried it, you really should. It’s free and can use your GitHub account as a login.
In closing…
With this (admittedly somewhat complex) setup, I can edit sites on three different laptops without missing a beat. It’s really not that hard to set up once, and from then on all of the changes I make are instantly available on my main desktop machine, and everything is synced between development machines with very little effort while I’m working on the couch.
If any of this is of interest to you and I haven’t explained it well enough, drop by the forum and I’m happy to elaborate!
As a final and mostly unrelated aside, I have a cool setup for my work VPN. Cisco’s AnyConnect VPN client makes accessing anything not on the VPN a pain, and it doesn’t store passwords, so you have to constantly log in and out of the VPN and paste your password every time.
Instead I run a Docker container on my Mac mini server in the basement that’s constantly logged into the VPN — but only for Oracle sites using an automatic proxy configuration it hosts — and tunnel a port from my various machines through that container’s ports. So I’m always on VPN, but also not. It’s pretty cool.
I need this machine mostly for app development. There’s a weird bug in the old version of WebKit that crashes on print operations in Marked unless it’s compiled on an Intel machine. It’s so weird that even legends like Rich Siegel and Daniel Jalkut couldn’t help me figure it out. ↩
I don’t know why, but my machines are all named for horrible emotions. Agony, Misery, Despair, Doom… but my Mac mini is named “Micro,” the exception to the rule. I guess my Oracle laptop is another exception – it’s obviously named “Seer.” ↩