
Marked 2.5.32 with extra Bear


Despite putting time into the replacement for nvALT, I also managed to get the latest update for Marked 2 out. My meds must be working. It’s available now on Setapp, direct (Check for Updates), and the Mac App Store. It has a longer-than-usual list of improvements and fixes, so this post might get lengthy.

One area of focus was better support for Bear. If you want a true HTML preview with export options when using Bear, Marked is the perfect companion. Bear even offers a Note->Preview in Marked option. I get a lot of feedback from Bear users, so I put some extra time into polishing up compatibility.

Better with Bear

Before I start talking too much about Bear, there’s one issue to note. Bear writes its preview files out to a system temp folder that Marked can’t permanently access from the sandboxed Mac App Store version, so users are constantly asked for permission. If you’re using Bear with the Mac App Store version of Marked, I offer a free crossgrade to the unsandboxed direct version. If you use the Help->Report an Issue feature and just send me the top part of the report section (above the ---), I’ll consider that enough proof to provide you with a license. You can also contact me through the support forum.

Anyway, this version of Marked takes care of a few Bear integration issues. First, when exporting a Bear preview to a PDF, the %title variable in headers and footers was using the UUID that Bear assigned to the note, which was an ungainly string of letters and numbers. That could definitely use some fixing.

Because every Bear note typically starts with an H1 used as the note title, I added an option to Export preferences to “Use first H1 as fallback title.” This applies to more than just Bear, since the fallback title was typically the filename unless a “title:” line was provided in metadata. Now, if you don’t have metadata, you can have it automatically use the first H1 if one exists. It will still fall back to the source document’s filename if neither title metadata nor an H1 exists in the document.

If the preview is a Bear note, this feature also affects the window title and the filenames automatically assumed when exporting, so you get a file named after the title of the note instead of AD7BDC7A-DEE1-4ECA-A07E-0C202ED1B681-61957-0001952A65A19C70.html.
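
Just to illustrate the order of precedence, here’s a rough Ruby sketch of the idea (my own illustration, not Marked’s actual code):

# Rough illustration of the "first H1 as fallback title" precedence;
# not Marked's actual implementation.
def fallback_title(markdown, filename)
  # 1. A "title:" metadata line wins if present
  return Regexp.last_match(1).strip if markdown =~ /^title:\s*(.+)$/i
  # 2. Otherwise use the first ATX-style H1
  return Regexp.last_match(1).strip if markdown =~ /^#\s+(.+)$/
  # 3. Otherwise fall back to the source document's filename
  File.basename(filename, '.*')
end

puts fallback_title("# My Bear Note\n\nBody text", "AD7BDC7A.md")
# => My Bear Note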

I also fixed Marked’s handling of relative image paths within the Bear TextBundle, so now images included directly in Bear notes will properly display in Marked and its various exports.

In Bear, you create tags with Twitter-style hashtags, #likethis. If these are at the beginning of a line, as is common, Markdown turns them into h1 headers, which is the most visually intrusive way possible to screw them up. So now, when previewing a Bear note, Marked will detect these tags and turn them into styled notations, which both improves aesthetics and completely avoids the accidental headlines. Marked supports #tag, #nested/tag, and #crazy weird! tag# formats. You can hide the display of Bear tags in Marked by turning off Gear Menu->Proofing->Show Comments.
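
To give a rough idea of the distinction (this is my own sketch, not Marked’s parser), a pre-processing pass could wrap simple and nested tags in a styled span before the Markdown converter ever sees them, while leaving real headers alone:

# Illustration only, not Marked's actual parser. Wrap Bear-style #tag and
# #nested/tag in a styled span so the Markdown converter never promotes a
# leading "#tag" to an H1. A real header ("# Heading") has a space after
# the hash, so it's left untouched. The closed "#multi word tag#" form
# would need a second, more careful pass.
def mark_bear_tags(line)
  line.gsub(/(^|\s)(#[\w-]+(?:\/[\w-]+)*)/) do
    %(#{$1}<span class="bear-tag">#{$2}</span>)
  end
end

puts mark_bear_tags('#ideas/blog notes from today')
# => <span class="bear-tag">#ideas/blog</span> notes from today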

Lastly, I made a Bear Custom Style, mostly on a whim. Grab it from the MarkedCustomStyles repository and add it to Marked using the Preferences->Style pane. For the full effect, go to Preferences->Style and set “Limit text width in Preview” to about 450.

Scrivener, Too

There are a few improvements for Scrivener as well. Images referenced using Markdown syntax and a path relative to the base document will now display, and embedded image handling is improved.

An issue that caused rendering to break on certain inline annotations has been resolved, and Marked now does a better job of visually differentiating comments and inline annotations.

Like MathJax, but Faster

I’ve also added an option to use KaTeX instead of MathJax. It’s significantly faster for rendering large numbers of equations, and is only missing a few of MathJax’s advanced features. If you write with a lot of math, try it out and see if you can speed up your page renders.

Speaking of equations, I also made some modifications to allow better compatibility between the MultiMarkdown and Discount (GFM) processor options when dealing with MathJax (and KaTeX) syntax. It should be pretty transparent: whatever you use to delineate your equations should Just Work™ in both MMD and Discount modes.
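
To make that concrete, a display equation written with MathJax’s default double-dollar delimiters, like

$$ e^{i\pi} + 1 = 0 $$

should now render the same whether MMD or Discount is handling the conversion (assuming MathJax or KaTeX is enabled in preferences).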

Those Pandoc Crashes

There was an issue where errors generated when using Pandoc as a custom processor caused Marked to Just Crash™ rather than reporting the error gracefully. That’s fixed. It had nothing to do with Pandoc. It only had to do with calling an alert off the main thread. Facepalm.

Miscellaneous

One kind of random feature — the result of just two user requests but easy enough to implement — is page numbering offset (Preferences->Export). You can now have page numbering start at whatever number you like, so if you’re creating a title/cover page, including a long table of contents at the top, etc., you can adjust the page numbering to start on page 2, or 3, or whatever you need. It also accepts negative numbers, so ‘-1’ starts the visible numbering on page 2.

Paddle customers can now deactivate their licenses. Because Paddle limits by “activation” rather than by user or machine IDs, every time you re-register Marked you use up an activation. I’ve always accommodated users by increasing their activation limits on request, but now you can deactivate a previous installation yourself and recover the activation, whether you’re installing on a new or re-installed machine or transferring your license to another user. The license view also shows the activated license as selectable text, so it’s easier to copy out for safekeeping when needed.

Phew, that’s a pretty complete look at all the new stuff in Marked 2. You should definitely check it out.

sizes: better disk usage reporting in Terminal


I’ve come up with a lot of ways to see what’s taking up space in my directories from Terminal over the years. I’ve finally built one that covers all the little niggles I’ve had in the past.

Let’s start with the other ways you can do this.

du

Since we’re talking about disk usage, the obvious choice is du, the “disk usage” command. To see the filesize of every file in the directory you can run du -sh *. The -h switch tells it to output human-readable sizes, so it looks like:

4.0K	test.rb
4.0K	token.js
8.0K	utils
1.2M	webexcursions

This is pretty close to what I want, but it can’t be sorted by size. Also, du reports sizes in 512-byte blocks, so if you’re interested in accurate readings on files under 4KB, it won’t give them to you.

ls

You can also use ls -l to list all files along with their file sizes (and a whole bunch of other info). You can sort by size with -S (or -Sr for reverse order), and -h works here too to show human-readable size formats. So that’s closer to what I want, but there’s a whole bunch of irrelevant info, as well as the fact that ls isn’t going to report the total size of directories (all the files they contain added together) the way du will.

ncdu

I also have to mention ncdu, an ncurses utility that’s excellent for exploring disk usage. It’s overkill for what I want, but worth checking out (and available via Homebrew, brew install ncdu).

My Solution

You have no reason to recall this, but I’ve tried to solve this in the past. I wrote a bash function called sizeup that would do the trick. It’s super slow, though, and does things the hard way. So I decided to put ls and du together with some of my own sorting and formatting to get fast filesize info. I call it sizes.

Installation

I have the script posted in this gist. Save that file in your path, name it sizes, and make it executable (chmod a+x sizes).

Usage

To use it, just run sizes. You can optionally pass it a directory, e.g. sizes ~/Desktop, and it will operate there. And for whatever reason, I added help to it, so sizes -h will show you the obvious lack of other options.

The script will output a listing of all of the files in the current directory with sizes in bytes, kilobytes, megabytes, etc., calculated to 2 decimal places. It includes hidden files and reports the actual sizes of directories (the combined size of their contents, without listing them individually). Output is colorized, with colors ascending from blue to red based on file size, and filenames colorized to indicate regular files, directories, and hidden files.

On any directory containing under 20GB it’s quite fast. Large directories can take a while to calculate, but you’d have the same delay using du directly.

How It Works

The script starts with an ls -l listing (actually ls -lSrAF) of the directory, using a Ruby regex to extract the size (in bytes) and filename from each line of the output.

Then it detects directories, whose sizes ls reports as just the directory entry rather than the contents, and passes them to du to get the block-based size of the combined contents. It multiplies the block count by 512 to get as close to an accurate byte count as possible. (It should be noted that the GNU coreutils version of du has a -b switch that will report in bytes. I wanted to make this work without additional dependencies.)

The sizes are humanized, colorized, sorted, and output with the filenames and a total for the directory at the bottom.
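
For the curious, here’s a stripped-down sketch of that pipeline (the real script in the gist adds the colorizing, error handling, and nicer formatting):

# Stripped-down sketch of the ls + du approach used by `sizes`; the real
# script in the gist adds colors and more careful formatting.
def human_size(bytes)
  units = %w[B KB MB GB TB]
  size, unit = bytes.to_f, 0
  while size >= 1024 && unit < units.length - 1
    size /= 1024
    unit += 1
  end
  format('%.2f %s', size, units[unit])
end

entries = {}
# ls -lSrAF: long listing, sorted by size (reversed), hidden files included,
# directories marked with a trailing slash. Column 5 is the size in bytes.
`ls -lSrAF`.each_line do |line|
  next unless line =~ /^[-d]/          # skip the "total" line, links, etc.
  parts = line.chomp.split(/\s+/, 9)
  size, name = parts[4].to_i, parts[8]
  if name.end_with?('/')
    # ls only reports the directory entry itself, so ask du for the combined
    # contents in 512-byte blocks and convert to bytes.
    size = `du -s "#{name}"`.split.first.to_i * 512
  end
  entries[name] = size
end

entries.sort_by { |_, size| size }.each do |name, size|
  puts format('%10s  %s', human_size(size), name)
end
puts format('%10s  total', human_size(entries.values.sum))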

I could easily be wrong, but I assume there are other people like me who want to find the space hogs without decoding too much output or loading up DaisyDisk. If so, enjoy the script.

Codename: nvUltra


I have exciting news!

You’ve been hearing from me for years about BitWriter, the nvALT replacement I was working on with David Halter. Well, I failed at my part, then we lost touch, and it never came to fruition. Now that my health is back to a working state, I attempted to pick the project back up. Turned out David was MIA (hopefully ok), and the code I was left with no longer compiled on the latest operating systems. Seemed like it might be time to let go.

Then I heard from Fletcher Penney. You know, the guy who created MultiMarkdown, and who develops my favorite Markdown editor, MultiMarkdown Composer. He was working on a similar project and invited me to join him on it. Now we have an app nearing beta stage that’s better than any modal notes app you’ve used. Code name: nvUltra.

We need to wrap up some UI/UX work before we release the first round of betas, so I’m not ready to put an official ETA on it. But it’s close, and our goal is to start a beta test round in the next month or two. Sign up today for notifications, and the first round of beta testers will be taken from the email list. First in, first served.

This app works a lot like nvALT (and Notational Velocity, naturally). You pop it up and start typing. Search or create a note in seconds. It has blazing fast and accurate full-text search, the ability to find related notes based on content, and full-featured Markdown editing tools (complete with syntax highlighting and theme editing). The biggest difference is that it works with multiple folders and sub-folders. You pick a folder, it indexes it, and you can use it just like nvALT. But then you can open another folder, or create a new one and start editing. It allows you to create folders anywhere: maybe one on Dropbox or iCloud Drive that’s shared, one on an encrypted disk that’s private, one for work, one for home, one for every writing project. You’re not limited to tags (though you can search by and sync with macOS tags within the app), and you can sort your notes into subfolders as well.

We don’t have an official name yet. We have some good ideas, but nothing that’s struck us both as “that’s it!” Have any suggestions? Feel free to brainstorm in the comments!

Sign up for the email list here, and get notifications and beta access as they come out.

HoudahSpot 5.0


For those of us who have shifted from folder hierarchies to search as our primary method of “filing,” Spotlight has become a way of life. And where Spotlight falls short, HoudahSpot steps in and fills the gaps. I’ve said it enough that it sounds cliché to me, but HoudahSpot really is steroids for Spotlight on macOS.

The latest version of HoudahSpot is a huge update with a ton of new features. Some highlights include:

  • Folding Text Preview — search results can focus on specific paragraphs that match
  • Arranged Results — group search results by date, size, kind, or application
  • Recent Attributes and Values — HoudahSpot remembers recently used search attributes and result columns, as well as things like file extensions, tags, and types
  • Regular Expressions — filter search results by name, path, parent folders, etc.
  • Faster File Tagging — with custom keyboard shortcuts, favorite and recent tags

HoudahSpot can even work directly with Default Folder X (if you have it installed), sending results directly to open and save dialogs.

HoudahSpot 5 is available now for $34 US ($52 for a family license). If you want to make more use of search in your daily computer usage, check it out!

Using htaccess to provide better Open Graph images


I joined David Sparks and Rosemary Orchard on episode 20 of the Automators podcast. It was a riot, and made me realize exactly how nerdy I am about automation and its peripheral nerdery. One of the things that came up was my htaccess trick for handling Open Graph metadata on my blog. I got a bunch of questions about that, so I’m writing this up to explain.

If you’re unfamiliar with Open Graph, it’s a protocol which allows you to use meta tags in HTML to explain to various services what a page/post is, what image should represent it, and other information about a page. When you share a URL on Twitter or Facebook and it shows an image, summary, author information, etc., Open Graph is what allows the creator to control what’s shown.

Facebook and Twitter each have their own specs for preferred image dimensions and minimum size. I won’t go into all of those details right now, but if you’re setting up a system for yourself you’ll want to search the web for the latest information (it changes now and then). Different Open Graph tags target specific services, so I need different images for each service.

Here are the sizes I currently generate:

  • _tw (Twitter): 715x383
  • _fb (Facebook): 476x249
  • _sq (Square, Twitter small size): 158x158

My setup automatically generates all of the necessary sizes from a template I use, naming each one with a suffix for the particular service it’s for. Sometimes, though, I don’t have a certain size available, which is where the .htaccess trick comes in.

In my templates I generate a standard boilerplate based on the name of the primary image for the post, appending the suffixes expected:

<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:image" content="[url/path]/image-filename_tw.jpg">
<meta property="og:image" content="[url/path]/image-filename_fb.jpg">
<meta property="og:image:type" content="image/jpeg" />
<meta property="og:image:width" content="476">
<meta property="og:image:height" content="249">

Twitter doesn’t have specific tags for width and height, so those only apply to the Facebook image. My dimension tags are created using a call to sips when the static site is generated.
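
If you’re curious how that lookup can work at build time, here’s roughly the idea (a sketch, not my exact plugin code); sips ships with macOS and can report an image’s pixel dimensions:

# Sketch of the idea, not my exact build plugin: use the stock macOS `sips`
# tool to read an image's dimensions and emit the og:image tags.
def og_image_tags(path, url)
  out    = `sips -g pixelWidth -g pixelHeight "#{path}"`
  width  = out[/pixelWidth:\s*(\d+)/, 1]
  height = out[/pixelHeight:\s*(\d+)/, 1]
  <<~HTML
    <meta property="og:image" content="#{url}">
    <meta property="og:image:type" content="image/jpeg" />
    <meta property="og:image:width" content="#{width}">
    <meta property="og:image:height" content="#{height}">
  HTML
end

puts og_image_tags('image-filename_fb.jpg', '[url/path]/image-filename_fb.jpg')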

So what happens when a file with the appropriate suffix doesn’t exist? The .htaccess has a cascade of fallbacks, so when the image specified is requested but doesn’t exist, something gets served back. It checks to see if each requested filename exists, then rewrites the filename as the next option, repeating until finally it just falls back to serving the original image from my post.

For example, my current Web Excursions header doesn’t have a _tw version; I added that size after I created that image. So while the HTML specifies https://cdn3.brettterpstra.com/uploads/2017/03/web-exc-map_tw.jpg as the Twitter image, when you put that in a browser, you’ll actually be served the Facebook version: https://cdn3.brettterpstra.com/uploads/2017/03/web-exc-map_fb.jpg. That’s close enough in this case, since each service will crop as needed. Having the right dimensions to begin with simply gives you control over how the image appears.

The Twitter Card for my last Web Excursions post

Here are the rules in my .htaccess file:

# Image handling for open graph meta

# Try _fb if _tw doesn't exist
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*)_tw\.(jpg|png|gif) $1_fb.$2 [L]

# Try _sq if _fb doesn't exist
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*)_fb\.(jpg|png|gif) $1_sq.$2 [L]

# Try _lg if _sq doesn't exist
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*)_sq\.(jpg|png|gif) $1_lg.$2 [L]

# Try @2x if _lg doesn't exist
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*)_lg\.(jpg|png|gif) $1@2x.$2 [L]

# Fall back to base image file if no @2x
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*)@2x\.(jpg|png|gif) $1.$2 [L]

By the way, this has the side effect of letting xxx@2x.jpg fall back to xxx.jpg if the @2x doesn’t exist, so I can always request @2x when serving to retina displays, even if I haven’t created one.

In summary, this system allows me to easily provide different images for different services, but gracefully handles cases where I don’t generate all of the needed sizes without me needing to modify meta or templates each time. The htaccess part of the trick is really just the tail end, but enough people asked specifically about it that it seems worth sharing. Hope that helps; feel free to contact me with any specific questions!

Keeping track of all your projects’ build systems


I work on a lot of different coding projects. Websites, front end and back, Mac and iOS coding, Ruby gems, scripting, design projects. While I’m working on a project, the build, deploy, and other development processes I set up become second nature. Once I’ve moved to another project or 20, I’ve learned it’s really easy to forget how I had it all set up. Maybe I was using CodeKit, or maybe I had gulp-watch set up, maybe everything is in a Rakefile…

As a result, I take notes whenever I set up Grunt or Gulp, add npm tasks, build out a Rakefile, or just create some shell scripts to automate my processes. Just a reminder of what the process is and what tricks I’ve been up to. The notes are saved in the project’s root directory and added to its git repo; they can help me save time explaining things to any collaborators, but mostly they’re just there so that when I dig the project up a year later, I don’t have to dig through all of the config files to remember what’s what.

To that end, I wrote a simple script to find these build notes and show me all or any given section of them. It relies on there being a file in the current directory with a name that starts with “build” and an extension of “.md”, “.txt”, or “.markdown”. I usually call mine “buildnotes.md,” but it will find anything matching those criteria.

The sections of the notes are delineated by Markdown headings, level 2 or higher, with the heading being the title of the section. I split all of mine apart with h2s. For example, a short one from the little website I was working on yesterday:

## Build

gulp js: compiles and minifies all js to dist/js/main.min.js

gulp css: compass compile to dist/css/

gulp watch

gulp (default): [css,js]

## Deploy

gulp sync: rsync /dist/ to scoffb.local

## Package management

yarn

## Components

- UIKit

The script isn’t terribly advanced. It expects there to only be one header level used to split sections. Anything before the first header is ignored.

Sometimes I write more detailed notes, but the above project was pretty straightforward. I did, however, end up with config files from multiple package managers left over from my discovery phase, so until I clean that up you can’t just look at the directory and tell which package manager or build system to use. Ultimately, I just need enough info to know where to look for more details. If I know that I was compiling my CSS with gulp and Compass, I know I’m looking for a gulpfile.js and a sass folder to start editing the CSS.

To use the tool, save the script to a file and make it executable (chmod a+x buildhelp.rb). I alias the script to build?, which makes it really easy to remember (I just ask the question, “build?”). To do so, just add alias build?="/path/to/buildhelp.rb" to your .bash_profile or wherever your aliases are stored.

Now with a build notes file in the directory I’m in, if I run build? in the root folder, it will output a colorized version of that entire file. I can also get specific sections by including the section name (case insensitive, only the first few characters needed to match).

Run build? -s and it will list all of the sections in the file:

- Build
- Deploy
- Package management
- Components

Then run build? dep to get just the Deploy notes.
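
If you’d like the general idea before grabbing the real thing, here’s a bare-bones sketch of the approach (the script in the gist adds colorized output and a few more niceties):

# Bare-bones sketch of the build? idea; the real script in the gist adds
# colorized output and a few more niceties.
notes = Dir.glob('build*.{md,txt,markdown}').first
abort 'No build notes file found.' unless notes

text = File.read(notes)
# Split on headers (## or deeper); anything before the first header is ignored.
sections = {}
text.split(/^(?=\#{2,}\s)/).each do |chunk|
  next unless chunk =~ /\A\#{2,}\s+(.+)$/
  sections[Regexp.last_match(1).strip] = chunk
end

case ARGV[0]
when '-s'
  sections.each_key { |title| puts "- #{title}" }
when nil
  puts text
else
  # Case-insensitive match on the first few characters of the section title.
  query = ARGV[0].downcase
  match = sections.keys.find { |t| t.downcase.start_with?(query) }
  puts(match ? sections[match] : "No section matching '#{ARGV[0]}'")
end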

I know it’s a bit silly, but keeping a consistent system as I move through different projects helps keep me sane and avoid wasting time tracking down my tools. If it sounds helpful, grab the script from this gist.

Web Excursions for March 29, 2019


Web excursions brought to you in partnership with CleanMyMac X, all the tools to speed up your Mac, in one app.

Norsk Hydro will not pay ransom demand and will restore from backups
Just remember that every time you hear about a company paying ransomware demands, it probably means they have outdated/nonexistent backups. I know it’s more complex on a large-scale IT network, but you really should back up (ooh, check out this week’s sponsor, Backblaze :)).
Grav - A Modern Flat-File CMS
I’m still pretty deep in Jekyll as my blogging platform right now, but I’m reaching some limits. Assuming I stick with a flat-file CMS (as opposed to WordPress), this one that Rosemary Orchard turned me on to is a top contender.
tmux-plugins/tmux-continuum
This tmux plugin is awesome: continuous save of your tmux environment for automatic restore whenever tmux is started, even after a reboot. Load up the tpm plugin manager so you can install this and the requisite tmux-resurrect plugin to get going.
asciinema - Record and share your terminal sessions, the right way
I’ve been seeing these terminal recordings in GitHub readmes and they’re pretty awesome. Text-based session recordings from your terminal, optionally hosted for playback. Recordings can be paused so you can copy text right out of them.
postlight/mercury-parser-api
Mercury Parser is the API that services like Feedbin and Reeder use to give you full content articles in your feed. It’s shutting down, but Postlight has open sourced the parser and the API. I’ve been playing with a local install and it makes a great markdownifier. (I’ll probably be updating Marky with it soon so I can switch over to https…)
