Going to store stuff here.

This is inspired by Nikita’s knowledge repository.

The basic idea is, as I come across information - or otherwise generate my own - I write it down here. This, combined with tooling I’m writing, provides a single, searchable database for things I know, so that I don’t have to keep searching for how I did something previously. Additionally, this helps reinforce things I’ve learned, both by forcing myself to write down what I learn, and by making me write it up in a way that should save my future self from spending so long figuring out how I did a thing.


Obviously, I’m going to exercise some discretion and not put in things like my tax returns or whatever.

3D Printing

I have a Prusa i3 MK3. It’s a really nice desktop 3D printer available as a kit. Having a 3D printer has been pretty great for all the little things you can do with it, despite not really having a single big “thing” to be the actual driver to get one. Ours doesn’t see all that much usage (we’ve been able to go a couple months between uses), but it’s really nice to have ready to use with little prep.


Checklist for using it.

Preparing the Model

This assumes you already have a model you want to print. This uses the PrusaSlicer slicer.

  • Insert SD card from printer into computer.
  • Open and orient your model in PrusaSlicer.
    • Does your model have overhangs or other parts that need support to print? If yes, add supports.
  • Export the G-Code to the SD card.
  • Eject the SD card and plug it back into the printer.

Using the Printer

  • Wipe down the printer bed (the metal sheet you print on) with a paper towel and isopropyl alcohol.
  • Turn on the printer and select the model from the SD card (menu -> print from SD card -> select model).
  • Watch the printer as it prints the first layer.
  • If the filament is known to get caught on itself, watch/listen to the printer; if the filament starts to get tight, give it some slack.

Replacing filament

There are official instructions... somewhere.

  • Turn on printer.
  • Turn on print-head heater.
  • Wait ~5 minutes or so.
  • Eject old filament.
  • Cut ~45° angle into the new filament.
  • Replace with new filament.


Space

I love everything related to space, both human exploration of space and the observation of objects in space.


Resources for learning astronomy.

Richard Pogge’s/OSU’s Astronomy podcasts

Richard Pogge is a professor at OSU. He recorded the lectures for a few of his “astronomy for non-astronomy majors” classes and posted them online as podcasts. They’re absolutely enrapturing to listen to. One of the things I love about these is that he also covers a lot of the history behind astronomy, in addition to what we currently know.

  • AST 161, from Fall 2007, is an introduction to solar system astronomy. It’s absolutely fascinating: in addition to things like the planets and the Sun, he talks about how humans developed astronomy, how astronomy has influenced so much of our culture, and some of the modern things we’re learning as we’ve sent spacecraft to objects in the solar system.
  • AST 162, from Winter 2006, is an introduction to stars, galaxies, and cosmology. It covers a bit more of the science behind stars, as well as a lot of the modern history behind how we gained that knowledge, and how we know it’s correct.
  • AST 141, from Fall 2009, is an introduction to astrobiology. I haven’t listened to this one yet, but I’m very much looking forward to it.

He also has some more recent, iTunes-only content that first started as iTunes U collections, before Apple simply made them podcasts. I haven’t listened to these yet, but am looking forward to them.

  • Life in the Universe is an intro to astrobiology - basically the AST 141 podcast with additional material included (slides, video).
  • From Planets to the Cosmos appears to be a combination of AST 161 and AST 162, with additional material included (slides, video).


Astrophotography

Taking pictures of the sky!

See Cafuego’s page on software for OS X.

See also the Mac Observatory site.

I use the following software:

Theory and Books

The standard recommended reading is The Deep-sky Imaging Primer.

Finding a site

Find a local dark sky site. In LA, I like Joshua Tree National Park. However, being able to easily access far-away dark sky sites is one of my primary reasons for learning to fly.


At a bare minimum, you can get away with just a camera and a tripod. My equipment checklist is:

  • Camera
  • 50 mm lens, because wide field shots are fun.
  • Telescope [1]
    • Telescope camera mount
      • (barlow lens, T-ring, etc.)
    • Bahtinov mask.
  • Equatorial Mount
    • Motors for said mount
    • Batteries for the motors
  • Computer (Strictly speaking, this isn’t necessary - my camera can be set to take a series of photos at once)
    • USB-A to Mini-USB-A (to talk to camera).
  • RED flashlight - white will ruin your night vision. You also want low-lumen, for the same reason.
  • Water
  • Coffee
  • Snacks
  • Camping chair
  • Sleeping pad/bag (even if you plan to stay up all night, bring these).
  • Pillow
  • Paper and Pen.
  • A book or something else to do while the computer does all the work.

Be sure to set the computer to “night shift” mode [2] before it’s dark, as red as possible.

Go there, set up camp. Preferably be set up before dark.

Jerry’s list of beginner equipment for astrophotography, which is a potential source for expansion.

Actually Taking Photos

Regardless of how you take them, be sure to write down what you’re taking a photo of when you do it, even if you already know which constellation/body you’re photographing.

Also, for stacking reasons (see the Image Stacking section below), the more photos you take the better, though with diminishing returns [4].

Using a computer

Use AstroDSLR from the computer to control the camera. Keep the camera in bulb mode to allow the software to control exposure time. Otherwise, follow these instructions.

Make a different folder for each different set of photos you take.
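A sketch of how that layout might look (the session/target folder names are made up for illustration):

```shell
# One folder per target per session, named by date,
# so each stack's frames stay separate.
session="$(date +%F)"
mkdir -p "$session/andromeda" "$session/jupiter"
```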

Drift measurement with AstroDSLR

Copied from their website:

For polar alignment by the drift method or for the validation of the guiding you can use drift measurement helper panel.

The scale of the graph is adjusted automatically. Blue curve represents drift in X and red curve in Y direction. Blue value in lower left corner is drift per image in X direction and red value in lower right corner is drift per image in Y direction.

To use the panel for polar alignment, rotate camera (to align RA/Dec axes along X/Y directions), start preview in endless loop, select accordingly bright star and use drift method.

Please note, that the graph is cleared every time you select the star in the preview image.

Without a computer

Put the camera in manual mode, and have it set to average.

Star Trails

Sometimes you’re going for that really cool effect, other times you’re not.

Here’s an article from Jerry Lodriguss on how to deal with star trails.


Nebulosity doesn’t read the color information from my raw files, so convert them to JPEG first - that’s still better than grayscale images.

for i in *.cr2; do sips -s format jpeg "$i" --out "${i%.*}.jpg"; done

From Nebulosity, open Batch -> Align and Combine Images. Select “Translation + Rotation + Scale”, click “OK”, and select the images to stack. Now select the same star in each photograph as it prompts you; you’ll go through the set 3 times (so that it can correct for translation, rotation, and scale). Finally, do some manual editing and save the end result.

Post to Instagram [5] or whatever. Use it as your new desktop background.


[1] Much better advice on how to select one. Though, usually, the best one is the one you already own.


[2] Or use f.lux to remove as much blue from your screen as possible.


[4] It’s essentially an inverse-square relation - to get 5x better quality, you need to take 25x more images.


[5] Flume seems to be a decent macOS client for Instagram. The pro version is worth it.


Sorted by date

2019-07-12 Joshua Tree

Went out to Joshua Tree National Park to try out a new scope! I’m very pleased with the results.


Jupiter

One of the first things I photographed!

This is combined from 200 separate images - two sets of 100. The first set captured details of Jupiter’s clouds (ISO 100, 1/200th-second exposure; I could have shortened the exposure even more to get better detail). The second captured the 4 Galilean moons (ISO 100, 1/10th-second exposure). I then stacked each set of 100 into its own image, and replaced the overexposed Jupiter in the moons picture with the much better image of Jupiter’s clouds. I think it worked out pretty well.

Picture of Jupiter

Andromeda Galaxy

I had to wait until about 1:30 AM for Andromeda to be sufficiently high in the sky to clear some ground obstructions (rocks).

This is composed of 100 images stacked together. 10 second exposure, ISO 2000 or so. (Any longer exposure time introduced noticeable star trails, as my mount wasn’t perfectly aligned).

Image of the Andromeda Galaxy


Pleiades

This was the last picture I took - my camera’s battery died after taking the first of what would have been 100 images. This was taken at approximately 3 AM.

10-second exposure; I forget the ISO speed.

Image of the Pleiades

2019-08-31 Joshua Tree

First weekend following a new moon, let’s get some photos of the stars!

Unlike last time, I made sure to have both camera batteries fully charged. At some point, though, I need to figure out a way to power the things from either my car or possibly an ebike battery.

Orion Nebula

This was surprisingly easy to capture. This was done with a series of 100 images, ISO 3200, and a 15 second exposure. Stacked (more-or-less automatically), then processed in post to bring out that extra oomph. Cropped from the original to bring more attention to the nebula itself. This nebula is really bright.

Orion Nebula.jpg

California Nebula

The California Nebula (NGC 1499) is an emission nebula in the constellation Perseus.

This was quite difficult to capture. Because it’s so red, you can’t visually see the California Nebula. I could barely make it out on the highest ISO setting my camera can do (25600, wow) at a 20 second exposure. Couldn’t do any longer without introducing star trails. I’d love to revisit this once I have autoguiding figured out - or at least with a much better aligned telescope.

Postprocessed to bring out the red as much as I could.

California Nebula.jpg

Image Stacking

Image stacking is the process of combining multiple images of the same thing into a single image. This is done for multiple reasons - to increase dynamic range, to reduce the effects of noise, etc.
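As a minimal illustration of the noise-reduction idea (this assumes ImageMagick is installed - it’s not the tool I use above, and the filenames are made up):

```shell
# Average every frame together: random noise cancels out across frames
# while the signal stays. `-evaluate-sequence mean` computes the
# per-pixel mean across all input images.
convert frame_*.jpg -evaluate-sequence mean stacked.jpg
```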


Keith Wiley has a really good article on the theory behind image stacking.

Astro-Tech AT102ED Telescope

It’s a nice, relatively cheap refracting telescope.

Making the Bahtinov Mask

The outer diameter of the leading element is 122mm.

I have a Bahtinov mask available here; it was generated from this SVG, which itself was generated from this page. The relevant specs are its 102mm lens diameter and its 714mm focal length.

This is also available on Thingiverse.


With a Canon EOS 6D on a T-ring, this focuses at just past 41mm.

Celestron CG-4 Equatorial Mount

My notes on how to use this mount.

Setup is relatively simple, but much more involved than the altazimuth mounts we might be used to.


Balancing

First, we balance in right ascension, then in declination.

RA balancing is necessary for accurate tracking when using the motor drive. It also eliminates undue stress on the mount.

DEC balancing is necessary to prevent sudden motions when the DEC clamp is released.

Right Ascension

  1. Release the RA clamp (the lower clutch), and position the telescope off to one side of the mount. The counterweight bar should be horizontal on the opposite side of the mount.
  2. Gradually release your hold on the telescope to see which way it rolls (to one direction or the other). Don’t let go entirely.
  3. Move the counterweight as necessary until the telescope balances (remains stationary when the RA clamp is released).
  4. Tighten the locking screw to hold the counterweights in place.


Declination

  1. Release the RA clamp and position the telescope off to one side of the mount - the same starting position as when balancing in RA.
  2. Lock the RA clamp to hold the telescope in place.
  3. Release the DEC clamp, and rotate the telescope until the tube is parallel to the ground.
  4. Gently release your hold on the telescope to see which way it rotates. As before, don’t let go entirely.
  5. Move the telescope on the mounting bracket in either direction until it no longer rotates, as tested in step 4.
  6. Tighten the mounting bracket screws.

Polar Alignment

Now we get to the part that makes equatorial mounts actually different from altazimuth mounts. This is necessary to track the stars correctly.

The goal is to place the telescope’s axis of rotation parallel to the Earth’s axis of rotation. This is done by moving the telescope vertically (altitude) and horizontally (azimuth), not in RA or DEC.

Note that the mount can really only be adjusted between 20 and 60 degrees.

There are a few ways to do this.

Latitude scale

This is the easiest way to align a telescope. It also can be done in daylight, because it only requires that you know which way is (true) north, and your latitude (degrees above the equator). This is also the least accurate, but it gets close enough for short exposure astrophotography.

  1. Make sure the polar axis of the mount is pointing due north.
  2. Level the tripod (there’s a bubble level built into the mount for this purpose)
  3. Adjust the mount in altitude until the latitude indicator points to your latitude.

Pointing at Polaris

This is conceptually simple. Polaris is less than a degree away from the celestial north pole, so you use Polaris as a stand-in for the celestial north pole. It’s about as accurate as the latitude scale method.

  1. Make sure the polar axis is pointing north.
  2. Loosen the DEC clutch knob and move the telescope so that the tube is parallel to the polar axis. When this is done, the declination setting circle will read +90 degrees; if it doesn’t, keep adjusting until the tube is parallel to the polar axis.
  3. Adjust the mount in altitude and/or azimuth until Polaris is in the field of view of the finder.
  4. Center Polaris using those same altitude/azimuth controls. Do not move the telescope in RA or DEC.

Declination Drift

This takes the longest amount of time, but produces the best results. In this method, you’re looking at two stars to see how much they drift in declination over time, which tells you how far out of alignment you are from the polar axis. Because this takes a while, you should first get a rough alignment (using either the latitude scale or pointing at Polaris).

The idea here is to choose two bright stars - one near the eastern horizon and one due south near the meridian. Both should be near the celestial equator (0 declination).

For the southern star, choose one within half a degree of the meridian, and 5 degrees of the celestial equator. If the star drifts north, the polar axis is too far east. If it drifts south, the polar axis is too far west.

Once that star no longer drifts, move on to the eastern star. This should be 20 degrees above the horizon and within 5 degrees of the celestial equator. If it drifts south, the polar axis is too low. If it drifts north, the polar axis is too high. Adjust the latitude scale to fix this.


Modifying Motor Controller for Autoguiding

See this guide from Shoestring Astronomy.


My bae is pretty great.

2019-07-17 Observability in Control Theory

She practiced a talk on control theory in front of me. These are my notes. The talk content might be wrong - she’s still learning about this.

E.g. a drone on top of a car, measuring the car.

It has a method to track the target (the car): it measures the state of the target.

x(t+1) = A * x(t)

A is a transition matrix - it maps the target from the current state to the next (next state = current state * transition matrix)

The drone can measure the target’s “process” - it can estimate the next state of the target because it has the transition matrix encoded in it.

Let’s say the drone also has some other sensors (radar/camera).

Now, y(t) = C * x(t), where y is the drone’s measurement of x(t). C maps the current state to what the drone observes.

This format is how we’d model a dynamical system. Usually there’d be other terms for noise (B - process noise, D - measurement noise).

Given this system, the system is observable if, given y(0), y(1), ..., y(l), we can backtrack to the state x(0).

How do we get from the measurements to the original state (from y(0), y(1), ... to x(0))? We know that x(1) = A * x(0), and that x(2) = A * x(1) = A^2 * x(0). Therefore, x(l) = A^l * x(0).

Similarly, y(0) = C * x(0) and y(1) = C * x(1) = C * A * x(0), and so forth: y(l) = C*A^l*x(0). This can then be rewritten as a system of linear equations, like so:

y_bar = [ y(0) ]   [ C     ]
        [ y(1) ] = [ C*A   ] * x(0)
        [ ...  ]   [ ...   ]
        [ y(l) ]   [ C*A^l ]

This can then be solved for x(0) if we have A and C. So we could write this out given matrices A and C, but it’s a long matrix, so it’d be difficult to compute.

How do we know that this is observable if it’s computationally hard to get to a unique x(0)?

So, this can be written as:

y_bar = O * x(0), where O is the observability matrix. If rank(O) == n (i.e. O has full column rank, where n is the number of states the target can be in), then the system is observable.

This doesn’t tell you how observable the system is, or how much information you need in order to get to x(0) - it could be observable, but infeasible to observe in practice.

Measuring Observability

So, measuring observability:

Observability Gramian - a different kind of matrix that is used to tell how observable a system is.

For all t from 0 to l, the sum of the squared norms of y(t) is the energy of y:

sum over t of ||y(t)||^2 = sum over t of (C * A^t * x(0))^T * (C * A^t * x(0)) = x(0)^T * G * x(0)

where G = sum over t of (A^t)^T * C^T * C * A^t is the observability Gramian. The higher the energy, the more observable the system.

If the determinant of G is high, then we have high energy in y, and the system is highly observable.

We want to maximize the minimum eigenvalue of G in order to have high observability.

These are all ways to say how observable a system is.


Why do this?

You can use this information to calculate how well a Kalman filter will work by computing the observability Gramian.

You can determine how well you designed your system.


Books

I should leave reviews on Goodreads, but I don’t.

Some notes on genre:

I vastly prefer to read sci-fi and/or fantasy. Of that, I really enjoy hard sci-fi, but that’s not a requirement.

Here’s a list of books and other readings I enjoy:


Sorted by Author

Andy Weir

  • The Martian is a hard sci-fi book about someone left behind on one of the first missions to Mars, and his struggles to get back home.
  • Artemis is a heist novel set in the first city on the moon. Like The Martian, it’s also hard sci-fi.

Fletcher DeLancey

My partner turned me on to her. Her Chronicles of Alsea series is pretty great, though at times it reads like the fan fiction it grew out of. They’re still highly worth reading.

Scott Meyer

I really enjoy his Magic 2.0 series, though it does have a significant drop-off in quality. The first two books are amazing; the third is pretty good, but not as good as the previous two. The reviews for the fourth one have kept me from continuing.

Tamora Pierce

When I was 11 or 12, her Circle of Magic books caught my eye at a Barnes and Noble. My parents bought the entire quartet for me. Somewhat recently, I began to re-read these, and remembered everything I enjoyed about them, plus additional things that my older perspective was able to pick up on. This time around, I also read her Circle Opens quartet, which is also good. Highly recommend these feminist books for any fans of fantasy.

Other Readings

  • HFY is a subreddit where people share sci-fi/fantasy stories where humans are the badasses, usually by picking one particular human trait and exaggerating it to give us an advantage over other species.


Fan Fiction

I said I’d link all sorts of crap.

Organized by fandom

Legend of Korra

Almost always Korrasami stuff, because of course.


Of course I’m going there.

Greek Mythology

Found some Medusa F/F.

From this reddit thread

Command Line Programs

Doing things in the shell.

The Silver Searcher

A code-specific searching tool similar to ack, but faster: ag.

By default, you can use it to search for a given pattern in all files under a directory tree.

You can modify it with -g to search for files with names matching the given pattern (similar to find | grep, but much faster). ag -g '\.swift$' returns all files ending in “.swift”.

You can modify it with -G to search for a pattern only in files with names matching a pattern. ag -G '\.swift$' Foo searches for the pattern “Foo” only in files ending in “.swift”.


Fastlane

Fastlane is a set of Ruby tooling to make mobile development suck a lot less. I use it to automate a lot of the shitty parts of iOS development.


Fastlane Scan essentially wraps xcodebuild | xcpretty, with additional properties.


  • the -s flag specifies a scheme to use when building and running tests
  • the -q flag allows you to specify the configuration to use when building the app.
  • the -a flag allows you to specify a device to run the tests on
  • the --only_testing flag allows you to specify a list of test bundles to run. It takes a comma-separated list of strings (e.g. fastlane scan --only_testing "foo,bar,baz")


ffmpeg

ffmpeg is a CLI program for editing and manipulating videos.

Resizing Video Frame Size

From this stackoverflow question, resizing the video frame size is an easy and fast way to reduce video file size. I’ve found it especially useful for reducing file size of screencasts from my phone.

ffmpeg -i input.mkv -vf "scale=iw/2:ih/2" half_the_frame_size.mkv will reduce a 2x retina-sized video down to non-retina size. ffmpeg -i input.mkv -vf "scale=iw/3:ih/3" a_third_the_frame_size.mkv will reduce a 3x retina-sized video down to non-retina size.

Creating a video from images

This is awesome. From this stackoverflow question, it’s more-or-less a combination of -framerate $X (the rate the input images are read at) and -r $Y (the output frame rate) to get what you want. You can also use -vf fps=$X to specify the fps of the video.
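A sketch of the above (filenames and rates here are made-up examples):

```shell
# -framerate 2: read the numbered input images at 2 images per second.
# -r 30: write a 30 fps output video (frames are duplicated as needed).
ffmpeg -framerate 2 -i img%03d.jpg -r 30 timelapse.mp4
```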


Git

Git is a decentralized version control system.

Searching for when a given string was introduced

When you want to find out which commit first referenced a given string:

git log -S <string to search for> --source --all

See this stackoverflow answer.

Reverting commits without creating a new one

This is useful when you want to revert a set of commits, but change them before committing again.

git revert -n <commit hashes to revert>

See the git documentation


jq

jq is like sed for JSON data.

The tutorial will cover most use cases.

Color the json input: curl $JSON_PRODUCING_URL | jq.

Get first item in a list: echo $JSON_LIST | jq '.[0]'

Get a specific field from each item in a list: echo $JSON_LIST | jq '.[].foo'

You can even convert that to other json objects: echo $JSON_LIST | jq '.[] | {foo: .foo, baz: .bar.baz}'. Don’t forget to use ' so that the | character gets sent to jq and isn’t interpreted by the shell.
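A self-contained example of that object construction (the input JSON is made up; jq must be installed):

```shell
# Build a new object from each element of the list.
echo '[{"foo": 1, "bar": {"baz": 2}}]' | jq -c '.[] | {foo: .foo, baz: .bar.baz}'
# → {"foo":1,"baz":2}
```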


Pandoc

pandoc is a utility for converting documents from one format to another.

Creating Slide Shows

See this page.


Shell

Bash shell, Z shell, etc.


Checking if a file exists

You can check if a file exists with -f:

if [ -f "some_file" ]; then
    echo "file at './some_file' exists and is not a directory!"
fi

You can test that a directory exists with -d, e.g.:

if [ -d "some_directory" ]; then
    echo "directory at './some_directory' exists!"
fi

You can reverse conditionals with !:

if [ ! -d "some_directory" ]; then
    echo "'./some_directory' does not exist!"
fi

Checking if a command exists

You can check whether a command exists by checking if command -v ${COMMAND_TO_CHECK} >/dev/null 2>&1 returns 0 (it exists) or non-zero (it does not exist):

if ! command -v my_special_script >/dev/null 2>&1; then
    echo "my_special_script not found"
fi

Checking if a string is a number

You can use the -eq operator to verify that something is a number: if ! [ "${some_number}" -eq "${some_number}" ] 2>/dev/null; then echo "${some_number} is not a number"; fi

You can similarly use [ "${some_number}" -ge 1 ] 2>/dev/null to determine if something is a positive number.
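That trick can be wrapped in a small reusable helper (the function name is my own):

```shell
# is_number succeeds only when its argument parses as an integer:
# the -eq comparison errors out (non-zero exit) on non-numeric input.
is_number() {
    [ "$1" -eq "$1" ] 2>/dev/null
}

is_number 42 && echo "42 is a number"
is_number abc || echo "abc is not a number"
```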


You can use the trap command to run code when the shell script exits (or when any signal occurs), like so:

function on_end {
    echo "woohoo"
}

trap on_end EXIT

which will print “woohoo” to stdout when the script exits.


youtube-dl

youtube-dl is a Python program for downloading videos from YouTube and other sites.

Video Formats

To get a list of video formats to download, pass the -F flag; this returns an ASCII table of available formats. It looks like so:

$ youtube-dl -F 'https://www.youtube.com/watch?v=9pBmNcv0Mlw'
[youtube] 9pBmNcv0Mlw: Downloading webpage
[youtube] 9pBmNcv0Mlw: Downloading video info webpage
[info] Available formats for 9pBmNcv0Mlw:
format code  extension  resolution note
249          webm       audio only DASH audio   82k , opus @ 50k, 118.66MiB
250          webm       audio only DASH audio   97k , opus @ 70k, 151.06MiB
171          webm       audio only DASH audio  138k , vorbis@128k, 251.62MiB
140          m4a        audio only DASH audio  148k , m4a_dash container, mp4a.40.2@128k, 297.95MiB
251          webm       audio only DASH audio  161k , opus @160k, 293.26MiB
160          mp4        256x144    144p  137k , avc1.4d400c, 30fps, video only, 159.66MiB
278          webm       256x144    144p  228k , webm container, vp9, 30fps, video only, 263.76MiB
242          webm       426x240    240p  229k , vp9, 30fps, video only, 304.18MiB
133          mp4        426x240    240p  233k , avc1.4d4015, 30fps, video only, 229.71MiB
243          webm       640x360    360p  408k , vp9, 30fps, video only, 510.55MiB
134          mp4        640x360    360p  528k , avc1.4d401e, 30fps, video only, 406.49MiB
244          webm       854x480    480p  735k , vp9, 30fps, video only, 743.27MiB
135          mp4        854x480    480p  969k , avc1.4d401f, 30fps, video only, 606.68MiB
247          webm       1280x720   720p 1511k , vp9, 30fps, video only, 2.20GiB
302          webm       1280x720   720p60 1752k , vp9, 60fps, video only, 1.82GiB
136          mp4        1280x720   720p 2244k , avc1.4d401f, 30fps, video only, 2.12GiB
298          mp4        1280x720   720p60 2515k , avc1.4d4020, 60fps, video only, 1.11GiB
248          webm       1920x1080  1080p 2658k , vp9, 30fps, video only, 3.95GiB
137          mp4        1920x1080  1080p 3138k , avc1.640028, 30fps, video only, 3.46GiB
299          mp4        1920x1080  1080p60 3941k , avc1.64002a, 60fps, video only, 3.69GiB
303          webm       1920x1080  1080p60 4417k , vp9, 60fps, video only, 6.10GiB
18           mp4        640x360    medium , avc1.42001E, mp4a.40.2@ 96k, 1.17GiB
43           webm       640x360    medium , vp8.0, vorbis@128k, 1.80GiB
22           mp4        1280x720   hd720 , avc1.64001F, mp4a.40.2@192k (best)

The format code (first column in the list) is the code you pass along with the -f flag to download a specific format.

E.g. downloading the above 1280x720 format is:

youtube-dl -f 22 'https://www.youtube.com/watch?v=9pBmNcv0Mlw'

Other DIY Projects

DIY Projects other people have done that inspire me.

DIY Smartwatch

Imgur gallery describing the project, with a reddit post, which links to this Github repository.

DIY Ebook Reader

This person published a DIY ebook reader.

Automatic Fume Extractor

I’ve been wanting to build my own; at its simplest, it’s a fan connected to an activated charcoal filter.



Personal Finance

A lot of my views on personal finance come from Mr. Money Mustache.


Budgeting

I don’t practice anything formal like YNAB. I do keep track of my finances using ledger, with ledger-autosync to automate syncing, and I occasionally review where I spend money to reduce expenses.

Overall, my system for spending follows this order:

  1. Rent & other debts (car payment, internet, phone, etc.)
  2. Food & other necessities (clothing, etc.)
    • I prefer to spend on groceries vs. eating out. While the notion that a $5/day coffee habit keeps you poor is ridiculous, you generally end up with better food once you learn how to make it yourself. It’s better to reserve eating out as a special thing.
  3. Everything else.

In general, anything that falls under “everything else” is something I spend at least a day thinking about before I decide whether to get it or not. The more expensive it is, the longer I spend thinking on it.

For especially large purchases, I actually do set up budgeting. This works out as using ledger’s virtual postings feature to place money in an account prefixed with “Budget” every time I get paid. That is, it’s envelope budgeting for a single large purchase.
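As a sketch, a hypothetical paycheck entry with a virtual posting setting money aside (all account names and amounts made up; ledger treats the parenthesized posting as virtual, so it doesn’t need to balance against the real ones):

```
2019/09/01 Paycheck
    Assets:Checking            $3,000.00
    Income:Salary             $-3,000.00
    (Budget:Telescope)           $200.00
```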

Buying Used vs. New

I’m really bad at this. I should prefer used, but I often go for new just because it’s easier and faster. This is a habit I’m working on correcting.


Investing

Any money you invest, treat as if it no longer exists - especially money in a 401k or other retirement account that has a penalty if you access it before some age.


401k

Always contribute at least the minimum needed to max out your company’s matching. For example, if your company matches up to 4%, then put in at least that 4%.

  • For 2019, the 401k contribution limit is $19,000.
  • If your company offers both 401k and Roth 401k, then do a pre-tax contribution and invest the tax savings.
    • If you don’t think you’ll invest the tax savings, then contribute to the post-tax 401k.
    • On the other hand, if you’re currently in a high tax bracket (and have low expenses), then put the money in the pre-tax 401k.
      • Because you should have low expenses in retirement, you’ll be withdrawing in a low tax bracket.
  • Just set it, and check on it every year as the contribution limit changes, or your company changes their matching policy.
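As a toy calculation (the salary and match rate are made-up numbers):

```shell
# With a $100,000 salary and a 4% employer match, the minimum annual
# contribution that captures the full match is:
salary=100000
match_percent=4
echo "$(( salary * match_percent / 100 ))"  # → 4000
```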

Index Funds


Federal Income Tax Brackets

CA income tax brackets


Flying

It’s fun.

Aeronautical Decision Making

ADM, or Aeronautical Decision Making, covers all of the decisions made around flying - from whether or not to even go flying in the first place, to discontinuing a flight or completing it as planned.

This is taken from the PHAK, either verbatim, or adapted.

Steps for good decision-making:

  1. Identify personal attitudes hazardous to safe flight
  2. Learn behavior modification techniques
  3. Learn how to recognize and cope with stress
  4. Develop risk assessment skills
  5. Use all resources
  6. Evaluate the effectiveness of your ADM skills.

Risk Management

  1. Accept no unnecessary risk.
    Duh, flying has risk, but maybe don’t fly VFR in low visibility conditions? Or at least, do so with a CFI who has experience in those conditions, from whom you can learn.
  2. Make risk decisions at the appropriate level.
    PIC owns all the risk. Don’t let passengers bully you into violating 1, and don’t let ATC do so either. It’s always appropriate to tell ATC “unable” to a command.
  3. Accept risk when benefits outweigh costs.
    Don’t stack risks. Don’t fly an unfamiliar plane in MVFR conditions.
  4. Integrate risk management into planning at all levels.
    Not just in preflight planning, but at all stages of the flight. Maybe the weather goes to shit en-route. In which case, reconsider whether the increased risk is worth it, or maybe you can go somewhere else - or even just return back to where you came from.

Hazard and Risk

A hazard is a condition, event, or circumstance (whether real or perceived) that a pilot encounters. Risk is the pilot’s assessment of that hazard. Note that different pilots can come up with different risk assessments for the same hazard.

Hazardous Attitudes

Studies have identified 5 hazardous attitudes that can prevent making sound decisions:

  • Anti-authority: “Don’t tell me.” Antidote: “Follow the rules, they’re usually right.” Aviation regulations are often written in blood; there’s a very good reason to follow them.
  • Impulsivity: “Do it quickly.” Antidote: “Not so fast. Think first.” Like with everything in modern life, thinking before you act is always the correct thing to do. Actually doing that, though, is much harder.
  • Invulnerability: “It won’t happen to me.” Antidote: “It could happen to me.” Power loss on takeoff is a thing that only happens to other people, right? Wrong. It could totally happen, so be prepared in case it does.
  • Macho: “I can do it.” Antidote: “Taking chances is foolish.” Don’t take unnecessary risks. Don’t do things to prove to yourself/others that you can. You’re already a cool person by being able to fly; you don’t have to prove anything.
  • Resignation: “What’s the use?” Antidote: “I’m not helpless. I can make a difference.” This is, to me, probably the most deadly of the 5 attitudes. Getting into an emergency situation via the other 4 is bad, but then deciding that there’s nothing you can do - especially when there often is something you can do - is what will kill you. Less dramatically, letting someone bully you into going along with unreasonable requests can also kill you. You are PIC, you are in charge. Act like it.

Risk Assessment Matrix

Also copied, more or less, is a matrix for deciding how bad a particular risk is, with likelihood on one axis and severity on the other.

Likelihood is the expected chance that the event will occur:

  • Probable: Will occur several times
  • Occasional: Will probably occur sometime (expected at least once)
  • Remote: Unexpected to occur, but possible
  • Improbable: Very unlikely to occur.

Severity is the expected consequence of the event happening:

  • Catastrophic: Loss of life or property
  • Critical: Severe injury/major damage (expensive to repair, insurance might declare plane totaled)
  • Marginal: Minor injury/minor damage (only a few AMUs of damage)
  • Negligible: Less than minor injury/damage
Likelihood | Catastrophic | Critical | Marginal | Negligible
---------- | ------------ | -------- | -------- | ----------
Probable | High | High | Serious | Medium
Occasional | High | Serious | Medium | Low
Remote | Serious | Medium | Medium | Low
Improbable | Medium | Medium | Medium | Low
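Since the matrix is small and fixed, it can also be encoded as a simple lookup table - a sketch, with the level names from the table lowercased:

```python
# The risk assessment matrix as a lookup table.
# Keys are (likelihood, severity); values are the assessed risk level.
RISK_MATRIX = {
    ("probable",   "catastrophic"): "high",
    ("probable",   "critical"):     "high",
    ("probable",   "marginal"):     "serious",
    ("probable",   "negligible"):   "medium",
    ("occasional", "catastrophic"): "high",
    ("occasional", "critical"):     "serious",
    ("occasional", "marginal"):     "medium",
    ("occasional", "negligible"):   "low",
    ("remote",     "catastrophic"): "serious",
    ("remote",     "critical"):     "medium",
    ("remote",     "marginal"):     "medium",
    ("remote",     "negligible"):   "low",
    ("improbable", "catastrophic"): "medium",
    ("improbable", "critical"):     "medium",
    ("improbable", "marginal"):     "medium",
    ("improbable", "negligible"):   "low",
}

def assess(likelihood: str, severity: str) -> str:
    # Case-insensitive lookup into the matrix above
    return RISK_MATRIX[(likelihood.lower(), severity.lower())]
```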

Mitigating Risk

Mitigating a risk generally means one of the following:

  • Cancel the flight
  • Delay the flight
  • Bring someone more experienced who can help you address the risk

Roughly in order of likelihood.

One suggested way to eliminate the “must go home” pressure is to always bring an overnight kit with you, so that if you do get stuck somewhere you’re at least fine for the night.

Remember, the general rule for choosing to fly GA to a place is:

Time to spare, go by air.


Air Traffic Control. The system of people and equipment designed to help keep you safe in the air.

Per FAR 91.125, light gun signals that ATC can send you in the event of lost comms are:

Color/Type | On Ground | In Flight
---------- | --------- | ---------
Green, Steady | Cleared for takeoff | Cleared to land
Green, Flashing | Cleared to taxi | Return for landing
Red, Steady | Stop | Give way to other aircraft and continue circling
Red, Flashing | Taxi clear of runway in use | Airport unsafe - do not land
White, Flashing | Return to starting point on airport | N/A
Red & Green, Alternating | Exercise extreme caution | Exercise extreme caution


Checklists for airplanes I fly.


  • [] Airspeed - best glide
  • [] Best Field - keep looking for a better place to land.
  • [] Checklist
  • [] Declare
    • [] Squawk 7700
    • [] Mayday (121.5 or current freq)
  • [] Engine - Shutdown
  • [] Flaps - As required
  • [] Get Ready (for crash)
    • [] Seatbelts - Tighten
    • [] Sunglasses, headset - Remove
    • [] Passenger - Secure
    • [] Master switch - Off

Preflight checklists

Mnemonics and other things to consider, often before you even get to the airport


IMSAFE

Mnemonic to go over before you even go to the airport. Covers how you are doing.

  • Illness: Are you sick, or do you feel like you’re becoming sick?
  • Medication: Taking anything not allowed, or otherwise out of the ordinary?
  • Stress: Are you stressed/worried about other things? Want to be in a good headspace.
  • Alcohol: >8 hours bottle to throttle in the US.
  • Fatigue: Had enough sleep? Had enough to eat?
  • Emotion: How are you feeling? Are you mentally well enough to fly?

For me, “am I able to make the bike ride to SMO?” is usually enough to answer all of these.


PAVE

Helps you perceive hazards and assess risks.

  • P: Pilot
Am I ready for this flight? - in terms of experience, recency, currency, and physical and emotional condition. See IMSAFE for the latter two.
  • A: Aircraft
    • Is this the right aircraft for the flight?
    • Am I familiar with and current in the aircraft?
    • Is this aircraft properly equipped for the flight? (Instruments, lights, navigation and communication equipment)
    • Can this aircraft use the runways available for the trip w/ margin for safety & weather?
    • Can this aircraft carry the planned load?
    • Can it operate at the intended altitudes?
    • Does it have sufficient fuel capacity for each leg?
    • Does it actually have the necessary fuel in it?
  • V: EnVironment
    • Weather
    • Terrain
    • Airports
    • Airspace
    • Nighttime
  • E: External Pressures
    Other things influencing the flight.
    • Someone waiting at the airport for the arrival of the flight
    • passenger the pilot doesn’t want to disappoint (See: hazardous attitudes)
    • desire to impress someone (see: hazardous attitudes)
    • desire to demonstrate pilot qualifications (see: hazardous attitudes)
    • desire to satisfy a personal goal (”get-there-itis”)
    • the pilot’s general goal-completion orientation
    • Emotional pressure/pride that you might not be as good as you thought you were.

Using an E6B Mechanical Flight Computer

Literally something you only need to do during primary training, to prove you CAN use an e6b. Sigh.

Density Altitude

For engine performance!

As you know, density altitude is pressure altitude adjusted for non-standard temperature. Standard temperature is 59° F, or 15° C at sea level.

Pressure altitude is true altitude adjusted for non-standard pressure. Standard pressure is 29.92 in. hg.

Pressure altitude is a really simple formula: pressure_altitude = (standard_pressure - pressure_setting) * 1000 + true_altitude

And while we can easily correct for non-standard temperature (the formula is: density_altitude = pressure_altitude + (120 * (outside_air_temperature - isa_standard_temperature))), it’s not as simple to do it manually, and an e6b is easy enough to use here.

Estimating ISA standard temperature: isa_standard_temperature = 15 - (true_altitude / 500), i.e. standard temperature drops about 2° C per 1000 feet.

In the image below, the pressure altitude is 4500 feet, the true altitude is 4500 feet, and the outside air temperature is estimated to be 28° C. For the sake of the calculation, we bump that to 30° C. As you can see, the e6b then tells us that the density altitude is just over 7000 feet, which we can double-check by hand:

density_altitude = 4500 + (120 * (28 - (((4500 / 500) - 15) * -1)))
 = 4500 + (120 * (28 - ((9 - 15) * -1)))
 = 4500 + (120 * (28 - (-6 * -1)))
 = 4500 + (120 * (28 - 6))
 = 4500 + (120 * 22)
 = 4500 + 2640
 = 7140

Which is about what the e6b tells us it is.

sample density altitude calculation
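The rules of thumb above can be sketched in a few lines of Python (the constants are the approximations from the text, not an exact atmosphere model):

```python
# Rule-of-thumb pressure and density altitude. Altimeter settings in inches
# of mercury, altitudes in feet, temperatures in degrees Celsius.
STANDARD_PRESSURE_INHG = 29.92
STANDARD_TEMP_C = 15.0

def pressure_altitude(true_altitude_ft, pressure_setting_inhg):
    # Roughly 1000 ft per inch of mercury below standard pressure
    return (STANDARD_PRESSURE_INHG - pressure_setting_inhg) * 1000 + true_altitude_ft

def isa_temperature(true_altitude_ft):
    # Standard lapse rate: about 2 degrees C per 1000 ft (1 per 500 ft)
    return STANDARD_TEMP_C - true_altitude_ft / 500

def density_altitude(pressure_altitude_ft, oat_c, true_altitude_ft):
    # Roughly 120 ft per degree C of deviation from ISA temperature
    return pressure_altitude_ft + 120 * (oat_c - isa_temperature(true_altitude_ft))

# The worked example: 4500 ft true altitude, standard pressure, 28 degrees C
pa = pressure_altitude(4500, 29.92)  # 4500 ft
da = density_altitude(pa, 28, 4500)  # 7140 ft
```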

True Airspeed

True airspeed is effectively calibrated airspeed corrected for the density altitude.

So, in this case let’s re-use the earlier density altitude of ~7100 feet.

Calibrated airspeed then corresponds to the inner ring, and true airspeed the outer ring. That is, if your calibrated airspeed is 150 kts, then the true airspeed is ~167 kts.

sample true airspeed calculation

Remember to correct for decimal placement.
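A rough rule of thumb - true airspeed increases about 2% per 1000 feet of density altitude - makes a decent cross-check on the wheel. It’s coarser than the E6B’s scales, so here it comes out a few knots above the ~167 kts read off the wheel:

```python
def true_airspeed(cas_kts, density_altitude_ft):
    # Rule of thumb: TAS is about 2% higher than CAS per 1000 ft of
    # density altitude. Approximate only - the E6B's scales are closer
    # to the exact density-ratio correction.
    return cas_kts * (1 + 0.02 * density_altitude_ft / 1000)

tas = true_airspeed(150, 7100)  # ~171 kts; the E6B reads ~167
```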

Ground Speed

For a given true airspeed and heading, you can easily calculate your ground speed.

For this example, let’s say you are heading 270 at 125 knots, with the wind at 300 at 15 knots.

Set the center “dot” (the grommet) over some value on the slide - I pick 100 because it’s a nice round number. Rotate the circle so that the wind direction (so... 300) is under the true index. Now, mark (IN PENCIL) the wind speed up from the value you chose (place a mark where it says 115). Like so:

sample wind marker placement

Now rotate so that the true course (270) is under the true index, and slide the card until the wind mark lies on the true airspeed arc (125). Like so:

sample ground speed calculation

Now we can read the ground speed under the grommet (in this case, ~112 knots), and the wind correction angle from the mark’s sideways offset (~3 degrees, to the right - or fly heading 273).

When done, wipe the pencil lead off.
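The wind face is just solving the wind triangle graphically, so the same trigonometry works as a cross-check. Note that wind from 300 on a 270 course is a quartering headwind, so the ground speed should come out below the true airspeed:

```python
import math

def wind_triangle(course_deg, tas_kts, wind_from_deg, wind_kts):
    """Return (wind_correction_angle_deg, ground_speed_kts).
    Positive WCA means crab to the right (toward the wind here)."""
    angle = math.radians(wind_from_deg - course_deg)
    # Crab just enough that the crosswind component cancels out
    wca = math.asin(wind_kts * math.sin(angle) / tas_kts)
    # What's left along the course is airspeed minus the headwind component
    gs = tas_kts * math.cos(wca) - wind_kts * math.cos(angle)
    return math.degrees(wca), gs

wca, gs = wind_triangle(270, 125, 300, 15)
# wca ~ 3.4 degrees right (fly ~273); gs ~ 112 kts
```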

Time to travel distance

Now that we have a ground speed for this leg, we can calculate the time it takes to travel a given distance.

For this example, let’s say we have a ground speed of 100 kts, and the leg is going to be 13 nm.

Before we use the e6b, let’s do some quick mental math to estimate an acceptable range: 13 nm / 100 kts is just over 1/10 of an hour, and 1/10 of an hour is 6 minutes. So the answer should be a bit more than 6 minutes - call it between 7 and 9.

First thing we do to calculate this on an e6b is to align the “60 RATE” box on the inner circle with the approximate speed you’re going. In this case, it should be aligned with the “10” at the top.

sample estimated time elapsed speed alignment

Next, we count the distance from the “10” marker - 13 ticks from the marker, and compare that with the corresponding value on the “time” scale. This gives us our estimated time. In this case, it’s just under 8 minutes. As expected.

sample estimated time elapsed calculation

Of course, since we have the time (and a calculator handy) to do this, the actual estimated time is:

distance = speed * time
time = distance / speed
time = 13 nm / 100 kts
time = 0.13 hrs
time = 7.8 minutes

As expected.

Fuel usage

Now that we have time to travel a given distance, we can use the known rate of fuel consumption to calculate fuel usage.

For this, let’s assume fuel usage rate of 6.8 gph at cruise - a somewhat efficient Cessna. I’ll also use a time from the previous calculation.

We’ll mentally move the decimal point to the right one, to place the “rate” indicator at 68 on the calculator.

fuel usage setting

Now, on that same minutes time scale we used earlier, we count 8 minutes - or the amount of time we plan to travel that leg.

The fuel used then corresponds to the other side of that 8 minute marker - in this case, 9 ticks above the 68 tick. Keep in mind to move the decimal point back to the left, so that we use 0.9 gallons as our expected fuel usage.

fuel usage calculation

Note that we could go the other way - if we had 10 gallons available, then we could move 100 ticks clockwise to get the number of minutes we can travel at that rate (in this case, about 88 minutes - just under 1.5 hours). You can also compare to the inner “time scale” to get the value in hours instead of minutes.
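Both the time and fuel problems are the same ratio the rate index encodes; spelled out with the figures from these examples:

```python
def leg_time_min(distance_nm, ground_speed_kts):
    # Minutes to cover a distance at a given ground speed
    return distance_nm / ground_speed_kts * 60

def fuel_used_gal(time_min, burn_gph):
    # Fuel burned over a number of minutes at a gallons-per-hour rate
    return burn_gph * time_min / 60

def endurance_min(fuel_gal, burn_gph):
    # Minutes of flying available from a quantity of fuel
    return fuel_gal / burn_gph * 60

t = leg_time_min(13, 100)   # 7.8 minutes
f = fuel_used_gal(8, 6.8)   # ~0.91 gallons
e = endurance_min(10, 6.8)  # ~88 minutes
```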

Electric Plane

Lessons learned and consolidation for the electric plane I’m designing.

This might not materialize as a thing I do, mostly due to lack of space to build the thing in.

Base Plane

Most of my calculations have used a Sling 2 as the “base” plane - using their published figures for MGTOW, and applying the 50% rule, I can comfortably fit 100 kilowatt-hours of battery and still have weight for a passenger + light amount of cargo.

I’m still deciding between the tailwheel variant or not, because removing the drag from the nosegear is really tempting.

I’ve also run the numbers for other, more readily available, experimental aircraft, but the Sling still works out to be my best bet.

Battery System

Main page here

When I started this project, I thought I might use salvaged Tesla batteries. As I did more research, I realized that Tesla battery packs are severely over-engineered for my needs1. I can build a battery system that won’t be as good as a Tesla system, but it’ll be good enough, and much lighter than a Tesla system.

Pack Design

I’m still working through this.

Current thought is to use LG MJ1 cells, which, as of early 2019, have the highest energy density (just under 260 watt-hours per kilogram) of any battery cell available. This might change by the time I get around to being ready to manufacture the battery packs.
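Quick arithmetic on what that density means for the ~100 kilowatt-hours mentioned under the base plane (bare cells only; packaging, BMS, and wiring all add weight on top):

```python
pack_kwh = 100        # target capacity from the base-plane estimate
cell_wh_per_kg = 260  # approximate MJ1 cell-level energy density quoted above

# Mass of the bare cells needed to hit the target capacity
cell_mass_kg = pack_kwh * 1000 / cell_wh_per_kg  # ~385 kg
```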

Mounting the Batteries

Initially, I thought I’d mount the batteries where the gas tanks would go - there’s plenty of space, the wing spar will handle the load, etc. But, I realized that I need to be able to access the battery packs easily, and for that it’s much easier to place them firewall forward or otherwise in/around the fuselage. (Having to take apart the wings, or build in a folding hatch, was not appealing to me).

I need to CAD this up, but the current thought is to place most of the batteries in front of the firewall, with the rest behind the main seat, as weight and balance dictates.

One of the super nice things about an electric plane is that the “fuel” doesn’t slosh around, or otherwise change the weight and balance. Which makes weight and balance calculations much easier, as well as allowing me to better optimize weight distribution. Of course, this nicety is countered by the fact that I’m always running at the heaviest fuel load.


I’m still figuring this out, and once I have this figured out, I’m sure my battery system will change to suit this.

I’m aware of the existence of a standard for electric airplane charging, but I’m unaware of its contents.

So, instead, I’m thinking of integrating an automotive EV charger. These are designed for ~400V battery systems, so I should be able to get one to work with mine.

I still haven’t ruled out working with actual electrical engineers to design/build my own.

Ideally, though, I’d be able to integrate an aircraft charger.


Current thought is two Emrax 228 motors in a stack configuration.

The stack configuration is for redundancy and power reasons.

  • if one motor (or motor controller) dies, then the other can pick up the slack, with a lower max-power.
  • this reduces the strain on each motor, which should improve their longevity
  • For my desired voltage (400V), it’s much easier to find motor controllers that are rated for the lower power each motor will require.

Motor Controllers

Still researching this. Ideally, these’ll be air-cooled controllers that are rated for 100 A continuous at ~400V.

Solar Charger

While I’m not going to slap solar cells on the plane, I do want to build a folding solar array that can be stored in the plane.

sunelec is a place where you can buy PALLETS of solar panels for fairly cheap.

Things that won’t be on the MVP

Out of scope things that won’t be on the plane, at least, not initially.

Motorized Wheels

For making ground operations much more efficient, I’ve considered placing ebike motors in the main gear of the airplane. The thought was to aid in taxiing (don’t use the propeller to move), takeoff (use motors + propeller to get up to speed), and landing (regenerative braking). However, for reasons of simplicity, I’m not going to do that.

I still might build motors into the wheels without hooking them up to anything, though.

Solar Wings

TL;DR: It’s not worth it. Yet.

For the planes I’m considering, I have about 130 square feet of total wing area. With the most efficient solar cells available on the market, I expect to get approximately 25 watts per square foot, or about 3 kilowatts for the entire wing. For reasons, I expect to only be able to utilize at most 2/3rds of the total wing area, reducing this to 2 kilowatts at most (realistically, closer to 1). This is not useful whatsoever for extending the duration of flight (it would add on the order of 10 minutes of total duration), which means it’s only useful for charging, either to supplement grid power (that’ll be a fun challenge), or when grid power is not available.
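Spelling out the arithmetic above:

```python
wing_area_sqft = 130  # total wing area for the planes I'm considering
watts_per_sqft = 25   # optimistic output for top-end cells

peak_w = wing_area_sqft * watts_per_sqft  # ~3.2 kW if the whole wing produced
usable_w = peak_w * 2 / 3                 # ~2.2 kW with only 2/3 of the area usable
```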

There are other things I could do to increase the amount of solar, e.g. covering most of the fuselage & tail, but that’s not really worth doing.

Additionally, just adding solar cells on top of the wings will affect the aerodynamics, potentially in a way I don’t want it to.

Instead, I’m considering building a folding array that I can set up next to the plane and use to charge it. This’ll have a convenience and weight penalty compared to directly mounting the cells, but I’ll have much more surface area available, and it won’t interfere with the aerodynamics of the plane.

Other Electric Plane Builds

  • Helno’s Electric Motorglider

  • Farfle’s Electric Ultralight

  • 1

This plane is going to be based in LA. The batteries won’t overheat from use (air-cooled), though they do need some cooling to protect them while the plane sits outside in summer. Heating won’t be required (see: the Model 3 lacks a battery heater), but even if it is, I can utilize the same environmental cooling system to heat as well as cool.

Battery System

Really, the pack design is going to be led by the BMS. Charging is the main unsolved problem in this. Once I figure out a solution to that, everything else should fall into place.

The current thought is 3 strings of 12s25p modules, with 9 modules in series each. For a total of 27 total modules, and 8100 total cells.

The parallel strings will introduce circulating currents, but I’d rather have the backup capacity available. I have thoughts of mitigating this by tying the strings together at the pack level, with a fuse or circuit breaker to disable individual modules.
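A quick sanity check on that topology. The per-cell figures are my assumptions based on the MJ1’s published specs (roughly 3.5 Ah and 3.6 V nominal), not measurements:

```python
strings = 3
modules_per_string = 9
series_per_module = 12    # 12s per module
parallel_per_module = 25  # 25p per module

cell_nominal_v = 3.6      # assumed nominal voltage per MJ1 cell
cell_capacity_ah = 3.5    # assumed capacity per MJ1 cell

modules = strings * modules_per_string                       # 27 modules
cells = modules * series_per_module * parallel_per_module    # 8100 cells
series_cells = modules_per_string * series_per_module        # 108s per string
pack_nominal_v = series_cells * cell_nominal_v               # ~389 V
pack_kwh = cells * cell_capacity_ah * cell_nominal_v / 1000  # ~102 kWh
```

This lines up with the ~400 V system and ~100 kilowatt-hour figures used elsewhere in these notes.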

TODO: add circuit diagram.


Has its own page


There exists a working group to design aircraft chargers - they want to come up with a single charger standard. Last I checked, they don’t have anything public released.

Instead, I’m thinking of either placing an onboard J1772 charger, or potentially even a tesla-compatible charger. While it would be awesome to have compatibility with the supercharger network, there are precisely 0 superchargers on airport ramps - making this useless to me 1


Am I going to land on a highway and then taxi to a supercharger? I’d be floored if there’s even 1 supercharger station where that could work.

Battery Monitoring System

Henry has the Battery Murdering System that he uses in the Quick-E. I could probably ask him about it.

I do not (yet?) have the skills necessary to safely design a BMS - I’d rather not do that.

This thread documents another manager-worker BMS that almost exactly suits my needs. I might put my effort into that instead.

Given that, this page aims to document my current progress and goals with my own BMS, starting from the worker-boards (I’ll build out the manager board later)

Worker Board

This is based around the LTC6811-1 chip1.

Manager Board

I’m likely going to go with the LTC6820 chip2.


Both the LTC6811 and the LTC6820 use Analog Devices’ isoSPI interface to safely and reliably transmit data between chips. The idea here is that, instead of directly connecting multiple chips, as you might in traditional SPI, each chip is isolated via transformers, and data is transmitted as differential signals through them. The same idea shows up in other technologies (twisted-pair ethernet, for example); isoSPI is mostly just adapting it to carry SPI.


The front panel, and the overall pilot interface.

In addition to stock Sling (which is essentially dual Garmin G3X), I’ll likely need at least one custom display for the electrics information.

Motor Controls

If I don’t do motorized wheels, this is really simple - single throttle, which directly controls the amount of power to send to the motor controller.

Or is it? If I want to do some type of propeller regen (windmilling the propeller to draw power from the motor and slightly recharge the batteries), or even a reverse throttle, then what does that look like? I’ve had three thoughts on this:

  • The neutral point on the throttle is not when the throttle is fully let out.
    The idea here is to leave some space in the “back”, which would control how much reverse throttle to apply. Maybe even spring-load it so that the throttle returns to neutral unless pressure is applied. This has the benefit of keeping the simple “the motor power is controlled by one and only one input” model that I’m used to, though it does introduce some complexity, in that I really don’t want to make it easy to accidentally leave reverse throttle applied.
  • Using the brakes will apply reverse throttle. The idea here is that, if you apply both brakes at once, then forward power is cut, and reverse power is applied proportionally to the lesser of the brake inputs (e.g. if the left brake is only 25% applied, and the right is 50% applied, then only 25% reverse throttle is applied). This is nice in that the only way to get reverse power is a situation where you’re already trained to want it - when applying the brakes. However, it does introduce control complexities, in that the motor needs to know to cut power when brakes are applied. Additionally, this makes it annoying when you want to make a small-radius turn (apply power, apply only the right or left brake, and turn on a point) if you accidentally apply the opposite brake.
  • Switch to apply reverse throttle. This is probably the simplest to do, in both hardware and software. Essentially, add a single (hardware?) switch that selects forward or reverse throttle. This is likely what I’ll go with, but it will add some overhead (essentially, another item on the checklist: ensure the power direction switch is forward).

In all three cases, I think I will have it that neutral throttle will engage the windmilling regen.
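As a sketch, the switch scheme’s mapping might look something like this (the names, ranges, and regen value are all made up for illustration - this is not a real control law):

```python
def motor_command(throttle: float, reverse_switch: bool) -> float:
    """Map a throttle position in [0, 1] plus the direction switch to a
    signed power fraction (negative = reverse/regen). Illustrative only -
    a real control law needs rate limiting, interlocks, fault handling, etc."""
    if not 0.0 <= throttle <= 1.0:
        raise ValueError("throttle out of range")
    if throttle == 0.0:
        # Neutral throttle engages light windmilling regen (made-up value)
        return -0.05
    return -throttle if reverse_switch else throttle
```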

With Motorized Wheels

However, if I do add motorized wheels, then I have to consider the dual (or even triple) controls. Because now there’s 3 motors - the propeller motor, and the right and left main gear motors. So far, I’ve come up with 2 control schemes:

  • Throttle lever for each motor. This is the simplest in hardware to do. The idea is that each motor gets its own throttle. Simple enough, except each motor has different roles, and I don’t want to make it easier to conflate propeller throttle with wheel throttle. I probably won’t do this, but wanted to list it out.
  • Combined propeller throttle and wheel throttle, with brakes cutting or reducing power to the wheels. The idea is to keep the interface simple - a single throttle to control forward power. When on the ground and not preparing to take off, the throttle controls the max power going to the wheels. When a switch is thrown for flight mode, the throttle controls the propeller. I could even add additional settings to that switch - e.g. a takeoff mode that applies forward throttle to both wheels and propeller. Power to the wheels would be muxed with the brakes, to either use more regen on a wheel motor with brake applied, or even to give more power to the wheel motor opposite the one with brake applied. I can see this becoming quite complex, so I might have to model and prototype something to see how it might work.

However, as the top-level electric plane notes, I’m not going to focus on this initially.
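The brake-mux behavior from the combined scheme - cut forward power when both brakes are applied, reverse proportional to the lesser brake - could be prototyped as (purely illustrative):

```python
def wheel_power(throttle: float, left_brake: float, right_brake: float) -> float:
    """All inputs in [0, 1]; returns a signed wheel power fraction
    (negative = regen/reverse). A sketch of the brake-mux idea only."""
    if left_brake > 0.0 and right_brake > 0.0:
        # Both brakes applied: cut forward power, apply reverse/regen
        # proportionally to the lesser of the two brake inputs
        return -min(left_brake, right_brake)
    # One or zero brakes: pass the throttle through (a single brake is
    # used for point turns, so forward power stays available)
    return throttle
```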

Flight Writeups

Writeups for flights I thought were significant. Sometimes with ATC logs and other tracks.


This flight was super fun. The plan was to take off from SMO, fly out to Malibu, do some stalls along the way, and then do ground reference maneuvers over and around Point Dume. Afterwards, the plan was to head back and do a little bit of pattern work around SMO.

Here’s the flight track for the entire flight


Shortly after takeoff, we went along the coast and did first some power off stalls, and then some power on stalls. My instructor and I briefly went over recovering from a stall (lower the angle of attack by pitching down and applying full power), before doing some clearing turns and entering those stalls. With the exception of the first (in which my instructor had a slip of the tongue and told me to reduce power - should’ve been reduce AoA - and I listened to him instead of saying “nah, you got it wrong”), these were all fairly decent. I still need to work on my recovery - I pitch down too much - but he was really happy with them.

Additionally, in one of the power-off stalls, instead of going step-by-step from 30° flaps to 20° flaps, then 10°, and finally no flaps, I went straight from 30° to no flaps, because I was distracted during that stall. Obviously, this is something I need to improve upon. Yay flight sims.

Ground Reference Maneuvers

I did really well on these. Even if you don’t take into account the fact that this is my first time doing GRM in a low-wing airplane.

Also, the wind wasn’t blowing all that much, which really helped with this.

Turns Around A Point

For this, we reduced our altitude to ~1200 AGL and tried to circle Point Dume. The first few attempts, my altitude control was way off - instead of maintaining altitude ± 50 ft, I varied by as much as 100 ft - and I came quite close to breaking the at-least-1000-ft-above-congested-areas regulation. Additionally, I felt that I wasn’t able to correctly maintain the desired radius for the circle - from my perspective, it felt more like an oval than a circle. However, after 2 or 3 tries at this, I finally set up an entry I liked, and while it still didn’t feel quite right from the cockpit, the GPS track as recorded by ForeFlight shows an almost perfect circle around Point Dume. We made another circle around Point Dume before we went on to some S-Turns.

Obviously the low wing makes GRM harder - especially turns around a point - but the differing height of the surrounding terrain probably also messed with my reference points. In either case, I should practice these in a flight sim.


We did S-Turns along Zuma Beach.

I nailed these. The first turn wasn’t my best - I wasn’t setting up to be perpendicular to the beach when we were crossing it - but after that, I was on the money with these. Again, rustiness that quickly went away.

These also are much easier to do on a low wing airplane than turns around a point are.

Return to SMO

We had planned to do some more stalls on the way back, but there was quite a bit of other traffic around us, so we elected not to. Instead, we went directly to SMO.

For this, I’m going to link to this recording from live ATC. For reference, the recording starts at 4:37:50 PM PDT (23:37:50 UTC). Hereafter, I’m only going to refer to timestamps on the recording (which are embedded directly in the mp3 file as mp3 chapters).

For context, there were at least 3 aircraft returning to land from maneuvers at approximately the same time, plus another in the pattern, in addition to other kinds of traffic. ATC was busy. My radio work isn’t perfect - as you can hear, I repeat more of each instruction than I absolutely need to, I talk more-or-less in complete sentences (more than I need to), and I didn’t repeat the runway number when we were given clearance to land.

Obviously, with the amount of traffic involved, we didn’t do pattern work. Though we did go around, that was more due to a mistake on both our end and ATC’s (ATC gave the instruction to turn base when we were way too high and way too close to the field to safely make the landing - we should have immediately responded with “unable” and continued on our downwind) - see timestamp 04:20. Because we were too high, the instructor took the controls for the first and only time in the flight. He put the plane in full flaps and attempted to slip it to the field. However, he quickly realized that the plane was way too high to make it, and we went around. He handed the controls back to me at approximately 05:40.

Another thing of note: after the downwind, we were asked to make a right 360 for spacing. On the ForeFlight track, it turned out that I made an almost perfect circle around an intersection.

The rest of the flight is uneventful, with a few funny moments on the comms.


Overall, I’m quite happy with the flight, and I’m very proud of how it turned out. Things to work on, though, are:

  • Stalls: Get better at recovering; practice lowering flaps when you only have an electronic indicator of where the flaps are, not a mechanical one.
  • Turns around a point: Practice them in a flight sim, especially in a low-wing airplane. I was sitting on the wing - unless I’m in a 45° bank, I won’t see the actual point I’m supposed to be orbiting. Recognize that, and expect where the point sits relative to the wing’s leading edge to precess as you orbit.
  • S-Turns: Practice them, be perpendicular to the line as you are crossing it. That is also the only instance you should be at 0° bank.
  • Pattern Work: Actually, I’m pretty decent at it. Still need to practice landings, though.
  • Radio Work: I’m pretty good at radio work too. Could be more terse, but I’m very happy with my radio work.

2019-06-28 Pre-Solo Stage Check

This was a little bit of a wake-up call for me. As embarrassing as it is to admit, I have 70 hours and still haven’t soloed. But, finally, the stars are aligning, and I’m ready to solo (I’m actually about ready for a checkride - my flight instruction has been weird). The only thing remaining was a check with another instructor to make sure I’m not a danger to myself and others.

I don’t remember the entire details of the flight (didn’t get around to writing this up until a few days after the flight), but these are what stand out to me:

  • When I transitioned to the SportCruiser, I made sure to redevelop the sight picture needed to land safely, but I didn’t redevelop the sight pictures for other phases of flight - Vy, Vx, etc. - which led me to constantly readjust my pitch as I hunted for the correct airspeed.
  • For airplanes with control sticks, it’s much easier to control the plane when your hand is not at the top of the stick. Despite being where the trim & PTT buttons are, keep your hand lower on the stick except when using the radio or trimming the plane.
  • Apply trim when changing phases of flight.
  • Apparently, the Rotax 912 ULS likes to cruise at 5200 RPM, for cooling reasons.
  • When approaching a target altitude, lower the nose first, then reduce power.
  • For stalls, recovering is not so much “push down” as “let off the pressure”. Obviously, this assumes that you were trimmed for cruise flight, not for a stall.
  • My emergency procedures need work, massively.
    • This instructor made it painfully clear - to the point where I had trouble sleeping - that I needed to work on my emergency procedures. I feel this reflects negatively on the instructor, as there are ways to get this point across without causing me to lose sleep or otherwise feel like I’m a terrible pilot.
    • On my next flight with my regular instructor, this was all we did. 1.5 hobbs doing engine-out work. I now feel much better about this.

Overall, I did pass that stage check, but, obviously, I felt pretty terrible coming out of it.

2019-07-26 First Solo

This has been a long time coming. According to my logbook, I have 73 logged pre-solo hours. That’s almost absurdly high. But, I did it. So... whatever.

Flight track here.

ATC Logs here, which starts at 0122:28Z on 2019-07-27.


Before I soloed, my instructor and I did a couple laps in the pattern. Then we came back to the FBO, he got out, and I was going to go do at least 3 laps in the pattern.

I started off by instilling great confidence in him by starting the engine start checklist from the wrong point. Essentially, I skipped turning on the electrical system and went straight to turning on the engine. I realized my error when the engine didn’t start after I turned the key to engage the starter. I did get it started after I followed the checklist from the right starting point1. Once I got it started, copied the weather, etc., I advised ground that I was a student solo pilot and was ready to taxi. At the runup area, I snapped a photo of the empty right seat before proceeding to do the runup. Something I noticed is that I needed to have the canopy partially open to visually verify that the rudder pedals controlled the rudder as expected - normally we have the canopy partially open anyway for cooling reasons, but I had left it closed because, with only one human in there, it didn’t heat up nearly as quickly. There were no issues with the rest of the runup; I informed ground that I was done and was allowed to taxi to the runway.

First Lap

This was fairly uneventful, despite being a monumental achievement for me. As everyone notes, the first thing I noticed was how much better the plane performs - even when you know the plane should perform differently without the extra weight, it’s still surprising to experience the difference that ~150 pounds less weight makes. I nailed the landing, and taxied back to the start of the runway for the next lap. As I was taxiing, I noticed my instructor in the observation area, waving to me and celebrating my success - which made me feel super proud and happy about my achievement.

Second Lap

When my instructor gave me the plan for my first solo, he said that we’d first do 3 laps in the pattern, with 1 go around, then he’d get out and have me do 3 laps in the pattern. I either misinterpreted what he said, or I misremembered. I recalled 3 laps and a go around as what I should do with my solo. Because of that, before I even took off, I had decided to do a go around for this lap. The upwind and crosswind portions of the flight went well, but I probably was too high when I turned base, and I definitely wasn’t properly set up to land when I was on final - to the point where if I wanted to hit the numbers, I would have had to dump flaps and slip in to maybe make the landing. In retrospect, I’m unsure if I had genuinely screwed up the approach, or if I had subconsciously screwed it up because I knew I’d be going around anyway.

Third Lap

As I was entering the upwind portion of this lap, I was advised by ATC to extend the upwind, make right traffic, and turn at the shoreline. For spacing reasons. Weird flex, but ok. Just after I entered the crosswind portion I was asked to extend my downwind and that they’ll call my base. I repeated this instruction, but I was also confused - I had just entered crosswind, did they misspeak and instead want me to extend crosswind? I called up tower and asked for clarification. I think this is when they remembered that I’m a student. They assured me that yes, they meant downwind, and after I turn downwind to extend until they call my base. This ended up being the longest downwind phase of my life. I recall thinking that this would be an excellent time to take a photo of the empty right seat - which I didn’t do because I was busy flying2. I did play with trim and had it flying straight and level for a bit while I monitored traffic. Eventually I passed abeam of traffic on final, and shortly thereafter I got clearance for the option. I turned base and landed without much incident. The landing wasn’t particularly great - I started to level off earlier than I needed to, caught that but didn’t correct enough and ended up floating down the runway. I pulled off and contacted ground, then went back for the fourth and last pattern.

When I landed and was taxiing back to start my fourth lap, I was able to watch my instructor dance because he was so proud and happy of how I handled that.

Fourth Lap

The fourth lap was also uneventful (just how I like them). The only thing of note is that this was easily the worst landing I had done that evening. I wanted to do another lap just to “redeem” myself and show that I can actually land by myself. I basically did the same mistake - I leveled off too early and ended up floating down the runway.


I went into this very apprehensive. I came out thinking both “that was a monumental achievement” and “that wasn’t so bad, let’s do that again”. I’m very proud of myself, as I should be. The apprehension wasn’t “oh crap, I’m going to kill myself” - it was that the idea of having no one to catch any mistake I might make was daunting, even with as many hours as I have, where I know I’m going to do fine.


  1. I skipped the correct starting point because I was apparently thinking that the start engine checklist should be under its own header. I might create my own solo checklist that excludes some of the things you only do when you have passengers. I should create my own checklist in general to clarify/simplify things.


  2. That would be hilarious: first time taking the plane for a spin without supervision and I decide to text and fly. Which, funnily enough, is not illegal in VFR flight.

2019-08-01 Cross Country: KSBA

We flew from Santa Monica to Santa Barbara, and then back again. Starting up my cross-country1 training again.

Was asked to put together a paper flight plan, which I made the mistake of finishing a few days prior to the flight2. Oops. By luck, the winds happened to have not changed much since I made the initial calculations. But still. I kinda want to write a tool that takes in the GPS coordinates for each section of the flight, fetches the relevant winds aloft data, and outputs a filled-out flight plan for you. Of course, plenty of these exist already, so 🤷🏽‍♀️.
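The core of such a tool would just be the wind-triangle math from the paper flight plan. A minimal sketch of that math (the function name and example numbers are my own illustration, and there’s no winds-aloft fetching here):

```python
import math

def wind_triangle(true_course, tas, wind_dir, wind_speed):
    """Given a true course (deg), true airspeed, wind direction
    (deg, the direction the wind is FROM), and wind speed, return
    (wind correction angle in deg, true heading, groundspeed)."""
    # Angle of the wind relative to our course.
    awa = math.radians(wind_dir - true_course)
    # Wind correction angle, from the law of sines.
    wca = math.asin(wind_speed / tas * math.sin(awa))
    heading = (true_course + math.degrees(wca)) % 360
    groundspeed = tas * math.cos(wca) - wind_speed * math.cos(awa)
    return math.degrees(wca), heading, groundspeed

# Direct 20 kt headwind on a 360 course: no correction, GS drops by 20.
wca, heading, gs = wind_triangle(360, 100, 360, 20)
```

Per-leg time enroute is then just leg distance divided by the computed groundspeed.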

Ok, so my flight plan happened to be good. Called up flight services and asked for a standard briefing3. Wrote it down, determined that there should be no issues with the flight, we got in and went about our flight.

On the way up, things were mostly fine. It was quite hot, and the cylinder head temperatures were a bit high. To address this, instead of climbing directly for our target altitude (a measly 4500 feet), we leveled off at 3500 feet and ran the engine at 5200 RPM in order to maximize cooling. Once the CHTs were back in the green, we resumed our climb to 4500. This occurred roughly when we were past Malibu.

One thing I learned on the outward leg is that there is restricted airspace around Point Mugu Naval Airbase from the surface to space. I missed this completely on the sectional - to the point where I think that the previous time I flew up to Santa Barbara, I busted that airspace. Essentially, the way around this is to fly east of 101 when close to Point Mugu. Oops.

We continued on, tracking each leg. We were on time with the calculations I did prior to the flight - within 30 seconds of the calculated time.

As we approached Santa Barbara, approach asked us to stay east/north of 101 (inland). Which keeps us out of the approach path of jets for runway 25. When we got closer, tower cleared us for runway 15 Left, and asked us to report a 2 mile right base - continue with 101 off our left wing, and report 2 miles out. We were cleared to land on 15 Left when we were about 4-5 miles out, so we didn’t even have to report the 2 mile right base4.

Santa Barbara’s 15 Left runway is not wide - at 75 feet, it’s half the width of Santa Monica’s runway 21. Primary training warns us that thinner runways mess with your sight-picture - they lead you to think you’re higher up than you actually are. I knew this, and perhaps overcompensated. I leveled out earlier than I should have, which made this the hardest landing I’ve made in months. Certainly the hardest landing I’ve had with my current instructor. At the very least, I didn’t break anything, and I didn’t hurt either of us.

We spent a while on the ground - mostly so that my instructor could set up the GPS with the flight plan. He wanted me to use a paper flight plan on the way out, but was ok using other methods on the return flight. We copied our flight following clearance, and were told to make a straight out over the water - to get out of their airspace as soon as possible. We did that, flew over some oil rigs, and were then cleared to resume our own navigation.

As we were approaching Oxnard (one of the points to fly over), we were advised of traffic above us heading the same direction. We caught sight of them, and it appeared we were flying faster than them. Which surprised both of us. We tried to keep them in sight, but at some point they ended up behind us and neither of us could position ourselves to keep them in sight without making a turn. Eventually, we caught them behind and to the left, sufficiently far away that we felt safe to climb up to our target altitude.

Afterwards, the flight continued without much incident. We landed at Santa Monica maybe a little bit overtime. For our next cross country, I’m going to book the plane for 4 hours.


  1. Perform performance/weather calculations as close to departure time as possible. Have general flight plans ready in case weather makes one route more desirable than another.

  2. Be better at examining airspace and recognizing the less common airspaces. In general, practice reading sectionals and TACs.

  3. Practice landing at other (specifically, differing widths) runways.

  4. Always watch for traffic. Be better at spotting traffic. There’s a reason we’re advised for 10º sweeps.

  1. A cross-country flight, according to the FAA, is any flight with a stop at least 50 nautical miles away (straight-line distance). The FAA uses the “large amount of land” definition of “country”, not the “nation” definition of “country”.

  2. It’s obvious in hindsight, but, really. For a flight plan to be valid, you need an accurate winds aloft/temperature forecast. These forecasts are only valid/useful for a few hours, so obviously you get better data if you delay retrieving them for as long as possible.

  3. Call 1-800-WX-BRIEF: “Hello, I’d like a standard briefing. I’ll be flying from $START to $END, in $TAIL_NUMBER, at $CRUISE_ALTITUDE.” Mention you’re a student if you want them to make life easy for you.

  4. I’m 100% certain the “report 2 mile right base” instruction was in case we were flying a decently fast plane. However, the sportcruiser cruises at under 100 knots - slower than a Cessna 172. So we ended up being much slower than expected, which I suspect is why they gave us the landing clearance when we were that far out.

Left Turning Tendencies

Single-engine airplanes in particular are susceptible to left-turning-tendencies, especially at low airspeeds. You’re typically taught 4 of these:

  1. Torque (Newton’s 3rd law from the engine)
  2. P-Factor (Asymmetric propeller thrust)
  3. Spiraling Slipstream (Aerodynamics)
  4. Gyroscopic Precession

These combine to produce a left turning tendency, all because the propeller is turning clockwise (relative to the pilot).

Cessna Chick had an excellent article on left turning tendencies

Standard disclaimer for my other flying notes: I’m not a CFI. Hell, as of this writing, I’m not even a private pilot. Don’t take this as flight instruction.


Torque

Pretty simple: Newton’s third law states that for every action, there’s an equal and opposite reaction. The engine applies a torque to spin the propeller, so an equal torque acts on the airplane in the opposite direction. Because the aircraft has much greater mass (and rotational inertia) than the propeller, the propeller spins rapidly whereas the plane... essentially doesn’t, unless you’re at or near full power.


P Factor

P (or Propeller) factor is, IMO, misnamed. When the airplane is pitched up, the descending blade (the right side, with a clockwise-spinning propeller) meets the air at a higher angle of attack than the ascending blade, so it produces more thrust. This extra thrust on the right side causes the plane to yaw towards the left.

Note that this is most noticeable at higher pitch angles.

Spiraling Slipstream

Now we turn towards trying to imagine the airflow. Because the propeller is turning, the wind is also turned by it (the propeller pulls the wind backwards and induces a rotation in the same direction as the propeller). This slipstream now rotates around the aircraft, until it hits a flat surface (typically explained as the rudder), which forces either a bank (wing or elevator), or a yaw (rudder).

Note that this really only matters at slow speeds and high power - at faster speeds, the slipstream is stretched out behind the plane and has much less effect.

Gyroscopic Precession

Only really a thing in tailwheels, or while trying to change pitch angle.

Relevant xkcd


Basic flight maneuvers. Mostly notes from Chapter 4 of the Airplane Flying Handbook.

Standard disclaimer for my other flying notes: I’m not a CFI. Hell, as of this writing, I’m not even a private pilot. Don’t take this as flight instruction.

Defined Minimum Maneuvering Speed

This FlightChops video is... perhaps the most valuable FlightChops video I’ve ever seen. In it, Dan Gryder talks about applying a Defined Minimum Maneuvering Speed as the minimum airspeed to maintain in every phase of flight except for short final. He defines it as 1.404 * Vs1. For example, on a sportcruiser Vs1 is 39 kts. 39 knots * 1.404 = 54.756, or 55 kts. So, don’t fly slower than 55 kts in a clean condition.

The 1.404 number comes from applying a 1.3 buffer on top of Vs1, plus an additional 8% on top of that, to account for a potential 30 degree bank.
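That arithmetic is trivial to encode. A quick sketch (the function name is mine; the 39 kt Vs1 is just the sportcruiser example above):

```python
def dmms(vs1_kts):
    """Defined Minimum Maneuvering Speed: a 1.3x buffer on Vs1,
    plus an extra 8% to account for up to a 30-degree bank."""
    return round(vs1_kts * 1.3 * 1.08)  # 1.3 * 1.08 = 1.404

# Sportcruiser: Vs1 of 39 kts gives a DMMS of 55 kts.
print(dmms(39))
```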


Stalls

Stalls occur when the angle of the airflow relative to the chord of the wing exceeds a certain “critical angle”. When this critical angle is reached, the lift generated by the wing drops to near-zero1.

Relatedly, stall horns are devices that detect the airflow over the wing and warn the pilot when it indicates the wing is in or approaching a stall.

Power On Stall

A power on stall (also called a departure stall) is the kind of stall you might encounter immediately after takeoff or a go around. For this, you’d be in a takeoff configuration. (flaps as specified, gear down, etc.) Though, the Airplane Flying Handbook also recommends practicing power on stalls in a clean configuration (flaps and gear retracted). Set power to maximum (hence the name).

Enter a climb at lift-off speed - this is where you’d apply the desired power setting. Raise the nose during the climb, enough to start bleeding off airspeed. Maintain straight flight during this (it’s really easy to enter a turning stall when trying to enter a power on stall). Eventually, you should enter a stall.

At this point, you’re going to:

  1. Pitch down. This’ll decrease your angle of attack.
  2. Keep the wings level with ailerons & rudder.
  3. Apply full power (it might not have been full anyway).

Once the stall is exited, return to the desired flightpath (climb or straight and level), and return to the appropriate power setting.

Power Off Stall

A power off stall is the type of stall you might encounter during a landing. For this reason, you start in a landing configuration (flaps on, gear down, carb heat applied, throttle to near-idle).

Using just the stick/yoke, you keep the plane level - essentially, you’re pulling up to drop airspeed. As you do this, the plane will slow down. It’s important to keep the plane flying straight (otherwise you get a turning stall). Eventually, you’ll be going too slow and enter a stall.

At this point, you’re going to do three things in quick succession, all with the goal of decreasing the angle of attack.

  1. Nose down
  2. Level wings
  3. Apply full power. This might require some right rudder to counteract the left-turning tendencies.
  4. Once the flying speed is back up, level out and climb back to starting altitude. Keep in mind that you don’t want to have dropped too much, because this could have actually been a landing, and dropping into the terrain sounds like not a good time.
  5. In the climb, go back to a flying configuration - gear up, raise flaps, trim as needed.

Ground Reference Maneuvers

Ground Reference Maneuvers are necessary for all sorts of flying.

For these, you should enter at maneuvering speed, and never go above a 45° bank angle. Ground Reference Maneuvers should be established from the downwind position, and you should always check/clear the area with two 90° clearing turns prior to performing a GRM.

For the most part, these are all the same thing - keep multiple reference points in sight, make small corrections for the wind, etc.

Some things to keep in mind:

  • Bank into the wind. That is, when in a crosswind that is blowing away from the point, you want a higher bank angle (because you need to compensate for the wind blowing you away from the point). Similarly, when the crosswind is blowing into the point you’re orbiting, bank less now that the wind is helping you in your turn.

  • Similar idea as above, have a slightly higher bank angle in the downwind - the wind is blowing you tangent to the turn, so you’re going to be in this portion of the turn less than in the upwind part of the turn. You will have the steepest bank angle when you have a direct tailwind.

  • In a low wing aircraft, the wing might conceal the object you’re supposed to be orbiting. This is fine, you should have other points you’re referencing anyway.
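The "steepest bank with a direct tailwind" point falls out of the turn geometry: to hold a constant radius around the point, the required bank grows with the square of your groundspeed, and groundspeed is highest on the tailwind side. A sketch of that relationship (the speeds and radius are made-up illustration numbers):

```python
import math

def bank_for_radius(groundspeed_mps, radius_m, g=9.81):
    """Bank angle (deg) needed to hold a level turn of the given
    radius at the given groundspeed: tan(bank) = v^2 / (g * r)."""
    return math.degrees(math.atan(groundspeed_mps ** 2 / (g * radius_m)))

# Same 500 m circle, 70 kt TAS, 10 kt wind (1 kt is about 0.514 m/s):
downwind = bank_for_radius(80 * 0.514, 500)  # ~80 kt over the ground
upwind = bank_for_radius(60 * 0.514, 500)    # ~60 kt over the ground
```

The downwind side needs a noticeably steeper bank than the upwind side for the same circle.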

  1. In a stall, the wing is still generating lift - this is why the plane doesn’t accelerate downward at 9.8 m/s^2 when in a stall - it’s just not generating enough to keep the plane from descending, period.

Noise Abatement

It’s loud, here’s the noise abatement procedures for a few of the airports I’ve flown out of.

I’m not a CFI. Hell, as of this writing, I’m not even a private pilot. Don’t take this as flight instruction.

Hawthorne Municipal - khhr

Available here, we have:

  • Takeoff at Vx (best angle of climb). (Normal climb-out is at Vy)
  • Upwind to at least the end of the runway.
  • Turn crosswind at 500 ft above field elevation OR by Hawthorne mall, whichever comes first. (Normal crosswind turn is at 800 ft above field)
  • Fly downwind over El Segundo BLVD. This means that your downwind will be much closer to the field than it otherwise would be.

Note that this is voluntary - but you should still follow it because aviation is already hated by the general public.

Santa Monica - ksmo

Available here. This is:

  • Takeoff runway 21, fly over the golf course (turn 10 degrees left at end of runway, then right to fly over the golf course).
  • Don’t turn crosswind until after you fly over Lincoln.
  • Turn base at/around I-405/When ATC tells you.

Also there are night procedure restrictions:

  • Monday through Friday, no engine starts or takeoffs between 11 pm and 7 am the following day.
  • Weekends, no engine starts or takeoffs between 11 pm and 8 am the following day.
  • That is, Friday evening, no engine starts/takeoffs from 11 pm until 8 am the next day.
  • Similarly, on Sunday evening, no engine starts/takeoffs from 11 pm until 7 am the next day.
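The restriction reduces to a small rule: curfew starts at 11 pm every night and ends at 7 am before a weekday, 8 am before a weekend day. A sketch of that logic (my own encoding of the bullets above, not anything official - check the actual ordinance before relying on it):

```python
from datetime import datetime

def engine_start_allowed(dt: datetime) -> bool:
    """True if the night restriction permits an engine start/takeoff
    at this local time: no starts from 11 pm until 7 am on weekday
    mornings, or until 8 am on weekend mornings."""
    if dt.hour >= 23:
        return False
    # Saturday (5) and Sunday (6) mornings open at 8 am.
    opens = 8 if dt.weekday() >= 5 else 7
    return dt.hour >= opens

# Monday 7:30 am is fine; Saturday 7:30 am is not.
print(engine_start_allowed(datetime(2019, 8, 5, 7, 30)))
print(engine_start_allowed(datetime(2019, 8, 3, 7, 30)))
```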

Unlike Hawthorne, there’s an actual ordinance behind the Santa Monica noise abatements. Meaning that violating them is something you really don’t want to do.


Required Equipment

FAR 91.205 lists the required equipment for all flights.


VFR Day

  • Airspeed indicator
  • Altimeter
  • Magnetic direction indicator (compass)
  • Tachometer
  • Oil pressure gauge for each engine using a pressure system
  • Temperature gauge for each liquid cooled engine
  • Oil temperature gauge for each air cooled engine
  • Manifold pressure gauge for each altitude engine
  • Fuel gauge indicating the quantity of fuel in each tank
  • Landing gear position indicator, if retractable.
  • If certified after 1996-03-11, red and white anticollision light system.
  • If over water, and beyond glide distance: approved flotation gear, and at least one flare.
  • Approved safety belt for each occupant over 2 years old
  • If made after 1978-07-18, shoulder harness or restraint for each front seat.
    • If made after 1986-12-12, shoulder harness or restraint for all seats.
  • An ELT, as required for 91.207

Or, as a mnemonic (retrieved from Ask A CFI), TOMATOE A FLAMES:

  • Tachometer (for each engine)
  • Oil Pressure Gauge
  • Magnetic Direction Indicator (magnetic compass)
  • Airspeed Indicator
  • Temperature Gauge for each liquid cooled engine
  • Oil Temperature Gauge
  • Emergency equipment (when beyond power-off gliding distance over water): pyrotechnic signaling device, flotation device
  • Anti-collision Lights
  • Fuel Gauge for each tank
  • Landing gear position indicator
  • Altimeter
  • Manifold Pressure Gauge for each engine
  • Emergency Locator Transmitter
  • Safety Belts and Shoulder Harnesses

VFR Night


  • Fuses
  • Landing light, if operated for hire
  • Anti-collision light (beacon and/or strobes)
  • Position Lights – Nav Lights (Red on the left, Green on the Right, White facing aft)
  • Source of electricity (battery, generator, alternator)
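Since the night list is additive on top of the day list, a checklist tool only needs set subtraction. A sketch (the item names are my own shorthand for the 91.205 items, not regulatory language, and the day list is abbreviated):

```python
# Abbreviated required-equipment sets; shorthand names are mine.
DAY_VFR = {
    "tachometer", "oil pressure gauge", "magnetic compass",
    "airspeed indicator", "altimeter", "fuel gauges",
    "oil temperature gauge", "safety belts", "ELT",
}
# Night VFR requires everything for day, plus the extras.
NIGHT_VFR = DAY_VFR | {
    "fuses", "anti-collision lights", "position lights",
    "source of electricity",
}

def missing_equipment(installed, night=False):
    """Return the required items that aren't installed."""
    required = NIGHT_VFR if night else DAY_VFR
    return sorted(required - set(installed))

# A plane equipped only for day VFR is missing the four night items.
print(missing_equipment(DAY_VFR, night=True))
```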


Spins

Spins are essentially what happens when only half the plane is stalling. Or rather, half the plane is stalling significantly more than the other half. Because half the plane has more lift than the other, this forces the plane into a bank, very quickly followed by a pitch down. Obviously, this is dangerous.

Spins happen when a stall occurs, with a yaw on the plane (not coordinated flight).

Standard disclaimer for my other flying notes: I’m not a CFI. Hell, as of this writing, I’m not even a private pilot. Don’t take this as flight instruction.

Recovering from a Spin

Essentially, this is stopping the rotation, and unstalling the wing.

  1. Power to idle
  2. Ailerons neutral
  3. Full opposite rudder
  4. Push down (exit the stall)
  5. Neutral rudder after spin stops
  6. Return to level flight

Demonstrating a Spin

Approach a spin similar to a power-off stall. This makes sense, because a spin will send you hurtling toward the ground, and it’s better to do that with minimum power applied.

As the plane approaches the stall, smoothly apply full rudder in the direction of the desired spin rotation, while applying back pressure (pull up) on the elevator. The airplane should yaw in the direction of the rudder and enter the spin - the “Incipient” phase. At this point, spin recovery techniques should be initiated.


Phases of a Spin

  • Entry is pretty obvious
  • Incipient is just after entry - the plane is spinning, but it’s not yet following a vertical flightpath
  • Developed is when the plane is heading more-or-less straight down.
  • Recovery is when the rotation ceases and the stall is exited. It may take a few turns to exit recovery phase, depending on the aircraft.

Takeoffs and Landings

Taking off is optional. Landing is mandatory.

Going to describe the main styles of takeoffs and landings as taught in primary (private) training.



Takeoffs

A “standard” takeoff looks like this:

  • Get on centerline of runway
  • Apply throttle, ensure engine instruments are in green
  • Release brakes, accelerating
  • Watch for airspeed to increase (to become “alive”)
    • If the airspeed doesn’t come alive, then abort the takeoff.
  • At rotation speed (Vr), begin rotation. (or, you know, when the plane starts to take off)
  • Climb out at Vy (or Vx, if noise abatement requires it). Note that Vy is usually a higher airspeed than Vx
  • At 500 AGL, reduce flaps to 0° if they weren’t already at 0°.

Short Field

Usually practiced as “short field with a 200 ft obstacle”. This is where we get to practice STOL with a non-STOL airplane.

  • Set flaps to 10°.
  • Get on the centerline, as far to the end of the runway as possible (use all available runway)
  • Hold brakes and apply full power. Release brakes.
  • Watch for airspeed to come alive.
  • Rotate at Vr
  • Climb out at Vx.
  • When at 200 AGL (over that 200 ft obstacle), pitch for Vy
  • Reduce flaps to 0° when at 500 AGL.

Soft Field

E.G. a grass field or a sandbar. Or really, any field that’s not concrete or tarmac.

  • Set flaps to 10°.
  • As you take the runway, apply full back pressure on the elevator, do not come to a stop on the runway.
  • Gradually apply full throttle.
  • Keep nosewheel off the ground, but don’t tailstrike.
  • Rotate at Vr
  • Stay in ground effect until you’re at Vy
  • Climb out at Vy
  • Reduce flaps to 0° when at 500 AGL.


Landings

Short Field

The point of a short field landing is to get the plane down as early as possible (on your landing mark), without floating or taking more space than necessary.

  • Once you are on the runway:
    • Reduce flaps to 0°
    • Apply brakes as necessary (don’t destroy the brake pads if you don’t have to)
    • Once flaps are at 0°, pull back on the elevator - use drag as much as possible to reduce speed.

Soft Field

The idea here is to essentially try to keep the plane from landing as much as possible - don’t want it to catch on a sandbar or clump of dirt and cause the plane to flip or something.

  • Don’t idle the engine as you come in to land
    • Instead, wait until you’ve touched the ground to idle the engine
  • Keep the nosewheel off the ground as long as possible
  • Roll off the field without applying brakes too much.


Understanding the joke behind this.


As simple as calling 1-800-WX-BRIEF.

There are 4 types of briefings: standard, abbreviated, outlook, and in-flight. In almost every case, when you call flight services, you’re going to ask for a standard brief.


Food is good.

I follow a mostly vegetarian diet, but I do enjoy meat on occasion.


I’m not a great cook, but I try my best.

Recipes from other people



Grilled Cheese



  • 1 oz unsalted butter
  • cheese (american singles, nom!)
  • Bread


Put cheese between two slices of bread. Melt butter in a small-ish pan over medium-low (favor low) heat. Once melted, put the sandwich in the pan. Heat for 3-4 minutes, flip, then heat for another 3-4 minutes.


For extra deliciousness, enjoy the sandwich with chili.

Instant Ramen

The secret to decent instant ramen is to not use the ultra-cheap maruchan/cup noodles brand ramen. At the very least, get the instant ramen that’s stocked in the asian aisle. This’ll run at ~$1/packet, or about 10x more expensive, but you get way higher quality ramen out of it. If you go to an asian market, you’ll be able to get even better instant ramen there.

The other trick is to use at most one flavoring packet per two noodle packets (a 2:1 noodle:flavoring ratio). Otherwise it’s way too salty. Even better is to skip the flavoring packet entirely and create your own flavoring.

So, bare minimum, we got:

  • Bring the liquid to a boil on the stove. At the most basic, this is water. If you’re feeling extra, use a broth. Bonus points for using home-made broth as that’s the cheapest.
  • Once the liquid is boiling, add the ramen. 1 packet per person is a good serving size.
  • Cook the ramen for 3 minutes. Drain the liquid from the ramen.
  • Add seasoning. This can be as simple as the flavoring packet, or something even better.
    • I’ve found pepper + italian seasoning to be pretty decent.
    • Also a little bit of salt.
  • Serve.

In addition to the above, you can also add some vegetables and other stuff to improve the ramen, such as:

  • sauteed mushrooms
  • seaweed
  • sriracha
  • green onions

And probably more, but this is all we’ve tried.

Mac and Cheese

Really, this is cheesy pasta, because you don’t have to use macaroni.

This is infinitely better than box mac and cheese, and just as simple.

(By the way, for box mac and cheese, the best is Annie’s white cheddar shells).


Fairly simple

  • Cheese (whatever you have is fine - it needs to be shredded before it goes in, or even better: grated.)
  • Milk (any kind, even the vegan milks)
  • Pasta (shells, rotini, though I guess any kind of pasta should be fine, I like having smaller noodles, though)
  • Butter (4 oz or so)


  • Fill a medium-sized pot or saucepan with water, remembering to salt it enough to taste like seawater.
  • Get it to a boil, then add the pasta.
  • Boil the pasta for however long the packaging says.
    • While this is going, prepare the rest of the ingredients.
      • get the right amount of butter.
      • shred or grate the cheese.
      • get the milk out.
  • Drain the pasta, put the butter in the same pan and get it to melt.
  • Add the cheese and milk. Cheese first.
    • I eyeball the milk - pouring for about half a second or a second is usually enough. You’re aiming for about 2 oz.
  • Stir until everything comes together. There’s a decent chance there’s not enough heat left to melt the cheese entirely, that’s fine.
  • Add the pasta back in and stir.


Roasted Potatoes


  • Potatoes. Not baking kinds, you want “harder” potatoes. You want enough to fill a “serving” bowl.
  • Olive Oil, about 2 to 4 oz.
  • Salt
  • Pepper
  • Other seasoning, if you want


Wash the potatoes. Of course.

You’ll need a bowl for mixing the oil and chopped potatoes, and a cookie sheet. Cover the cookie sheet in aluminum foil, and spray it with non-stick spray.

  • Preheat oven to 450 F
  • Pour the oil and seasoning into the mixing bowl, and mix them.
  • Chop the potatoes into cubes. About a quarter to half an inch on each side or so is fine. You’ll figure it out as you make these.
  • After every 2 potatoes, put the cubes in the mixing bowl, and mix them enough so that each cube is coated in the oil. Then put them on the cookie sheet. Potatoes should only be in a single layer.

Put the potatoes in the oven for 20 minutes or so. I set 3 timers at 18 minutes, 20 minutes, and 22 minutes. Check on the potatoes as each timer goes off (use the oven light, you don’t have to open the oven up). They will finish cooking after you pull them out, so if they look “done”, then it’s too late. They should look like they’re starting to finish.


Simple Soup

Soups are super easy. You can make a soup simply by tossing a bunch of vegetables into a pot and let them boil for 20 minutes. This is a simple vegetable (or beef) soup I like to make.

The only real way to screw up soup is to let it sit/cook for too long. Mushy soup isn’t good. I’ve learned that when I do make soup, I need to commit to finishing it by the next night, or else it’s not something I’m going to enjoy finishing.

Ingredients (Vegetarian)

  • Beans (Pinto, Kidney, or Red) ~1lb
  • Assorted vegetables, here’s what I enjoy:
    • potatoes (use the smaller red/yellow potatoes, don’t use russets/baking potatoes. You want a “waxy” potato)
    • carrots
    • celery
    • bell peppers
    • radishes
    • onions
  • Vegetable broth (I use 2 16oz containers)
  • Noodles/Pasta. Only do 1 package, here’s what I’ve used/liked
    • Egg noodles are good
    • shells (the smaller the better)
    • rotini
  • Seasoning to taste. I typically do pepper and italian seasoning.

Ingredients (Non-Vegetarian)

This is the same as the vegetarian, with the following replacements.

  • Ground Beef (~1lb) instead of beans
  • Beef Broth instead of vegetable broth


You’re going to use medium heat for most of this, unless otherwise specified.

If you’re making the non-vegetarian variant:

  • Season the meat, roll into balls. Cook these in the bottom of the pot with no liquid until they’re entirely brown on the outside.
  • Pour in the first container of broth.

If you’re making the vegetarian variant:

  • Pour in the beans + broth at the same time.


In order of density of the vegetables (denser vegetables take longer to cook), cut and add them to the pot.

Add seasoning as desired.

Cook, covered, for about 10 to 15 minutes, or until the vegetables are close to being done.

Add the noodles, more seasoning, and the other broth container.

Cook, covered, for another 10 minutes or so.


Spanish Rice

This recipe is the easiest Spanish rice recipe I’ve ever made. It might not be terribly authentic, but it’s easy and it’s good.


Hardware-based projects I want to build.

Wearable Computer

Look like an 80s computer geek! Inspired by this project on adafruit, the idea is to build a much sleeker version of this. Potentially for use while cycling, flying, or otherwise as I think about it.


Phoebe is an ARRMA-RC Raider BLS rc car that I’ve spent the past 4 years off and on making semi-autonomous.


Phoebe came stock with an arrma-rc BLS ESC & Motor combination. This is a sensorless brushless motor, which is not ideal for a robot, and I’ve been on the hunt for a suitable sensored replacement.

From the specs, the motor is:

  • Shaft Length: 13 mm
  • Shaft Diameter: 3 mm
  • Motor Speed: 4000 kv

Any replacement motor, to fit on the car, needs to match the physical dimensions. To keep a similar performance (I don’t care to replace the gearbox - I might as well buy a new platform if I do so), I also want it to have a similar speed as the stock motor.
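Those constraints are easy to encode as a filter over candidate motors. A sketch (the candidate motors and the 15% kv tolerance are made up for illustration):

```python
# Stock motor specs, from the section above.
STOCK = {"shaft_len_mm": 13, "shaft_dia_mm": 3, "kv": 4000}

def is_suitable(motor, kv_tolerance=0.15):
    """A candidate must match the stock shaft dimensions exactly
    and be within ~15% of the stock motor's kv rating."""
    return (
        motor["shaft_len_mm"] == STOCK["shaft_len_mm"]
        and motor["shaft_dia_mm"] == STOCK["shaft_dia_mm"]
        and abs(motor["kv"] - STOCK["kv"]) <= STOCK["kv"] * kv_tolerance
    )

# Hypothetical sensored candidates:
candidates = [
    {"name": "A", "shaft_len_mm": 13, "shaft_dia_mm": 3, "kv": 4100},
    {"name": "B", "shaft_len_mm": 16, "shaft_dia_mm": 3, "kv": 4000},
    {"name": "C", "shaft_len_mm": 13, "shaft_dia_mm": 3, "kv": 2800},
]
suitable = [m["name"] for m in candidates if is_suitable(m)]
```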

Control System Mounts

Attaching to Phoebe

There are 4 screw holes - 2 on each side. They are symmetrical.

  • It appears that the first set is 24 mm from the back.
  • The next set is 83 mm from the first.
  • Allow at least 5 mm vertical clearance from the screws to the bottom of the control system platform
  • From the screw holes to the bottom of the chassis for Phoebe is 34 mm.
  • m3 screws.
  • The holes aren’t threaded - use nuts + lock washers on the other side to hold them in.
  • The batteries for the electronics will likely be 16mm thick.


  • The stock ESC is 54 mm long, 38 mm wide, and 21 mm tall.
  • The (m3) screws are 46 mm apart (center to center)
    • 17 mm from one edge
    • 21 mm from the other (this one is the side with the cable leading to the switch)
  • It should be mounted opposite the side facing the micro USB port on the raspberry pi

PWM Servo Driver

Phoebe uses a SunFounder PCA9685 PWM Servo Driver to interface a raspberry pi 3 to the servo and ESC.

  • It is 62 mm long, 26 mm wide.
  • The (m3) screw holes are 19 mm apart across the width (centers inset by 3.5) and 56 mm apart along the length.

Raspberry Pi 3 B+

  • Mechanical drawings for a raspberry pi 3 b+
    • Length: 85 mm
    • Width: 56 mm
    • Screws:
      • m3 screws
      • 49 mm width between screw holes.
      • 58 mm length between screw holes
      • Inset 3.5 mm from width of board.
      • Trailing screws are inset 3.5mm from length of board.
    • The micro USB/Power input is 8 mm wide, and the center is 10.6 mm from the edge of the board.
      • Doing some math, the center of the power input is 4.6 mm from the nearest screw hole - the nearest edge of it starts 0.6 mm from that screw hole.
      • It is ~1.5 mm tall. The hole in the side of the mount should allocate 15 mm width + 10 mm height for the cable.
      • In other words, the hole for the mount should start just past the screw holes and continue for 15 mm. It should allow sufficient height plus/minus for the micro usb cable.
    • The USB A ports extend approximately 2 mm beyond the length of the board.
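As a sanity check when designing a mount, the hole centers can be computed from the numbers above. A minimal Swift sketch (the coordinate origin and variable names are my own assumptions):

```swift
// Raspberry Pi 3 B+ screw hole centers, using the dimensions listed above.
// Assumed origin: the board corner at the end nearest the trailing screws;
// x runs along the 85 mm length, y across the 56 mm width.
let boardLength = 85.0
let inset = 3.5       // holes are inset 3.5 mm from the edges
let spacingX = 58.0   // hole spacing along the length
let spacingY = 49.0   // hole spacing across the width

let xTrailing = boardLength - inset   // 81.5 mm
let xLeading = xTrailing - spacingX   // 23.5 mm in from the far end
let ys = [inset, inset + spacingY]    // 3.5 mm and 52.5 mm

let holes = ys.flatMap { y in [(xLeading, y), (xTrailing, y)] }
print(holes)
```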


My mobile OS of choice.


Spreadsheet program for iOS and macOS - the iOS version


This apple support page shows a list of current shortcuts available in the iOS version of numbers.


Renaming a Sheet

This is surprisingly non-obvious. You double-tap it to select the text, then you can edit it from there. I thought there would be something involving a long-press, but that only allows you to move the sheet around. Similarly, a single-tap brings up an edit menu that allows you to cut (remove and place in pasteboard), copy (place in pasteboard), duplicate, or delete the sheet.


Renting Checklist

Things to check/verify when checking out a place to rent. Borrowed from/combined with this lifehacker article, and this comment on the article, which is much more useful.

HVAC & Utilities

  • Central or Wall AC
  • Heating? What’s that like?
  • Water Heater? How many apartments share it? Where is it?
  • Water
    • Verify all sinks, showers, and toilets work.
    • Check how hot they get and how long it takes sinks & showers to get hot.
    • Verify shower pressure
    • Verify no toilet backup
  • Laundry


  • Shared? Or are we the only ones with access?
  • What’s it wired for? (220 V AC would be great for EV charging)
  • Bays?

Kitchen

  • Stove/Oven:
    • Gas or Electric?
    • Age?
    • Do burners work?
    • Smell?
  • Verify that all cupboards are clean. Shine a light in them.
  • Dishwasher

Bathroom

  • Verify fan works
  • Check for mold

Electrical

  • Outlets/room.
  • These should all be grounded.
  • Capacity. Can we run everything all at once?

Maintenance & Repairs

  • What’s their general policy on this?
  • Who’s responsible?
  • Do they have recommendations on who to call?

Previous/Current Tenants

  • How long were they there?
  • Why are they leaving?
  • What has the interest been since it’s been listed?
  • What have prospective tenants found concern with?

Neighborhood and Surrounding Area

  • Check a crime heat map.
  • How far is the nearest grocery store? Is it good? How far is the nearest decent grocery store?
  • What is there to do near the place?
  • Where are the nearest coffee shops?
  • What’s the commute like to your jobs and other common destinations?
  • How are the other people in the area? Mostly people renting or owning? Students? etc.


  • “General care and upkeep: are old nails, window hardware painted over a million times? Did previous painters mask the light fixtures, or just paint over them? Indicates they use bargain handyfolk”
  • “Potential weekend-wakers: church nearby, early gardeners outside, children, garage door under unit, streetcar/bus line, construction”
  • Flooring
  • Verify all locks work.
  • Noise from neighbors: Above, Below, next door.
  • Did you notice any bugs?
  • History of rent increases.
  • Where to store bikes.
  • Does the landlord visit often? What are their expectations/policies when visiting?


The only desktop OS worth using. (iOS being the only mobile OS worth using).


BitBar is a really neat menubar app for macOS that lets you write simple command-line programs as separate menu bar apps.


LaunchD and LaunchAgents is an excellent resource for using launchd and creating launchagents.

Remotely shutting down

There are essentially two ways to do this from a terminal: sudo shutdown -r now will reboot the machine, now. Apps don’t get the chance to stop this.

Alternatively, you can use applescript, with commands like:

  • osascript -e 'tell app "System Events" to shut down' will shutdown the machine.
  • osascript -e 'tell app "System Events" to restart' will reboot the machine.

All of these can be halted by other apps, though.

See this stackoverflow answer for other examples.


AppleScript is pretty terrible, but very useful for scripting mac apps. You could use JavaScript for this, as of 10.11, but there’s essentially no documentation for using JavaScript to script macOS applications.

Concatenating strings

Use the & operator to concatenate strings, rather than the + operator used in almost all other languages.

"Something " & "something else" -> "Something something else"


Quick refreshers on math.


Matrices are two dimensional arrays of numbers, e.g.

\[ \begin{bmatrix}1 & 2\\3 & 4\\5 & 6\end{bmatrix} \]

describes a 3 by 2 matrix.

A matrix is described by the number of rows, then the number of columns.

Matrix Multiplication

Matrix Multiplication is the process of multiplying two compatible matrices together. Unlike scalar multiplication, matrix multiplication is not commutative - that is, if a and b are matrices, \(a * b\) is not guaranteed to produce the same matrix that \(b * a\) produces.

In order to be compatible, the number of columns in matrix a must equal the number of rows in matrix b. This will produce a matrix that has the same number of rows as matrix a and the same number of columns as matrix b.

See this explanation until I write up a better one.

Note that each entry in the product is the dot product of a row of a and a column of b.
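As a worked example, here’s a minimal Swift sketch (plain nested arrays, names mine) multiplying a 2 by 3 matrix by a 3 by 2 matrix, which produces a 2 by 2 matrix:

```swift
// Naive matrix multiplication over nested arrays.
// a must have as many columns as b has rows.
func multiply(_ a: [[Double]], _ b: [[Double]]) -> [[Double]] {
    let rows = a.count, cols = b[0].count, inner = b.count
    precondition(a[0].count == inner, "columns of a must equal rows of b")
    var result = [[Double]](repeating: [Double](repeating: 0, count: cols), count: rows)
    for i in 0..<rows {
        for j in 0..<cols {
            for k in 0..<inner {
                result[i][j] += a[i][k] * b[k][j]
            }
        }
    }
    return result
}

let a: [[Double]] = [[1, 2, 3],
                     [4, 5, 6]]        // 2 rows, 3 columns
let b: [[Double]] = [[7, 8],
                     [9, 10],
                     [11, 12]]         // 3 rows, 2 columns

print(multiply(a, b))  // [[58.0, 64.0], [139.0, 154.0]]
```

Note that multiply(b, a) here produces a 3 by 3 matrix - not even the same shape - which illustrates why matrix multiplication isn’t commutative.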


Creating your own mdBook-based Knowledge Repository

The short version of this page.

I maintain a second knowledge repository for work-specific things; these are the instructions I used for setting that one up.

Machine Setup

mdBook requires rust to use, so we first install rust. This is done via rustup.

  • curl --proto '=https' --tlsv1.2 -sSf | sh

Next, we install mdbook itself: cargo install mdbook.

Optionally, we can install my mdbook-generate-summary tool, which means we don’t have to maintain the SUMMARY.md file that mdbook requires. If you don’t want to install that, then you also need to add an entry to SUMMARY.md each time you create or move files around.

  • cargo install mdbook-generate-summary

This is all that’s required to set up the machine.

Setting up the Repository

To set up the repository itself, you need to create a book.toml file, an initial src/ page, and (if not using mdbook-generate-summary) a src/SUMMARY.md file.

For reference, this repository’s book.toml file is:

[book]
title = "Knowledge Repository"
authors = ["Rachel Brindle"]
description = "Rachel's second brain"

[build]
preprocess = ["links", "index"]

[output.html]
curly-quotes = true
no-section-label = true
mathjax-support = true
additional-css = ["css/custom.css"]


The only special thing is that this repository also uses my mdbook-api backend, in order to export things for use with my client side tooling.

Building the Repository

If you want to view the repository locally, you can use mdbook build, and open book/index.html in your web browser. If you’re doing interactive work, then you can use mdbook watch.

Note that if you’re using mdbook-generate-summary, you should run that every time you create, delete, or move a page.

How This is Set Up

This is set up using mdBook. It’s hosted as a repository on github. I set up a pipeline in concourse to build it, check that things work, and then push new versions.

TL;DR, check out these instructions

Repository Layout

This is a simple mdbook; the actual content files are under src/. SUMMARY.md is missing, because I have tooling to generate one automatically.


The pipeline1 is relatively simple:

  • Check for new pushes to master
  • Generate a SUMMARY.md for the book.
  • Build the book (using this mdbook docker image)
  • Test that the generated book isn’t broken (mostly verify the links work) using html-proofer, via this docker image.
  • rsync the updated book to the server hosting the contents.

Server Setup

The server hosting this is a linode VPS. It gets deployed to/managed via an ansible playbook. The current setup is pretty bad/full of bad patterns, but suffice to say, that playbook manages setting up nginx, getting letsencrypt set up, and configuring nginx to serve the static files for this repository.

On Sol, the repository containing this playbook is located at ~/workspace/Apps.

Offline/Development Setup

For making changes and doing a local preview (or just simply running locally), the following setup is recommended/required:

  • Rust/Cargo: Install rustup
  • mdbook-generate-summary: cargo install mdbook-generate-summary will get you an out-of-date version. The CI uses a docker image for this, but that docker image is not yet set up for local usage. The “best” way to get an up-to-date version is to download the source and run cargo install --path .. Which isn’t the best way to distribute software. 🤷🏻‍♀️
  • mdbook: cargo install mdbook


mdbook-generate-summary will build a SUMMARY.md file for you. This way, you don’t have to maintain one.

mdbook serve will build your sources, watch for any changes to the src/ directory, and serve up the book on localhost:3000.

I do this for my work repository, which I want to keep separate from my personal stuff.


After noticing an embarrassing number of spelling errors on this (one of the drawbacks to editing this mostly in vim), I spent time looking into how to spellcheck markdown files.

Regardless, I’ve used markdown-spellchecker (which I discovered via this article) to locally spellcheck this, using this command:

mdspell --ignore-acronyms --ignore-numbers --en-us "**/*.md"

Future Work

  • Automatically add a “last updated” line immediately below the title for a page, to make it obvious when I’m looking at outdated information. Could get the last updated information from git (git --no-pager log -n 1 --pretty=format:%ci path/to/file).

    • It would be extra cool to do this for each subsection on a page.
  • On a per-section basis, add other lines to show up for all pages in that section (e.g. I want everything in my flying section to have the “This is for my own use and is not flight instruction” disclaimer).

  • Figure out a way to actually support checkboxes and such, as how github does. This might require a change to upstream mdbook.

  • 1

    The pipeline definition looks like this:

resource_types:
- name: rsync-resource
  type: docker-image
  source:
    repository: mrsixw/concourse-rsync-resource
    tag: latest

resources:
# Knowledge Wiki
- name: knowledge_source
  type: git
  source:
    uri: https:/
    branch: master
# Task info
- name: tasks
  type: git
  source:
    branch: master
# Book Server
- name: book_server
  type: rsync-resource
  source:
    server: {{book_server}}
    base_dir: /usr/local/var/www/knowledge/
    user: you
    disable_version_path: true
    private_key: {{BOOK_SERVER_PRIVATE_KEY}}

jobs:
- name: build_knowledge
  plan:
  - aggregate:
    - get: knowledge_source
      trigger: true
    - get: tasks
  - task: spellcheck
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: tmaier/markdown-spellcheck
          tag: latest
      run:
        path: sh
        args:
        - -c
        - |
          cd knowledge_source
          mdspell --ignore-acronyms --ignore-numbers --en-us "**/*.md"
        dir: ""
      inputs:
      - name: knowledge_source
  - task: generate_summary
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: younata/mdbook-generate-summary
          tag: latest
      run:
        path: sh
        args:
        - -c
        - |
          cd knowledge_source
          mdbook-generate-summary src/ -v
          cp -r * ../generated/
        dir: ""
      inputs:
      - name: knowledge_source
      outputs:
      - name: generated
  - task: mdbook
    file: tasks/tasks/mdbook.yml
    input_mapping:
      code: generated
      concourse: tasks
    output_mapping:
      book: book
  - task: test
    file: tasks/tasks/html_proofer.yml
    input_mapping:
      code: book
      concourse: tasks
    params: {DOMAIN: ""}
  - put: book_server
    params: {sync_dir: book}

Client-Side Tooling

Some tools I wrote to help make my usage of this repo easier.

Mostly, this is the Second Brain iOS application I wrote. This simple tool stores a copy of this locally, and uses the spotlight hooks available on iOS to allow searching through the contents of it.

Software Engineering

I’m a software engineer by trade. Most of what I know is related to that.

Continuous Integration

I have a lot of thoughts on CI/CD.

My preferred CI system is concourse. Notes on that are here.


Inline task definition

You shouldn’t make a habit of doing this, but here’s a link to a script that’ll inline task definitions, for the rare case when you want a one-off task definition.

Concourse on Linode

Some notes on running Concourse from a linode box:

  • You can run the web command and the worker command on the same machine. The web command can run on a 1GB ram linode; it doesn’t take that many resources.
  • While doable on the 1GB ram plan, you should really run the workers on at least the 2GB ram plans. This is more for storage than anything else.
  • Using a linode is a better plan long term than getting a NUC, so long as you stay under the 16 GB plan. Depending on your usage, the other benefits (not having to care about hardware issues) might extend that even further.

As with the other services I maintain, the setup is managed inside of an ansible playbook.


I discovered the hard way that using the 1GB “nanode” plan was not a good plan. The disk very quickly filled up, in addition to everything being slow as molasses. Once I migrated the machine to the 2GB plan, I ran into issues with the volume space not being resized (concourse creates a worker volume logical volume with $TOTAL_DISK_SPACE - 10GB of space), then further issues with the system thinking that volumes which were deleted in fact weren’t, etc.


See this issue

Remove $CONCOURSE_WORK_DIR/garden-properties.json each time before the worker starts.

Resizing the Worker Volume

See this issue.

# On a machine with fly
fly -t $TARGET land-worker -w $WORKER_NAME

# On the worker
sudo systemctl stop concourse_worker

# Back to fly
fly -t $TARGET prune-worker -w $WORKER_NAME

# Back to the worker
sudo umount -f /opt/concourse/work_dir/volumes
sudo sync
sudo losetup -d /dev/loop0
sudo rm -rf /opt/concourse/work_dir/volumes.img
sudo reboot

Pruning the worker (which really only needs to happen before the reboot) tells concourse to ignore any volumes that may or may not exist. Invoking land-worker may or may not actually do anything.

Darwin Worker

I wrote something on this a few years back. Which is, of course, out of date (at least, in regard to houdini).

Here’s my current launchagent (~/Library/LaunchAgents/com.rachelbrindle.concourse.worker.plist):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">

And the corresponding script:

#!/bin/sh -l

cd /Users/you/concourse
/usr/local/bin/concourse worker \
    --work-dir /Users/you/concourse/darwin_work_dir \
    --tsa-host $CONCOURSE_HOST:2222 \
    --tsa-public-key /Users/you/concourse/keys/web/ \
    --tsa-worker-private-key /Users/you/concourse/keys/worker/worker_key


Docker is cool.


FROM is needed at the top of the Dockerfile; this specifies the image you’re building on.

RUN will run a shell command at image-build-time. Use these sparingly, to reduce the number of layers created.
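For example, a minimal hypothetical Dockerfile (the base image and packages are just placeholders) showing both directives:

```dockerfile
# Base image to build on; everything below adds layers on top of this.
FROM alpine:3.12

# Chaining commands in a single RUN creates one layer instead of three.
RUN apk add --no-cache curl \
    && mkdir -p /opt/app \
    && rm -rf /var/cache/apk/*
```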


Need to tag the image with your dockerhub username as the prefix.

e.g. docker build -t younata/my-image .

Need to login: docker login

And push: docker push younata/my-image:latest

iOS Development


UISplitViewController is a really neat class that can do both the neat “on iPad, show main and detail side by side” paradigm, and the regular “navigation controller with main, then go to detail if you tap somewhere”.

On iPad, to get both the main and the detail to show up, set the preferredDisplayMode property to .allVisible.

You should also make sure you’re displaying a navigation controller in your detail, so you can include the UISplitViewController’s displayModeButtonItem there.


On iOS, a UITableView is a subclass of UIScrollView. While I understand why this is the case (99% of the time, you want a scrolling tableview), I’ve been starting to think that the OSX/cocoa approach of having them be separate classes is actually better. That said, I can see that always assuming the tableview is in a scrollview might have made the implementation of UITableView much easier.

Refresh Control

It used to be that you had to use a UITableViewController and its refreshControl property to get pull-to-refresh behavior. This is no longer the case. As of iOS 10, you can set UIScrollView's refreshControl property to get a refresh control on any scrollview (and any subclass). Of course, on earlier versions of the OS, you can also add the refreshcontrol as a subview of the scrollview, and it’ll still work. This is just less magical in how you do it.

Dismiss keyboard on scroll

The old (pre iOS 7) way of dismissing the keyboard when you scroll is to use UIScrollViewDelegate methods to be notified when the scrollview scrolled, and then call -resignFirstResponder from the scrollview.

The new way is to set the keyboardDismissMode property to either .onDrag or .interactive.

Common Crashes

unrecognized selector sent to instance 0x8000000000000000

At first glance, the crash log will read like you tried to access an object that had already been deallocated. However, the giveaway is that 0x8000000000000000 address. This suspicious address tells you that you have a concurrent write bug. Somewhere, you have a race condition with multiple threads writing to the exact same address at the same time.


Accessibility is about making your app usable by everyone. Everyone eventually ends up using the accessibility features in one form or another.


NSHipster has a great, if old, article on this.

See also the current list of accessibility traits, direct from the source.


You can do either view-based animations, or layer based animations.

Layer-based animations are more customizable (you can do 3d effects), but are harder to work with as a result.


As of iOS 10, the new preferred way to do view-based animations is to use the UIViewPropertyAnimator class.


There are at least 3 ways to animate CALayer properties.

  1. Implicit animations.
  2. Implicit with CATransaction
  3. Explicit with CAAnimation

Implicit Animations

Implicit animations are fairly magical - set the desired property on the layer to what you want it to be, and CoreAnimation will figure out how to animate the layer to reflect that.

This has the downside of being far less configurable, as well as being less obvious that an animation is actually happening.

You can also group animations using CATransaction, which also allows you to specify things like duration and such. It appears that CATransaction needs to be wrapped inside of UIView animations.

CATransaction works by wrapping implicit animations up, and allowing you to modify their properties

You can call CATransaction.setDisableActions(true) in order to disable animations.

For testing reasons, even if animations are disabled, you still need to spin the runloop in order for the completion block to be called. Just call RunLoop.main.run(until: Date(timeIntervalSinceNow: 1e-3)).

Explicit Animations

CAAnimation is a cruftier API for handling animations. Most of CA hasn’t really been updated for recent objective-C, or even swift happenings.

For the most part, you’re going to use CABasicAnimation, for which you can specify a keypath to animate.

Note that the delegate for a CAAnimation is retained by the animation object. That is, it’s a strong reference, not a weak one (as others are). Be careful with that.

This does provide the nice benefit of adding block-based end notifications, with the following bit of code:

class BlockAnimationDelegate: NSObject, CAAnimationDelegate {
    private let onComplete: (Bool) -> Void

    init(onComplete: @escaping (Bool) -> Void) {
        self.onComplete = onComplete
    }

    func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
        onComplete(flag)
    }
}

// [...]

let animation: CAAnimation // [...]
animation.delegate = BlockAnimationDelegate { finished in
    // react to the animation finishing (or being interrupted)
}

Maintaining Position Post-Animation

One of the things noted in the CAAnimation Documentation is that the layer’s data model is not updated as part of the animation. This means that, by default, once the animation finishes, the layer will immediately snap back to its starting position.

Fixing this is interesting. You can tell the animation to stick around after it finishes, but why bother?

As Ole notes, you should instead update the layer’s data model first and set the fromValue property, so that the animation knows where to animate from, instead of letting it figure that out from the data model.

For example:

let originalState = layer.position.y
let desiredState = CGFloat(50)

layer.position.y = desiredState

let animation = CABasicAnimation(keyPath: "position.y")
animation.toValue = desiredState
animation.fromValue = originalState
layer.add(animation, forKey: "position")


For multiple themes in an app, I like using a ThemeRepository paradigm. When I only care for a single theme, then that’s overkill, and I’ll use UIAppearance as much as I can.

Styling UINavigationBar

(Adapted from this post).

UINavigationBar.appearance().barTintColor = navColor
UINavigationBar.appearance().titleTextAttributes = [.foregroundColor: textColor]
UINavigationBar.appearance().tintColor = navButtonColor

Styling the Status Bar

Note: This is deprecated as of iOS 9.

UIApplication.shared.statusBarStyle = .lightContent


Introduced in iOS 11, and massively improved with basically each new major release (and a few minor releases) since then. Apple has been promoting ARKit heavily, much to the chagrin of developers, who haven’t really found a use case for it apart from the sherlock’d-in-iOS 12 “use ARKit as a ruler”.

As for actually using it, the easiest way is to place an ARView (requires iOS 13) in your view hierarchy, and tell its associated session to run with an ARConfiguration.

Be sure to update your info.plist with an appropriate string for NSCameraUsageDescription, e.g.:

<string>ARKit uses the camera</string>

Alternatively, if you don’t use the camera, you can set the cameraMode property on the ARView to .nonAR.

Integrating with SceneKit

ARView, by default, integrates well with SceneKit, with it also hosting an SCNScene.

Rendering UIViews in SceneKit

You can set a UIView as the contents of an SCNMaterialProperty (specifically, the diffuse material property of the node’s SCNMaterial). This isn’t supported all that well - the view needs to be the view of a UIViewController in order to work, and a number of things don’t work well if you do this. Perhaps a later iOS version will support this better.

Placing objects relative to the camera

Placing something relative to the camera is done easily enough. Possibly in response to a tap on the view, you first get the transform for where the camera is in the scene, and then multiply it by a matrix for where you want the object placed, as well as possibly rotating for whether the device is in portrait or landscape. Something like this generates the transform:

guard let camera = self.arView.session.currentFrame?.camera else { return }

var translation = matrix_identity_float4x4
translation.columns.3.z = -1

let rotation = matrix_float4x4(SCNMatrix4MakeRotation(Float.pi/2, 0, 0, 1))

let objectTransform = matrix_multiply(camera.transform, matrix_multiply(translation, rotation))

You then use the objectTransform matrix as the simdWorldTransform of the SCNNode you’re adding to the scene (assuming SceneKit).


Made With ARKit is a blog featuring some of the really cool things people have done with ARKit. Sadly, it hasn’t seen an update since December 2017.


Supplementary views

Procuring a supplementary view is the responsibility of the collection view’s data source. However, actually initializing one of those supplementary views (which must be subclasses of UICollectionReusableView) MUST be done by the collection view, via dequeueReusableSupplementaryView(ofKind:withReuseIdentifier:for:). It’s better to fatalError() than it is to return a UICollectionReusableView() - at least the error is easier to track down when you fatalError().

If you decide you don’t want to show a view for that particular indexPath, instead have your layout object not create attributes for that view, OR create the view and then set its isHidden property to true, or set its alpha property to 0. Alternatively, if you have a UICollectionViewFlowLayout as the collection view’s layout, then have the appropriate method on the delegate (either collectionView(_:layout:referenceSizeForHeaderInSection:) or collectionView(_:layout:referenceSizeForFooterInSection:)) return .zero.

Either approach is valid and will work.

Core Data

Core Data Programming Guide

Setting up

Apple’s documentation seems to be fine.

Persistent Store types are here; you’ll mostly be using NSSQLiteStoreType or NSInMemoryStoreType (for testing).


NSManagedObjectContext is the way to read/write objects to/from core data. Create a managed object context with a given concurrency type (either mainQueueConcurrencyType or privateQueueConcurrencyType), and only operate on it within blocks passed to perform(_:) or performAndWait(_:) calls. Be sure to only have one managed object context for your persistent store coordinator, or you’ll encounter strange crashes.

Additionally, keep in mind that NSManagedObject subclasses are not thread-safe (there are only a handful of properties/methods that are safe to access outside of a perform(_:) or performAndWait(_:) call). Instead of passing instances of NSManagedObject, pass around the object’s NSManagedObjectID (obtained from the managed object’s objectID property).

My preferred approach for accessing core data is to convert the NSManagedObject instance into another, thread-safe model object. This has the advantage of not leaking implementation details and concerns about my database layer to other layers of my app. Which, in addition to being good design, also means that I can switch out (or ignore) databases as makes sense for what I’m trying to do.

Storing Records

In my experience, using MyNSManagedObjectSubclass(managedObjectContext: context); context.insert(myCreatedObject) doesn’t work. Instead, use the older NSEntityDescription.insertNewObject(forEntityName:into:) to create and insert new objects.

Fetch Requests

Fetching by property with type URI

I ran into issues figuring this out. The approach you want is:

fetchRequest.predicate = NSPredicate(format: "url.absoluteString == %@", urlToFetch.absoluteString)


Multiple NSEntityDescriptions claim the NSManagedObject subclass

I encountered this in tests, where I was initializing the Core Data stack from scratch with each test. Turns out that, because CoreData creates new classes when you bring up the context, you’ll end up seeing this warning with every new test.

The solution is to not create so many NSManagedObjectModels - that is, instead of bringing up a new stack with each test, bring it up once, and then delete every object between runs.

Core Graphics


CGFloat is a word-size agnostic way to express a floating point number (on 32 bit devices, it’s a float. On 64 bit devices, it’s a double).

CGFloat.leastNormalMagnitude is effectively the same as FLT_MIN (or DBL_MIN, depending on the device). It is less than or equal to all positive “normal” numbers. Subnormal means that “they are represented with less precision than normal numbers”. Note that zeros and negative numbers are also less than CGFloat.leastNormalMagnitude.
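A quick sketch of those relationships, using Double (which is what CGFloat wraps on 64-bit devices):

```swift
let leastNormal = Double.leastNormalMagnitude    // the analog of DBL_MIN
let leastNonzero = Double.leastNonzeroMagnitude  // the smallest subnormal

print(leastNonzero < leastNormal)  // true - subnormals sit below leastNormalMagnitude
print(0.0 < leastNormal)           // true - zero is below it too
print(-1.0 < leastNormal)          // true - as are negative numbers
```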

Core Image


Handling RAW Formats

You can read RAW formatted images by invoking either the init(imageURL:options:) or the init(imageData:options:) CIFilter initializers. You can then read the image by asking for the outputImage.

Note that, at least for iOS 13 beta 1, the simulator can’t read some (all? I only tried with Canon RAW format files) RAW images. However, using macOS allows this to work.

RAW Format Options

With the RAW format CIFilter initializers, you can optionally pass a dictionary of how to read the image. The documentation for those keys is here.

Core Spotlight

Making app content searchable!

In general, you should prefer to batch update the index. However, keep in mind that the default() index doesn’t support batching - you’ll need to create your own.


Add items with indexSearchableItems(_:completionHandler:), and remove them with one of the deletion methods.

Opening an item that was searched for

Once you have your stuff in the index, you need to handle what happens when the user searches for and selects one of those items.

Doing this follows the same codepath as continuing from a deeplink. Only, this time, the activity type will be CSSearchableItemActionType, with the item identifier (you should have picked one that actually refers to your item) as the value for the CSSearchableItemActivityIdentifier key in the userInfo property. See Apple’s documentation on doing this.


There are 2 system ways to do layout in iOS.

  1. Frame-based
  2. AutoLayout

Don’t use frame based layouts unless you have to. Especially when it comes to supporting multiple size classes and such, that’s way more effort than it’s worth.

In general, I prefer this for laying out code:

  1. Nibs w/ AutoLayout
  2. Code w/ AutoLayout
  3. Code w/ frames


From NSLayoutConstraint’s api:

Each constraint is a linear equation with the following format: item1.attribute1 = multiplier * item2.attribute2 + constant
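That equation can be evaluated by hand. A tiny sketch with made-up numbers - a hypothetical child view constrained to half its container’s width plus a 10 point constant:

```swift
// item1.attribute1 = multiplier * item2.attribute2 + constant
let containerWidth = 200.0  // hypothetical item2.attribute2
let multiplier = 0.5
let constant = 10.0

let childWidth = multiplier * containerWidth + constant
print(childWidth)  // 110.0
```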

Apple-Provided APIs

  • NSLayoutConstraint is the underlying api for specifying layout constraints. Everything else essentially gets converted to these when you use them.
  • NSLayoutAnchor, introduced in iOS 9, is a factory class that makes it way nicer to specify layout constraints, without having to resort to the visual format language.
  • The NSLayoutConstraint Visual Format Language is used in a class constructor for NSLayoutConstraint.

Third Party Frameworks

  • PureLayout provides a declarative interface for creating and installing layout constraints. It works as categories on NS/UIView and NSArray.


Using localized string and such.


Sometimes, there are differences in the different localized versions of your app, and you need to test that in a unit test.

Here’s a fairly hacky way to do that:

private var bundleKey: UInt8 = 0

// AnyLanguageBundle (definition not shown here) is a Bundle subclass that
// overrides localizedString(forKey:value:table:) to read from the associated path.
func setBundleLanguage(_ language: String) {
    let path = Bundle.main.path(forResource: language, ofType: "lproj")
    objc_setAssociatedObject(Bundle.main, &bundleKey, path, .OBJC_ASSOCIATION_RETAIN_NONATOMIC)
    object_setClass(Bundle.main, AnyLanguageBundle.self)
}

Viewing Long Strings

Some languages (German is notorious for this) end up with much longer translations than others. This can cause undesirable ellipsing or clipping of text.

One way to check for this without actually setting the language to German (which you might not have localizations for/be able to read) is to modify the “Arguments Passed On Launch” for your target to include -NSDoubleLocalizedStrings YES.

Note that this isn’t always reliable (because Apple), and it only applies to strings that go through NSLocalizedString.

Network Link Conditioner

NSHipster describes how to install and use this.

This is a useful tool for seeing how your app works under different network settings.

A side effect of using Network Link Conditioner is that you can also identify when a test is mocking out the network using a custom NSURLProtocol, because those tests will also be affected by the link conditioner. This is part of why, if your unit test makes a network call, it’s not a unit test. Even touching the URL loading subsystem is making a network call.

By the way, NSHipster also has an excellent article on NSURLProtocol, because it is useful for mocking network requests for integration-style tests and the like.

URLSession and URLRequest



Apple provides a really nice built-in way to do caching, using URLCache. You can configure a URLSession object to use your specific cache via URLSessionConfiguration.urlCache.

Once configured, all requests through that session will use that cache, though it’s possible to override for specific requests, or for all requests from that session.

Note that URLSession.shared is configured to use URLCache.shared by default. This is transparent to the user (that is, there’s no easy way to determine whether or not the request actually used the network or returned cached data).
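Configuring a session with its own cache might look something like this (the capacities and path are arbitrary):

```swift
import Foundation

let cache = URLCache(memoryCapacity: 10 * 1024 * 1024,  // 10 MB in memory
                     diskCapacity: 100 * 1024 * 1024,   // 100 MB on disk
                     diskPath: "my-url-cache")

let configuration = URLSessionConfiguration.default
configuration.urlCache = cache
configuration.requestCachePolicy = .useProtocolCachePolicy

let session = URLSession(configuration: configuration)
```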

ETag and Manual Caching

Sometimes you want to manually cache responses. Because URLSession uses a cache by default, we have to tell our requests to not do that. There are a few ways to do that:

  1. Use a URLSession that isn’t backed by a cache (by creating one with the configuration’s urlCache property set to nil)
  2. Use a URLSession with a cache policy that ignores the cache (set the configuration’s requestCachePolicy to .reloadIgnoringLocalAndRemoteCacheData)
  3. Have all your requests individually specify .reloadIgnoringLocalAndRemoteCacheData as their cachePolicy

Once you have the caching behavior set, you need to implement manual caching yourself. I’m going to describe using ETag because that’s better (and what my nginx server did for me).

The ETag header is one way to determine whether or not a resource has changed from when it was last served. It’s essentially a hash of the resource information (as opposed to using something like Last-Modified for a time-based approach). You pair this with the If-None-Match request header to have the server calculate whether the data has changed (HTTP 200 response) or not (HTTP 304 response).

So, the algorithm for doing this is:

  • Make initial request
  • Record the value of the ETag (or Etag) header in the response you receive.
  • In subsequent requests to the same url, include the If-None-Match request header, with value set to whatever you received for that Etag header.
    • If you receive a 304:
      • Use the older data you had (no cache update needed).
    • If you receive a 200:
      • Overwrite the etag you had with the newer etag header
      • Use the new data you received in the body of the response.
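The steps above, sketched out in code (ETagFetcher is my name for this; it’s illustrative only and not thread-safe):

```swift
import Foundation

final class ETagFetcher {
    private var etags: [URL: String] = [:]
    private var cache: [URL: Data] = [:]
    private let session: URLSession

    init(session: URLSession) { self.session = session }

    func fetch(_ url: URL, completion: @escaping (Data?) -> Void) {
        var request = URLRequest(url: url)
        // We're caching manually, so bypass URLCache entirely.
        request.cachePolicy = .reloadIgnoringLocalAndRemoteCacheData
        if let etag = self.etags[url] {
            request.setValue(etag, forHTTPHeaderField: "If-None-Match")
        }
        self.session.dataTask(with: request) { data, response, _ in
            guard let http = response as? HTTPURLResponse else { return completion(nil) }
            switch http.statusCode {
            case 304: // Not modified: reuse the data we already had.
                completion(self.cache[url])
            case 200: // Changed: record the new etag and body.
                self.etags[url] = http.allHeaderFields["Etag"] as? String
                self.cache[url] = data
                completion(data)
            default:
                completion(nil)
            }
        }.resume()
    }
}
```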


Local and Remote (push) Notifications, not NSNotifications.

This is going to describe the newer UserNotifications framework introduced in iOS 10, instead of the older UIKit-based way of doing notifications.


UNNotificationContent provides read-only access to information shown to the user about a specific notification. For setting information (e.g. when preparing to send a local notification), you’d use the UNMutableNotificationContent class.


Either kind of notification can be an actionable notification.


As of iOS 12, there are four kinds of notification trigger: Calendar, Time, Location, and Push. The first 3 are used with local notifications, while the last is only used for push notifications.

  • Calendar triggers for a specific date: “Today at 7 pm”, or “every day at 8 am”.
  • Time triggers in a set time from now: In 30 seconds, or every 30 seconds.
  • Location1 triggers when the user either exits or enters a specific region. You can set to send the notification for both entry and exit.
  • Push is used to detect whether the notification you received is a push notification or not.

Types of Notifications

Local Notifications

Local Notifications are notifications generated entirely on the device. These would be things that appear when you enter or leave an area, at a certain time, etc.

The way to send a local notification is to create a UNNotificationRequest, with an identifier, content, and a trigger, then ask the current UNUserNotificationCenter to add(_:withCompletionHandler:) the request.
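For example, scheduling a notification 30 seconds from now (the identifier and text are arbitrary):

```swift
import UserNotifications

let content = UNMutableNotificationContent()
content.title = "Tea is ready"
content.body = "Take the tea off the counter."

// Fire once, 30 seconds from now.
let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 30, repeats: false)
let request = UNNotificationRequest(identifier: "tea-timer", content: content, trigger: trigger)

UNUserNotificationCenter.current().add(request) { error in
    if let error = error {
        print("Failed to schedule: \(error)")
    }
}
```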

Remote Notifications

Also called Push Notifications. Push notifications are sent from some external server to your app.

As of iOS 7, you can also send “silent” or “content-available” notifications. These notifications do not present an alert to the user and instead wake up your app so that you can do something in response to the notification (usually update your content cache so when the user next opens the app they already have up to date information). See this apple documentation.

Sending Push Notifications

Push notifications need to be signed in order to be sent. There are two ways to do this: with a pre-installed certificate, or with a JWT.

This script is a simple curl-based script for sending test notifications. It requires modifications for your specific key and such, and you should change the $curl variable to use what you got from running brew install curl-openssl.


This requires location permission, though not the Always authorization. Apparently, this is due to the system handling the monitoring as opposed to the app. I’ve never tried this, though.


NSUserActivity is a class to facilitate deeplinking into your app. The original (public) purpose was for Handoff; it’s since been adapted to facilitate search and Siri integration.

Setting up activities

You create one with an appropriate activity type, set the title, enable other properties as it makes sense, then finally call becomeCurrent().

Note that if you assign a user activity instance to a UIViewController‘s or UIResponder‘s userActivity property, then you don’t need to worry about calling the becomeCurrent or resignCurrent methods - these are handled for you.
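A sketch of setting one up (the activity type is the example from below; viewController is a hypothetical presented view controller):

```swift
import UIKit

let activity = NSUserActivity(activityType: "com.rachelbrindle.second_brain.read_chapter")
activity.title = "Read Chapter 1"
activity.isEligibleForHandoff = true
activity.isEligibleForSearch = true

// Assigning to a view controller's userActivity property manages
// becomeCurrent()/resignCurrent() for you.
viewController.userActivity = activity
```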

Activity Types

These are strings, usually in reverse-DNS style, that describe the domain and the particular type of activity - e.g. com.rachelbrindle.second_brain.read_chapter describes opening/reading a chapter for com.rachelbrindle.second_brain. The activity types your app supports MUST also be mentioned in the Info.plist file, see NSUserActivityTypes.


Set the isEligibleForHandoff property to true.


This allows spotlight to present more optimized results to the user, as well as allowing the user to search for an activity they were previously engaging in.

Set the isEligibleForSearch property to true. If you want to help search results for other users, you can set isEligibleForPublicIndexing to true.

Note that your app must maintain a strong reference to any activity objects used for search results. Also, don’t use this to index all the app’s contents, that’s what the much more powerful Core Spotlight APIs are for.

Continuing from a deeplink

(This only covers handoff, search and siri might be different)

The simplest way to continue from a deeplink is to implement application(_:continue:restorationHandler:) on your app delegate. Optionally, if your app might take a while to set things up (e.g. need to retrieve data from the network), then implementing and having your app delegate respond to application(_:willContinueUserActivityWithType:) will provide a nicer user experience.

Operation and OperationQueue



As the docs note, there are four things to override for your asynchronous swift subclass:

  • -start()
  • isAsynchronous
  • isExecuting
  • isFinished

And that you must send KVO notifications for the 2 properties (usually isAsynchronous is hardcoded to be true, so sending KVO for that is a non-issue).

Sending KVO means sending -willChangeValue(forKey:), then changing the value, then sending -didChangeValue(forKey:), see the following sample implementation:

class MyAsyncOperation: Operation {
    override func start() {
        self.willChangeValue(forKey: "isExecuting")

        someAsyncWork {
            self.willChangeValue(forKey: "isExecuting")
            self._isExecuting = false
            self.didChangeValue(forKey: "isExecuting")

            self.willChangeValue(forKey: "isFinished")
            self._isFinished = true
            self.didChangeValue(forKey: "isFinished")
        }

        self._isExecuting = true
        self.didChangeValue(forKey: "isExecuting")
    }

    override var isAsynchronous: Bool { return true }

    private var _isExecuting: Bool = false
    override var isExecuting: Bool { return !self.isFinished && self._isExecuting }

    private var _isFinished: Bool = false
    override var isFinished: Bool { return self._isFinished }
}


UIPopoverPresentationController is the new (as of iOS 8) way to do a popover. This replaces the older UIPopoverController, and should be used for anything recent.

Arrow Directions

If you only ever want to show your popover from a given direction, you can control this with the permittedArrowDirections property. According to the documentation, you can only do this when configuring, not after it’s been presented1. As the name suggests, this controls where the arrow on the popover shows, not where the popover is relative to the sourceRect/sourceView.

When the device rotates

You can also control where the popover comes from by updating the sourceRect/sourceView. This can be done after receiving viewWillTransition(to:with:) on the presenting view controller. Be sure to call view.layoutIfNeeded() on the presenting view controller before updating this, otherwise the sourceRect might be outdated2.


I haven’t tested this myself to see what happens if you do try to change that property after it’s been presented.


I’m unsure if you have to call layoutIfNeeded() if you use sourceView instead of sourceRect.


New in iOS 13: scenes.

Can have each scene be entirely independent. Can have each scene be dedicated to a specific task.

This uses UIWindowScene and UISceneSession.

UIWindowScene goes between the UIScreen and UIWindow level.

Scenes contain UI; they’re created by the system on demand, and destroyed by the system when unused.

Going to adopt UIWindowSceneDelegate.

Basically, moving a lot of UIApplicationDelegate methods into UIWindowSceneDelegate methods.


UISegmentedControl‘s are radio buttons.

There’s no way to animate changing the selected segment.

Styling iOS Apps

Apple’s HIG on color for iOS apps.

New in iOS 13: Semantic Colors


UITableView-specific things.

Section Titles

Section titles are asserted by checking headerView(forSection:). You can then check the textLabel.text properties to get the displayed text. Note that the section header needs to be within the visible view in order to be non-nil; otherwise you need to scroll to show it first.

Actually setting this is done by implementing the tableView(_:titleForHeaderInSection:) method on the dataSource.
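For example (the data source and section names here are made up):

```swift
import UIKit

// Hypothetical data source with one section per month name.
class MonthsDataSource: NSObject, UITableViewDataSource {
    let months = ["January", "February", "March"]

    func numberOfSections(in tableView: UITableView) -> Int {
        return self.months.count
    }

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 0
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        return UITableViewCell()
    }

    // This is what headerView(forSection:).textLabel.text will show.
    func tableView(_ tableView: UITableView, titleForHeaderInSection section: Int) -> String? {
        return self.months[section]
    }
}
```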

Scrolling under test

In order to scroll to a row, you invoke the scrollToRow(at:at:animated:) method. You also need the view to be within a visible window. This is also animated, so you’ll need to wait a bit before you do the next assertion.

let window = UIWindow(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
window.rootViewController = subject
window.makeKeyAndVisible()

let indexPath = IndexPath(row: 3, section: 1)

subject.tableView.scrollToRow(at: indexPath, at: .middle, animated: false)


Context Menus

In iOS 13, we get context menus. These replaced the previous “listen to 3d touch events on the entire table view, and from there figure out which cell was pressed” stuff we had to do before (or at least, had to do in iOS 9 - when I last implemented that behavior).

The minimum delegate methods required to implement this behavior are:

-tableView(_:contextMenuConfigurationForRowAt:point:) is used to set up the menu for that item (what happens when you long/force press on the tableView). It returns an optional UIContextMenuConfiguration, which is used to set up the view controller to show, and a UIMenu to show with it.

-tableView(_:willPerformPreviewActionForMenuWith:animator:) is then used to commit that.

For example, see this example from my rss reader:

extension ArticleListController: UITableViewDelegate {
    // ...
    public func tableView(_ tableView: UITableView, contextMenuConfigurationForRowAt indexPath: IndexPath,
                          point: CGPoint) -> UIContextMenuConfiguration? {
        guard ArticleListSection(rawValue: indexPath.section) == .articles else { return nil }

        let article = self.articleForIndexPath(indexPath)
        return UIContextMenuConfiguration(
            identifier: article.link as NSURL, // the article's URL (property name assumed)
            previewProvider: { return self.articleViewController(article) },
            actionProvider: { elements in
                return UIMenu(title: article.title, image: nil, identifier: nil, options: [],
                              children: elements + self.menuActions(for: article))
            })
    }

    public func tableView(_ tableView: UITableView,
                          willPerformPreviewActionForMenuWith configuration: UIContextMenuConfiguration,
                          animator: UIContextMenuInteractionCommitAnimating) {
        guard let articleController = animator.previewViewController as? ArticleViewController else { return }
        animator.addCompletion {
            self.markRead(article: articleController.article, read: true)
            self.showArticleController(articleController, animated: true)
        }
    }
}

WKWebView is the view you should use to display web content inside of an app.


WKWebView is somewhat unique in that it has two delegate properties and protocols - uiDelegate and navigationDelegate.


Implement a WKNavigationDelegate to respond to url navigations - starting a navigation, authentication issues, errors, etc.

Don’t open links.

Say, for example, you don’t want clicked links to be opened in the webview. You’d implement webView(_:decidePolicyFor:decisionHandler:) to detect if it’s a link, and then call the handler with .deny, like so:

func webView(_ webView: WKWebView, decidePolicyFor action: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
    switch action.navigationType {
    case .linkActivated:
        decisionHandler(.deny)
    default:
        decisionHandler(.allow)
    }
}

You might instead choose to open the link elsewhere.


Implement a WKUIDelegate to respond to UI requests - javascript UI panels, upload panels, force touch.

Context Menus

In iOS 13, we got context menus. These replace the previous WKPreviewItem-based delegate methods with 4 (currently undocumented) callbacks to implement:

If you do nothing, when you long/force-press on a link, the view will present an SFSafariViewController configured to show that link, along with a few items. When that view controller is committed, the user is taken out of your app and into the Safari app.

Otherwise, to intercept that behavior, you only need to implement -webView(_:contextMenuConfigurationForElement:completionHandler:) and -webView(_:contextMenuForElement:willCommitWithAnimator:).

-webView(_:contextMenuConfigurationForElement:completionHandler:) is used to decide what to show to the user. If you call the callback with nil, then it defaults back to the default action previously mentioned. Otherwise, you can use the linkURL property on the given WKContextMenuElementInfo object to get the link, and then call the callback with a custom UIContextMenuConfiguration configured for whatever view controller you want.

-webView(_:contextMenuForElement:willCommitWithAnimator:) is then used to commit that view controller into your stack. Be sure to present the view controller as part of a completion for the animator. Otherwise, your app gets stuck in an infinite loop as it tries to present a view controller even when one is already being presented.

For example, if you wanted to present a SFSafariViewController, but keep the user in the app (that is, present that view controller in your UI), then you might implement these methods like:

extension MyViewController: WKUIDelegate {
    func webView(_ webView: WKWebView, contextMenuConfigurationForElement elementInfo: WKContextMenuElementInfo,
                        completionHandler: @escaping (UIContextMenuConfiguration?) -> Void) {
        guard let url = elementInfo.linkURL else {
            return completionHandler(nil)
        }

        let configuration = UIContextMenuConfiguration(
            identifier: url as NSURL,
            previewProvider: { return SFSafariViewController(url: url) },
            actionProvider: { elements in
                guard elements.isEmpty == false else { return nil }
                return UIMenu(title: "", image: nil, identifier: nil, options: [], children: elements)
            })
        completionHandler(configuration)
    }

    func webView(_ webView: WKWebView, contextMenuForElement elementInfo: WKContextMenuElementInfo,
                        willCommitWithAnimator animator: UIContextMenuInteractionCommitAnimating) {
        guard let viewController = animator.previewViewController else { return }
        animator.addCompletion {
            self.present(viewController, animated: true, completion: nil)
        }
    }
}

UI Testing with iOS Devices.

XCUITest, introduced in iOS 9, is a technology for automating acceptance tests. It works by running your app in a separate process from the test, with the test communicating to the app using a form of IPC (Inter-Process-Communication). Elements are identified via accessibility IDs/values.

A pretty decent introduction/reminder of what all is involved.


You can fetch the group of elements matching a predicate by calling matching(_:) on any XCUIElementQuery (or a single element with element(matching:)). Most objects in XCUITest are XCUIElementQuery instances.

Anything that conforms to XCUIElementAttributes can be queried as part of one of these queries.


  • Finding text on a Cell
    Honestly, I had more luck with app.tables.cells.element(boundBy: 0).firstMatch.staticTexts[LABEL_ACCESSIBILITY_ID].

Dismissing a popover

Popovers are dismissed by tapping... basically anywhere outside the popover. There’s a specific element to tap that’ll do this:
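From what I’ve seen, the dismiss region is exposed as an otherElement named “PopoverDismissRegion” - note that this identifier is an observed, undocumented detail, not official API:

```swift
import XCTest

let app = XCUIApplication()
// Taps the invisible region outside the popover, dismissing it.
app.otherElements["PopoverDismissRegion"].tap()
```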



Java is terrible.


Jackson is a java library for (de)serializing json.

If you create a JsonDeserializer subclass that is an inner class of another class (like below), you need to mark that inner class as static, or else you’ll get a $CLASS has no default (no arg) constructor error.

public class Foo {
    public static class FooDeserializer extends JsonDeserializer<Foo> {
        @Override
        public Foo deserialize(JsonParser parser, DeserializationContext ctx) throws IOException {
            return null;
        }
    }
}


Setting up a single-user jupyter notebook server.

  1. Create and activate a virtualenv on the server: python3 -m venv jupyter && . jupyter/bin/activate
  2. Install jupyter: pip3 install jupyter
  3. Set up for creating a public server: jupyter notebook --generate-config
  4. Set the password for the notebook: jupyter notebook password
  5. Modify the port to be something specific. Put all the configuration in the ~/.jupyter/jupyter_notebook_config.json file, e.g.: "port": 9999
  6. Setup nginx to forward to that port:
    server {
      listen         80;
      server_name    $SERVER_NAME;
      location       '/.well-known/' {
        default_type "text/plain";
        root         /usr/local/var/www/letsencrypt;
      }
      location / {
        return 301 https://$server_name$request_uri;
      }
    }
    server {
      listen 443 ssl;
      server_name $SERVER_NAME;
      ssl on;
      ssl_certificate /etc/letsencrypt/live/$SERVER_NAME/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/$SERVER_NAME/privkey.pem;
      ssl_session_timeout 5m;
      ssl_dhparam /usr/local/etc/nginx/dhparam.pem;
      add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
      location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header Host $http_host;
          proxy_pass http://$HOST_IP:$HOST_PORT;
          proxy_redirect http:// https://;
      }
      location ~ /api/kernels/ {
          proxy_pass http://$HOST_IP:$HOST_PORT;
          proxy_set_header      Host $host;
          # websocket support
          proxy_http_version    1.1;
          proxy_set_header      Upgrade "websocket";
          proxy_set_header      Connection "Upgrade";
          proxy_read_timeout    86400;
      }
      location ~ /terminals/ {
          proxy_pass http://$HOST_IP:$HOST_PORT;
          proxy_set_header      Host $host;
          # websocket support
          proxy_http_version    1.1;
          proxy_set_header      Upgrade "websocket";
          proxy_set_header      Connection "Upgrade";
          proxy_read_timeout    86400;
      }
    }
  7. Setup letsencrypt:
    domains = $SERVER_NAME
    rsa-key-size = 4096
    server =
    email = $EMAIL
    text = True
    authenticator = webroot
    webroot-path = /usr/local/var/www/letsencrypt
  8. Configure DNS to direct $SERVER_NAME to your machine.
  9. Restart nginx
  10. Run certbot: sudo certbot certonly -c /path/to/letsencrypt/config
  11. Set it up to automatically run.


On OSX, we’re going to set this up as a LaunchAgent, so in ~/Library/LaunchAgents, add:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label, ProgramArguments, RunAtLoad, etc. go here -->
</dict>
</plist>

Note that $HOME should be expanded to your home directory, not included in the plist.
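A fuller sketch of what that plist might contain (the label and the virtualenv path are assumptions based on the setup steps above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.jupyter.notebook</string>
    <key>ProgramArguments</key>
    <array>
        <!-- $HOME expanded by hand, per the note above -->
        <string>/Users/you/jupyter/bin/jupyter</string>
        <string>notebook</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```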


Linux Administration Notes and resources.


Service files

This is a pretty basic resource on service files


For developing cocoa-based programs.

Dark Mode

Dark mode, because black is the new black.


In CSS, this is done via the prefers-color-scheme media query.

See this mdn page.

Image Capture

The ImageCapture API is an old api on macOS for controlling a camera connected over USB.

Indigo has an implementation for Canon cameras using that API.

NSOutlineView and NSTreeController

View nested lists easily!

NSOutlineView is a subclass of NSTableView that provides a way to display hierarchical data. For example, file hierarchies (though, you’d actually use an NSBrowser object for a file hierarchy).

NSTreeController is a controller that works with NSOutlineView and NSBrowser to manage the data that they display.

In cocoa, controllers are super powerful because they allow you to bypass implementing a lot of the really boring delegate/datasource stuff that you’re forced to do in iOS.


This is a much better explanation of how to set up bindings correctly than I’m currently able to do.

Delegate Methods


Implement outlineView(_:toolTipFor:rect:tableColumn:item:mouseLocation:).


Opening a URL

This is super simple: call NSWorkspace.shared.open(_:) with the URL, and it’ll open in the user’s default browser.


Really nice, convenient, and powerful programming language.


Lightweight DSL for creating simple webapps.

Static files

Sinatra serves up static files from the ./public directory. This can be changed with: set :public_folder, File.dirname(__FILE__) + '/static'.

Note that the public name is not included in the URL - e.g. the file at ./public/foo/bar would be at http://server/foo/bar.

Rendering stuff

When inside of a URL pattern, you can render an erb with:

get '/' do
    erb :index # Renders and returns the .erb file at views/index.erb
end


rspec is the original BDD testing framework.

Asserting json

Turns out, if you do something like expect(JSON.parse '{"foo": "bar"}').to eq({"foo": "bar"}), you’ll get a really confusing failure, something like expected to equal {:foo=>"bar"}, got {"foo"=>"bar"}. Which is really confusing, until you remember that Ruby’s {key: value} hash literal syntax creates symbol keys, not string keys. However, the json module produces string keys unless you tell it otherwise. So, the correct way to do this is to add symbolize_names: true to your JSON.parse call, like so:

expect(JSON.parse('{"foo": "bar"}', symbolize_names: true)).to eq({"foo": "bar"})

which passes as it should.
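A quick demonstration of the underlying gotcha - Ruby’s {key: value} literal creates symbol keys, while JSON.parse produces string keys by default:

```ruby
require 'json'

ruby_hash = {"foo": "bar"}                       # {:foo=>"bar"} - symbol keys!
default_parse = JSON.parse('{"foo": "bar"}')     # {"foo"=>"bar"} - string keys
symbol_parse = JSON.parse('{"foo": "bar"}', symbolize_names: true)

puts ruby_hash == default_parse  # false
puts ruby_hash == symbol_parse   # true
```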


Rust is a language that I’ve been in love with for ages.

It’s also one of the most frustrating languages I’ve ever used. This is because I’ve never written enough rust to actually be good at it.

It also has the best documentation of any language.

Serializing json in rust.

Follow this guide using serde.


Add to Cargo.toml‘s [dependencies] section:

serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

Make your struct derive Serialize, and pass it to serde_json::to_string():

use serde::Serialize;

#[derive(Serialize)]
struct Thing {
    x: i32,
}

fn main() {
    let thing = Thing { x: 1 };
    println!("{}", serde_json::to_string(&thing).unwrap());
}



Per wikipedia, SOLID is an acronym for 5 design principles in object-oriented-programming. These principles are:

  • Single Responsibility
  • Open-Closed
  • Liskov Substitution
  • Interface segregation
  • Dependency Inversion

Single Responsibility

A class/interface should have one and only one responsibility.


Open-Closed

Classes should be open for extension, but closed for modification.

Liskov Substitution

AKA Design by contract.

All objects that conform to a given interface should be treated as interchangeable.

Interface segregation

It’s better to have 20 different interfaces/protocols, than 1 god interface/protocol.

Dependency Inversion

Depend upon interfaces, not a specific concrete implementation.


ABI Stability in libraries

AKA the @frozen attribute.

Per Swift Evolution 260, you can enable “library evolution mode” (with the -enable-library-evolution command-line argument), which will make it a non-ABI-breaking change to modify fields in a struct or add new enum cases. (These are “resilient” types).

Also, on a per-type basis, you can specify structs and enums to be @frozen, which means that the stored instance properties of a struct will not be changed (added, removed, reordered), nor will the cases of an enum change (add, remove, reorder). @frozen only really applies if library evolution is enabled, and is assumed to be the default if not (however, libraries compiled without library evolution mode enabled are not ABI-stable).
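For example, promising that an enum will never grow new cases (this only has an effect when the library is built with -enable-library-evolution):

```swift
// Clients compiled against this library can switch over it exhaustively,
// because the library promises never to add, remove, or reorder cases.
@frozen public enum CompassDirection {
    case north, south, east, west
}
```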

Swift and Objective-C


NS_REFINED_FOR_SWIFT is a macro that helps you write “swifty” API for objective-c code.

In the header file, you tag the method declaration with this macro, and in a swift extension, you write a swift implementation that uses it. The macro prepends two underscores to the imported Swift name (myMethod:whatever: -> __myMethod(_:whatever:)) for most methods, though in the case of initializers, it appends them to the first argument label.
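A sketch of the pattern (Palette, Color, and the method here are all made up):

```swift
// In the Objective-C header, tag the declaration:
//
//   @interface Palette : NSObject
//   - (nullable Color *)colorAtRow:(NSInteger)row NS_REFINED_FOR_SWIFT;
//   @end
//
// Swift then imports it as __color(atRow:), which this extension wraps
// with the nicer, "swifty" API:
extension Palette {
    func color(atRow row: Int) -> Color? {
        return self.__color(atRow: row)
    }
}
```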


NS_SWIFT_NAME is a macro that lets you specify the swift name for an objective-c method.

In the header file, you tag the method declaration with this macro, giving it the swift method name as its argument, e.g.: -setFoo:(Foo *)foo NS_SWIFT_NAME(set(foo:));

Patterns and Pattern Matching

Last Update: Swift 5

Pattern refers to “the structure of a single value or composite value”. Here is the list of patterns:

Wildcard Pattern (Underscore)

This is what _ means. It matches and ignores any value.

for _ in 1...3 {
    // do something three times
}

Identifier Pattern

This is the basic assignment pattern. let someValue = 42 is an example, someValue is an identifier pattern that matches the value 42 of type Int.

Value-Binding Pattern

Value-Binding builds on top of the identifier pattern; it’s one of the first cases of pattern matching you might find, e.g.

let someTuple = (4, 5) // Tuple Pattern
switch someTuple {
    case let (x, y): // Binds x and y to the elements of someTuple
        // do something with x and y
        break
}

Tuple Pattern

Refers to “a comma-separated list of zero or more patterns, enclosed in parentheses.” Note that parentheses around a single element are effectively ignored.

The following are valid examples of Tuple Patterns:

let aTuple = (1, 2)
let (a, b) = (3, 4)
let (c) = 2 // Not a Tuple Pattern - parentheses around a single element are ignored

Enumeration Case Pattern (Enum)

It matches the case of an existing enum type. They appear in switch case labels, as well as if, while, guard, and for-in statements.

Using this enum:

enum AnEnum {
    case foo
    case bar
    case baz
}

let myEnum = AnEnum.foo

switch statement:

switch myEnum {
    case .foo:
        // do something
        break
    case .bar, .baz:
        break
}

if statement:

if case .foo = myEnum {
    // do something.
}

while statement:

while case .bar = myEnum {
    // do something
}

guard statement:

guard case .baz = myEnum else {
    return
}

Optional Pattern

This matches optional values. Uses the ? syntax sugar to match things.


let someOptional: Int? = 32

// Matches using enumeration case.
if case .some(let x) = someOptional {
    // do something with x
}

// Matches using the optional pattern.
if case let x? = someOptional {
    // do something with x
}

This also works with for-in, and switch statements:


let arrayOfOptionals: [Int?] = [1, nil, 3, nil, 5]
for case let number? in arrayOfOptionals {
    print(number)
}
// prints "1", "3", "5"


let someOptional: Int? = 32

switch someOptional {
    case 32?:
        // something.
        break
    default:
        // something else.
        break
}

Type-Casting Patterns

This is the is and as patterns. is is used as a conditional (if foo is Int), or in switch statement case labels (case is Int:). as appears in case labels to cast the matched value to a related type, usually paired with a value binding (case let foo as Int: gives you foo typed as an Int).
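For example, switching over heterogeneous values:

```swift
let things: [Any] = [1, "two", 3.0]

let descriptions = things.map { thing -> String in
    switch thing {
    case let number as Int:
        return "an Int: \(number)"
    case is String:
        return "some String"
    default:
        return "something else"
    }
}
print(descriptions) // ["an Int: 1", "some String", "something else"]
```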

Expression Pattern

This represents the value of an expression. These appear only in switch statement case labels.


let point = (1, 2)
switch point {
case (0, 0):
    // at the origin
    break
case (-2...2, -2...2):
    // near the origin
    break
default:
    // somewhere else
    break
}

You can also overload the ~= operator to provide a custom expression matching behavior.

func ~= (pattern: String, value: Int) -> Bool {
    return pattern == "\(value)"
}

switch 3 {
case "3":
    print("This actually matches")
default:
    break
}

Sequence and Array

Reminding myself to think outside the map, reduce, and filter boxes.


allSatisfy(_:) works like python’s all. Returns true if each and every item in the array passes the given block.

contains(_:) and contains(where:)

contains(_:) is available only if the Element conforms to Equatable. This effectively is the same as calling contains { $0 == element }, though I imagine the implementation is slightly more optimized than that.

contains(where:) works like python’s any. It takes in a block, and returns true if that block returns true for at least one item in the receiving array or sequence.
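A few quick examples of all three:

```swift
let numbers = [2, 4, 6, 7]

let allEven = numbers.allSatisfy { $0 % 2 == 0 }  // false: 7 is odd
let hasSix = numbers.contains(6)                  // true: Equatable-based check
let anyOdd = numbers.contains { $0 % 2 == 1 }     // true: at least 7 is odd

print(allEven, hasSix, anyOdd) // false true true
```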


SwiftUI is the new UI hotness from WWDC 2019.

Swift UI Cheatsheet


Vapor is one of two swift web frameworks to have gained traction (the other is Kitura, from IBM). It appears that vapor has slightly more documentation available than Kitura does, so I use vapor.

However, Vapor still has PLENTY of rough edges that make it a pain in the ass to develop against.

Specify the http status error

To the best of my knowledge, there are two easy ways to return a custom http status error: throw an AbortError, or return a Response. (The other way is to create your own type that conforms to ResponseEncodable, and have it set the http status in encode(status:headers:for:))


AbortError is a protocol, which means you have to create your own instance of it in order to return one. Simple enough, but still annoying. Your custom implementation needs to have 3 properties: status, reason, and identifier. As the name indicates, you throw your error from the request handler.
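For example, a custom error for Vapor 3 might look like this (the type and strings are mine):

```swift
import Vapor

// A hypothetical error for a missing resource.
struct ThingNotFoundError: AbortError {
    let identifier = "ThingNotFoundError"
    let reason = "No thing with that id exists"
    let status: HTTPResponseStatus = .notFound
}

// Then, somewhere in a request handler:
//     throw ThingNotFoundError()
```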

Return a Response

From your asynchronous request handler, you can chain on .encode(status:for:) to set the status. (The second parameter is the request object your request handler was called with).
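For example, a sketch of a handler that returns 201 Created instead of the default 200, assuming Vapor 3 and a hypothetical User model conforming to Content:

```swift
import Vapor

func create(_ req: Request) throws -> Future<Response> {
    // Decode the incoming body, then chain .encode(status:for:) to
    // wrap the result in a Response with the 201 status.
    return try req.content.decode(User.self)
        .encode(status: .created, for: req)
}
```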


I haven’t gotten around to writing a microframework to do this, but here’s my Application extension I add to every vapor project I do:

import Vapor

@testable import App

extension Application {
    static func testable() throws -> Application {
        var config = Config.default()
        var services = Services.default()
        var env = Environment.testing
        try App.configure(&config, &env, &services)
        let app = try Application(config: config, environment: env, services: services)
        try App.boot(app)

        return app
    }

    func sendRequest<Body>(to path: String, method: HTTPMethod, headers: HTTPHeaders = .init(), body: Body?) throws -> Response where Body: Content {
        let httpRequest = HTTPRequest(method: method, url: URL(string: path)!, headers: headers)
        let wrappedRequest = Request(http: httpRequest, using: self)
        if let body = body {
            try wrappedRequest.content.encode(body)
        }
        let responder = try make(Responder.self)

        return try responder.respond(to: wrappedRequest).wait()
    }
}

This is used as:

let subject = try Application.testable()

let response = try subject.sendRequest(to: "/my/path", method: .PUT, body: Optional<String>.none)


If you’re not practicing TDD, your code is wrong. If your code happens to work without tests, then it’s still wrong.

What is TDD? At its simplest, it’s test-first. That is, write down what you expect the code to do, then write the code to get the test to pass.

Why test

Why even test? Surely just manually running the code is enough to see that it works, right?

No. Tests provide automated and repeatable use cases for the code. Without them, to get the same quality of code, you need to write down exactly how to verify the code, and then follow that procedure every time the code (or one of its dependencies) changes. Compounded across the rest of the codebase, this eventually becomes a mountain of work just to verify even small changes.

With automated, repeatable tests, the only difference is that the verification procedure is written in code. This allows your computer to follow those steps, which it can do in orders of magnitude less time than you can, with much higher attention to detail than you can continuously give it. Additionally, it allows you to more tightly control all the inputs and outputs, so you know precisely what caused a bit of code to go wrong.

Additionally, anyone else who works with you now has a simple script they can run to verify that your changes work, instead of having to look up and follow your documentation to try to figure out what you did to test it. This can even be generalized into an external environment that automatically runs the test script to determine whether or not your changes are good - something which is called continuous integration.


So, testing has its value, sure. Why test first? Why is that so much better than writing tests after the implementation code is written?

  1. It forces you to write down, in code, what you expect the implementation to do. Writing this down will also force you to write down branches of the code as it moves through.
  2. This bypasses the whole “yeah, we ran out of time to write tests” issue - always write tests, even when something like a time crunch makes it painful.
  3. It’s much more scientific.
    TDD essentially applies the scientific method to programming.
    1. You take the observation (what the code should do)
    2. You take the hypothesis (what the code is now)
    3. You write down tests to verify the hypothesis against the observation
    4. You continuously run those tests, modifying the hypothesis until it matches the observation.
  4. It’s more relaxing.
    Once you’re in the mindset of “the code is done when the tests pass”, this becomes more like a game to get the tests to pass.


For iOS, I’m a big fan of Quick and Nimble.

This generalizes to me being a big fan of rspec-based testing frameworks. I find that this better allows me to express the branching behavior of tests, as well as makes it more obvious the different effects a given action (method or function) can have.
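As a taste of the rspec style, here’s a sketch of a Quick/Nimble spec (the Counter type is made up for illustration):

```swift
import Quick
import Nimble

final class CounterSpec: QuickSpec {
    override func spec() {
        describe("Counter") {
            var subject: Counter!

            beforeEach {
                subject = Counter()
            }

            // Each describe/context block captures one branch of behavior.
            describe("increment()") {
                beforeEach {
                    subject.increment()
                }

                it("increases the count by one") {
                    expect(subject.count).to(equal(1))
                }
            }
        }
    }
}
```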


Bryan Liles’s TATFT lightning talk expresses a lot of the same philosophy that I do.

Web Development

Web Technologies. HTML, markdown, css, maybe some javascript.



Used to represent tabular data, the <table> element can display a 2 dimensional table of data.

Permitted elements are, in order:

  • <caption> (0 or 1)
  • <colgroup> (0 or more)
  • <thead> (0 or 1)
  • Either of:
    • <tbody> (0 or more)
    • <tr> (1 or more)
  • <tfoot> (0 or 1)


The <caption> is used to provide a title or caption for a table. It should always be the first child of a <table>. Styling and physical position can be adjusted using the CSS caption-side and text-align properties.


The <colgroup> element defines a group of columns inside a table. You can place <col> elements inside it to define the individual columns within that group. It should appear after <caption>, but before the other child elements of the <table>.
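Putting those pieces together, a minimal table using most of the permitted elements (the contents are just for illustration):

```html
<table>
  <caption>Quarterly sales</caption>
  <colgroup>
    <col>
    <col span="2">
  </colgroup>
  <thead>
    <tr><th>Region</th><th>Q1</th><th>Q2</th></tr>
  </thead>
  <tbody>
    <tr><td>West</td><td>12</td><td>15</td></tr>
  </tbody>
  <tfoot>
    <tr><td>Total</td><td>12</td><td>15</td></tr>
  </tfoot>
</table>
```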


See this cheatsheet.

Remember that you can still inline html inside markdown, as markdown, generally, is compiled down to html anyway.


Creating and editing them.


Specifically, what you can put in the d attribute of a path.

See this mdn page.


I have a Model 3.


The optional keyfob is an extra $150, because of course it is. It’s actually kind of nice because you can use it to remotely lock/unlock the car without using your phone (I don’t trust the “lock when walking away” to lock soon enough). Don’t expect it to be great for valet - the Tesla keyfobs are odd enough that key cards are easier for a valet.

Summon with the Keyfob

As of 2019.7.11, you can use the keyfob with summon. It requires “Requires Continuous Press” in the summon settings to be off, but once that’s a thing, you press on the “roof” button of the car until the emergency lights and such turn on, then you press on the frunk (forward) or trunk (backward) button to move the car. Press on the roof button again to stop.


Aerodynamics is awesome. When drafting behind a vehicle, there are two zones where the air currents help you out. The zone immediately behind the leading vehicle gives the most drafting advantage - at the expense of being incredibly dangerous. That advantage drops off quickly into a very turbulent and aerodynamically harmful zone - to the point where it’s better not to draft at all than to sit in that turbulence. Past that, there’s another zone where you still get pretty decent performance. This second zone, once found, is where you can balance safety (you’re safely behind the leading vehicle) and efficiency. With the autopilot feature, you can then let the car keep you in that zone of maximum efficiency.


The goal here is to use autopilot at specific follow distances to find a local efficiency maximum. This works best on a very long stretch of mostly straight, same-grade (so... flat) road. If the leading vehicle maintains the same speed throughout, it’s even better (ideally, they’d also stay in the same position in the lane, but that’s a bit too hopeful).

Essentially, set follow distance to 7 (the max), pull up the energy monitor, set it to show the consumption rate (so you can view the current efficiency), and follow the vehicle for 5 miles. After 5 miles, note the energy usage for that and repeat with a lowered follow distance. Eventually, you’ll reach a follow distance where the energy usage not only increased, but it increased dramatically (I’ve seen jumps from 180 Wh/mi to 220 Wh/mi). This means that you’re in the turbulent zone. Once you have that, set the follow distance to whichever had the highest efficiency/lowest energy usage.

Note that the particular follow distances depend greatly on the type of leading vehicle, and the speed they’re traveling. In general, the faster the leading vehicle is, the more elongated each zone is.
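The search above is just a walk down the follow distances, looking for the setting with the lowest consumption before the turbulent-zone jump. A sketch of the bookkeeping (every number besides the 180 → 220 jump is invented; the car exposes none of this as an API):

```swift
// (follow distance, average Wh/mi over a 5-mile stretch)
let samples: [(distance: Int, whPerMile: Double)] = [
    (7, 205), (6, 195), (5, 180), (4, 220),  // 4 hits the turbulent zone
]

// Pick the follow distance with the lowest recorded energy usage.
if let best = samples.min(by: { $0.whPerMile < $1.whPerMile }) {
    print("Most efficient follow distance: \(best.distance)")
}
```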

Data/Previous Results

| Leading Vehicle Kind | Speed (mph) | Follow Distance |
| --- | --- | --- |
| Semi (w/ Trailer) | 60 | 5 |