commits-by "rileyjshaw"


Add control mode à la tmux for text alignment rileyjshaw/write

To enable control mode, hit Meta + e. From there, the next key you
press will run a control command.

There are currently two valid control commands:

  • ArrowLeft: nudge text alignment left for the active editor
  • ArrowRight: nudge text alignment right for the active editor
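The one-shot flow above can be sketched as a tiny state machine (a hypothetical sketch, not the actual rileyjshaw/write source; the command names are made up):

```javascript
// One-shot control mode: Meta + e arms it, and the very next keypress
// runs a control command (or nothing, if the key isn't bound).
function createControlMode(commands) {
	let armed = false;
	return function handleKey(key, metaKey = false) {
		if (metaKey && key === 'e') {
			armed = true; // enter control mode
			return null;
		}
		if (!armed) return null;
		armed = false; // control mode only lasts for one keypress
		return key in commands ? commands[key]() : null;
	};
}

const handleKey = createControlMode({
	ArrowLeft: () => 'align-left',
	ArrowRight: () => 'align-right',
});
handleKey('e', true);   // arm control mode
handleKey('ArrowLeft'); // → 'align-left'
handleKey('ArrowLeft'); // → null (mode already disarmed)
```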

Remove Google Analytics rileyjshaw/

Who needs 'em…

feat: 🎸 Initial public release rileyjshaw/

After much dawdling, I'm publishing my new site on the last day of the
decade 🎊 This commit removes the "new" subdomain. My old site is
already mirrored, and will remain there.

This commit also upgrades project dependencies, and adds a whole bunch of
commit linters and hooks to keep commits easy and consistent. I'm writing
this description through a CLI – it feels like the future. Get used to that
feat: tag in the commit messages.

Eventually I'd like to add an automated changelog with Semantic Release,
but I'll save that for another day. For now, enjoy my new, faster,
shinier, live-on-the-internet, totally rad new site!

BREAKING CHANGE: Moves the site from to

Tidal <3 Vim rileyjshaw/.supermac

Add some resources rileyjshaw/tech-ethics-yvr

Adds the following resources:

Initial commit rileyjshaw/plop

plop (working title)

Scriptable text expansion microtool.

Status: pre-alpha / prototype / I thought of this an hour ago

demo of the tool printing "hello world"
demo of the tool sequentially printing the song "99 bottles of beer on the wall"
demo of multiline text entry


# Only works on MacOS for now.
git clone
cd plop
npm i
npm start

While going about your business, hit Cmd + Shift + Space. Write some valid
JavaScript. The result will be typed into whatever program you have open.
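The loop described above might look something like this (an assumed outline, not plop's actual source; `typeText` and `pressEnter` stand in for the OS-level keystroke layer):

```javascript
// Evaluate a snippet of user JavaScript and emit the result as simulated
// keystrokes. Each "\n" in the result becomes a simulated Enter press.
function expand(snippet, typeText, pressEnter) {
	const result = String(eval(snippet)); // snippets are plain JavaScript
	result.split('\n').forEach((line, i) => {
		if (i > 0) pressEnter();
		typeText(line);
	});
	return result;
}

// Example: collect the simulated keystrokes instead of sending them.
const out = [];
expand('["hello", "world"].join("\\n")', (t) => out.push(t), () => out.push('<Enter>'));
// out is now ['hello', '<Enter>', 'world']
```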


\n triggers a simulated Enter press. Please be kind, don't spam your group
chats.

Add pagination to the blog! rileyjshaw/rileyjshaw-new

This is an exciting commit for the new site. It collects blog posts
from multiple sources, orders them by date, and paginates them into a
nice on-site list.
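The collect → sort → paginate step might look something like this (a sketch with assumed field names, not the site's actual source):

```javascript
// Merge posts from several sources, sort newest first, then chunk pages.
function paginatePosts(sources, perPage) {
	const posts = sources.flat();
	posts.sort((a, b) => new Date(b.date) - new Date(a.date));
	const pages = [];
	for (let i = 0; i < posts.length; i += perPage) {
		pages.push(posts.slice(i, i + perPage));
	}
	return pages;
}

const pages = paginatePosts(
	[
		[{title: 'old', date: '2015-01-01'}],
		[{title: 'new', date: '2019-12-31'}, {title: 'mid', date: '2017-06-01'}],
	],
	2
);
// pages[0] holds 'new' and 'mid'; pages[1] holds 'old'
```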

In the past, my blog has always seemed stale and outdated. By allowing
it to gobble content from across the web, I can point it to whatever
platform I'm currently publishing on!

This commit is going to show up as a post on my blog, which feels
rather meta.

Bootstrap an independent data scraper rileyjshaw/rileyjshaw-new

Project scraper

The projects on my site are automatically scraped and formatted at publish time
using the scripts in this directory. Read more about my reasoning below, or
skip to the directory structure.


Gatsby's source and transformer plugins are powerful, and I used them in the
initial development of this site. I eventually decided that separating my
collection process would be good for flexibility, control, and offline work.


GraphQL's filters and transforms are powerful, and Gatsby's APIs add more
options for how data is fetched, cached, and transformed. However, complicated
or non-standard data transforms and sanitization are much easier outside of
Gatsby's ecosystem. For instance, the API starts to feel clunky for one-off
treatment of specific content nodes.


I've had a good experience with Gatsby but I may decide to migrate my site to
another platform or format someday. Keeping my data entirely separate from
the site's framework makes migrating my data as easy as copy/pasting this
directory. It's just a few JS files!


Gatsby stores requests made through its source plugins in the .cache
directory by default. The .cache directory is deleted after:

  • gatsby clean is called.
  • package.json changes, for example a dependency is updated or added.
  • gatsby-config.js changes, for example a plugin is added or modified.
  • gatsby-node.js changes, for example if a new Node API is invoked.
  • …etc.

I found I was frequently triggering .cache wipes during development. At best
this meant I was pinging APIs and atom feeds more than necessary. At worst, it
made working offline with project data impossible.

Directory structure

Here's how the scraper is organized for now:

	The megafile to replace Gatsby's source plugins. This pulls project data
	from all online sources and saves them into `_generated/`.

	Files generated by the `scrape-projects.js` above. DO NOT EDIT THESE FILES
	MANUALLY! They will be overwritten.

		Not quite the raw response, but pretty close. This file
		contains all the data that I may decide to use someday, but
		haven't yet. Organized by `type` in a nested object.

		Standardized into a smaller format that can be smashed together with
		`curation/` data. Flattened into an array with `type` annotations on
		each node, as well as unique, unchanging project IDs (`UID`).
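The flattening described above might look like this (a sketch; the node shape and UID scheme are assumptions):

```javascript
// Turn a `{type: [nodes]}` object into one flat array, annotating each
// node with its `type` and a stable UID derived from type + source id.
function flattenByType(byType) {
	return Object.entries(byType).flatMap(([type, nodes]) =>
		nodes.map((node) => ({...node, type, uid: `${type}-${node.id}`}))
	);
}

const flat = flattenByType({
	post: [{id: 'a1', title: 'Hello'}],
	commit: [{id: 'b2'}, {id: 'c3'}],
});
// flat has 3 nodes; flat[0].uid === 'post-a1'
```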

	This is where all custom curation and processing go, eg. tagging content.
	Projects are modified based on their generated UID.

		Mainly for one-off changes eg. fixing formatting errors from immutable
		online sources. This file can also be used to apply changes on groups
		of files.

		TODO: figure out where `tags`, `lastTagged`, and `coolness` data are
		going to live.

	Offline data files and collections to complement the online data cached in

		TODO: Move these over from the `src/data` directory.

	Custom tools to help classify, organize, or edit project nodes without
	opening a text editor. Custom tools are only built for data that is too
	difficult to keep updated or standardized manually.
	TODO: Hook these up to a Node server so they edit the JSON files directly.

		Finds untagged or incorrectly tagged projects, as well as projects
		that were last tagged before a new tag type was added. Provides an
		interface to preview and re-tag each project.

		TODO: sort or insert nodes based on their "coolness".

	Quick test files to ensure data is downloaded without any dropped nodes,
	UIDs are unique, etc.
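One such check, UID uniqueness, could be as small as this (a sketch, not the actual test files):

```javascript
// Return every UID that appears more than once in a flat node list.
function findDuplicateUids(nodes) {
	const seen = new Set();
	const dupes = new Set();
	for (const {uid} of nodes) {
		if (seen.has(uid)) dupes.add(uid);
		seen.add(uid);
	}
	return [...dupes];
}

findDuplicateUids([{uid: 'post-a1'}, {uid: 'post-a1'}, {uid: 'commit-b2'}]);
// → ['post-a1']
```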

Archive pre-2019 Heroku site; Update rileyjshaw/xoxo-bingo

Excerpt from the new README:

## timeline
2015: first bingo! eli and i used the attendee
directory to generate a unique card for everyone (twitter login kept it private
🔒). squares on your card were other attendees - if you met someone on your
card you got to check it off. we made it cuz we’re shy. most of it is in the
`pre-2019` folder!

2016: we made the cards prettier by pulling in people’s twitter photos and
doing imgmagick to them 🔮

2017: no xoxo, no bingo… missed u all

2018: xoxo was in the midst of changing their infrastructure, so i lost access
to the attendee directory. hannah, jason
and i met in a cafe before the kickoff
ceremony and designed a static version with input from the community. hannah
and jason made 25 icons in like two minutes, it was incredible!!!

2019: i've been too cheap to get in previous years, but
andy noticed a thread on slack and hooked
us up! thx andy.

leading up to xoxo2018, i realized we wouldn't have access to the new
attendee registry. andy and i
discussed ad-hoc private access and other ways to make it work, but it
was too much. so hannah, jason and i made a static version with
"achievements" sourced from the slack community.

it was fun to get excited about things specific to that year, like the
podcast airstream and the blue ox. and it's gonna be that way from now
on! feel free to create an issue or msg on slack if you have ideas for
this year's bingo squares.

since it's staying a static site, i moved everything off of heroku.
all site content will live in 2019-and-on/.

Firehose: proof of concept rileyjshaw/rileyjshaw-new

I'm experimenting with auto-generating nodes for my site from a variety of data sources. This
project may eventually replace

This is the initial commit, completed quickly as a proof of concept.
There's nothing much to show, but I want to deploy ASAP so I can test
the full pipeline.

So far, everything has worked! Data from a variety of sources is
already appearing on my local server. To reproduce:

So far, I'm surfacing data from:

Setting this up was EASY, which makes me excited for the future of
this experiment :)

Add an index for each individual project rileyjshaw/canvas

I started this repository in the spirit of OpenFrameworks and
TouchDesigner: I wanted all the libraries I might need close at hand,
with a simple, abstracted API for drawing to canvas, SVG, etc. I
wanted a personal playpen / pigpen to test ideas in.

For that reason, I didn't need nice features like routing or pages :)
If I wanted to see an old sketch, I'd change the root component
and re-render. It worked for me!

But I planned to eventually make an easier way to browse existing
experiments. It would benefit me a bit, and casual viewers a lot.

I haven't updated this repository in nearly two years, and I honestly
never expect to again. I'm doing less browser-based creative coding
these days, and trying to stretch my work in other directions.

About an hour ago, I decided to create an index page or dropdown
to close this project out and keep it accessible in perpetuity. When I
cloned the repo and started looking at the build pipeline, I almost
noped the whole idea. I built this with create-react-app, so even
adding new pages for each project the recommended way involves:

  1. Installing some sort of React-compliant router.
  2. Spending… hours? figuring out which version of which router
    works with the project's outdated dependencies, OR,
  3. Upgrading the entire project, likely involving major upgrades to
    Webpack, Babel, etc.
  4. Installing something called react-snapshot,
    which apparently builds static files for you? But there's still
    a pushState history API? The README listed some tutorials, so I
    opened them.
  5. …once I'd reached this point, I realized I'd need another method if
    I wanted to be done within the hour.

At that point, I could have searched the web for "create-react-app
static routes 2017 easy" and gone down that rabbit hole before giving
up. OR, I could have given up immediately. Or I could do what I did,
which was a good idea:

I changed the root component 38 times by hand, typed "npm run build"
into my terminal by hand, and dragged the built files BY HAND into
unique directories that I created BY HAND.


I spent another minute in my editor surrounding the output of ls -d
with anchor tags for a root index. (yes, by hand)

The most time-intensive part of the process was writing this commit
message. I'm confident if I'd tried to automate the process or rebuilt
the project "the right way", I'd be at this for a few more hours.

The result is a little sketchy. Namely, I'm sure the total payload of
each page is a bit bigger, and caching takes a hit. But I think an
extra kilobyte will be tolerated by the 3 people who ever visit this
corner of my website.

And wow it was so easy. And if I ever decide to add a new sketch, I
can do the same simple steps by hand. No dependency mismatch with my
local versions. No reading old docs. Just build, drag, repeat.

I guess I'm writing this as a reminder to myself: it's usually
possible to break back out to 1995 in a pinch.

Add Dwitter data and some initial scraper options rileyjshaw/

I love the tidal wave of projects on /lab,
and I want to emphasize that for v3.0 of the website. I update pages
across the web daily: Glitch, CodePen, Dwitter,
Hackster, etc. Plus there's social media…

I'm okay with manual curation for the most part, but for websites like
Dwitter where contributions are inherently unpolished / untitled, it
doesn't make sense for me to hand-pick and manually update a giant list.

Also: I'm not sure how long Dwitter will be around for. Periodically
saving the underlying code / images / etc. gives me more ownership
over the presentation and preservation of my data. It changes my
relationship with these sites from content hosts to publishing
platforms. That makes me feel more secure with my zillion links.

TODO(?): Automatically fetch new content during the publishing step?

Add gallery posts and some cellular automata rileyjshaw/

I've decided to upgrade my website! The blog and lab are moving to the
same page. I'm also going to add more content types, like songs,
galleries, and videos. As I migrate things over, I'll be backfilling my
blog with content to test with.

The CA post is an example of filler content.

Simplify min, max logic and increase range rileyjshaw/Servo

Different servo models can accept a wide range of pulse widths. Even different servos of the same model might vary a bit. Currently, the Arduino Servo library has a severely restricted hard limit on the pulse widths that can be sent to servos. Specifically:

  • Minimum pulse width must be between [32, 1052].
  • Maximum pulse width must be between [1888, 2908].

Many popular servos have min/max pulse widths that fall in that unavailable range between (1052, 1888). For instance, the Parallax Feedback 360° High-Speed Servo operates between [1280, 1720].

Before this commit, each instance of Servo stored its min and max values as int8_t. Since that only leaves room for values in the range [-128, 127], it can't store meaningful servo pulse widths, which are typically in the ~[1000, 2000]µs range. To compensate, min and max stored the distance from the default values, divided by 4…

There are two problems with this:

  • The first, mentioned above, is that you can never stray more than 512µs from MIN_PULSE_WIDTH and MAX_PULSE_WIDTH.
  • The second is that unexpected and unnecessary rounding occurs.

Simply storing min and max as uint16_t and using the values directly solves both problems, and removes the complexity involved in working around them. This commit makes the library faster, and allows it to work with a wider range of servos. It also fixes some subtle bugs where the minimum value was hardcoded to MIN_PULSE_WIDTH.
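The old encoding's limits fall out of the arithmetic. Here's a sketch of it (in JavaScript for illustration; the real library is C++, and 544µs is the Arduino default MIN_PULSE_WIDTH):

```javascript
const MIN_PULSE_WIDTH = 544; // Arduino Servo library default minimum (µs)

// Old scheme: store the offset from the default, divided by 4, in an
// int8_t -- anything further than 512µs from the default can't be stored.
function oldEncodeMin(min) {
	return Math.trunc((MIN_PULSE_WIDTH - min) / 4); // C integer division
}
function oldDecodeMin(stored) {
	return MIN_PULSE_WIDTH - stored * 4;
}

oldDecodeMin(oldEncodeMin(1050)); // → 1048: a 2µs rounding error
oldEncodeMin(1280);               // → -184: outside int8_t's [-128, 127]
```

Storing the raw uint16_t makes both the wrap-around and the rounding disappear.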

Tested on an Arduino Uno with a Tower Pro Micro Servo SG90, and a Parallax Feedback 360° High-Speed Servo.

Update `` compatibility table to match rest of article rileyjshaw/browser-compat-data

As mentioned in

Firefox started rounding to 1 millisecond in Firefox 60.

This commit updates the compatibility table to match the rest of the article.

Fix #3: Update min and max to sensible defaults rileyjshaw/Servo

R/C servos have a standard pulse width range of 1000 to 2000µs [1], with the zero point between the two at 1500µs. Currently, Arduino's Servo library sets:

This causes a lot of confusion [2], especially since the docs say write(90) should correspond to the mid-point; in actuality, it results in a call to writeMicroseconds(1472) [3].

This change adjusts the defaults to align with R/C standards. Specifically,

  • write(0) now corresponds to the standard min pulse width of 1000µs.
  • write(90) now corresponds to the standard zero point pulse width, and aligns with the library's DEFAULT_PULSE_WIDTH variable.
  • write(180) now corresponds to the standard max pulse width of 2000µs.
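The new mapping is a straight linear interpolation (sketched in JavaScript for illustration; the library itself is C++ and uses Arduino's integer map()):

```javascript
// write(angle) maps [0, 180] degrees onto [min, max] microseconds.
function angleToMicroseconds(angle, min = 1000, max = 2000) {
	return min + Math.trunc((angle * (max - min)) / 180); // integer math
}

angleToMicroseconds(90);            // → 1500: the standard zero point
angleToMicroseconds(90, 544, 2400); // → 1472: the old defaults' off-center result
```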

Tested on an Arduino Uno with a Tower Pro Micro Servo SG90, and a Parallax Feedback 360° High-Speed Servo.

[1]: For example,

[2]: For instance:

I also see a lot of posts about this.

[3]: There is actually no way to set a standard servo to the zero-point using write(angle); the closest you can get is write(92), for a pulse of 1504µs.

Add an /about page rileyjshaw/

Well, it's time.

When I made this website, I decided against adding an /about page. The
site was a sandbox to dump anything I happened to build / write /
imagine while attending Hacker School, and an /about page seemed
limiting. As the site grew, it grew weirder. I decided that its lack
of context was an asset. I have some free recordings
from 2015 that prove the site made no sense. I loved them.

But the real reason I never had an /about page is that I didn't want
to talk about myself.

Exactly a year ago, I retired my portfolio page.
I've recently begun applying for grants, and having some context on
who is behind this site is important.

The real problem is that this site is old; an /about page is an easy
stopgap. This site does not represent me well anymore; I don't know
why I still have links to my blog, for example. I would like to strip
the site down, and think about the intended audience. But that sort of
thing takes time, and I'm making the conscious decision to not
prioritize my personal site for now.

There are many technical goals I would keep in mind if I were to
rebuild my website:

  • Cut dependencies and bundle size to make things faster.
  • Move off of Ruby, Jekyll, Bower, and Grunt.
  • Do not center particular frameworks or technologies on the new site.
    • Reorganize the file structure to be more modular and declarative. _data/lab/* is a great example.
  • Have it available over the dat:// protocol, and accessible offline.

And I'd like to keep it weird.

Initial commit miseryco/curriculum

the basement flooded.. :( herlifeinpixels/voxels

v2.0.0 rileyjshaw/average-color

The previous version of this library assumed use on RGB native
platforms, eg. web browsers. This had some consequences for HSL/HSB
devices (eg. Philips Hue lightbulbs), where 100% lightness does not
necessarily imply white light.

v2 uses some trig functions that are slower than the v1 algorithm. For
super fast averaging, v1 is still your best bet.
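For reference, the trig-based approach generally means averaging hues as unit vectors on the color wheel rather than as raw numbers (a generic sketch of the technique, not necessarily this library's exact code):

```javascript
// Average hue angles circularly: sum the unit vectors, then take the
// angle of the sum, so e.g. 350° and 10° average to 0°, not 180°.
function averageHue(huesDeg) {
	let x = 0;
	let y = 0;
	for (const h of huesDeg) {
		const rad = (h * Math.PI) / 180;
		x += Math.cos(rad);
		y += Math.sin(rad);
	}
	const deg = (Math.atan2(y, x) * 180) / Math.PI;
	return (deg + 360) % 360;
}

averageHue([350, 10]); // ≈ 0 (naive numeric averaging would give 180)
```

The extra cos/sin/atan2 calls are exactly why v2 is slower than v1's straight arithmetic.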

Remove some unused libraries rileyjshaw/

...Including jQuery!? That was easier than expected.

Enable multiple Editors in the same note rileyjshaw/write

Meta Key + click to create a new text box anywhere on the page, then
just start typing!

Add # of commits ahead / behind to the git prompt rileyjshaw/.supermac

Guess I'm never going back to a terminal that doesn't support unicode…

1 ahead, 0 behind

5 ahead, 2 behind

Change git prompt colors to reflect LoC delta rileyjshaw/.supermac

I'm about to go through a series of refactors, and want a quick visual
indication of whether I'm adding or removing lines of code overall.
This change shows whether I'm red or green from the upstream branch
and from HEAD.

Annotated screenshot of what the new prompt colors represent

It's a bit slow. I'd like to speed it up if this works out.

Fix standalone Mac link rileyjshaw/SVG-to-GCode

URL encodings - they%27ll get ya.

Initial async release! rileyjshaw/node-timsort-async

Wow. This, big time:

Test Plan:

  • npm run lint
  • npm run test

Add Titania style rileyjshaw/LineMenuStyles

Tested on latest IE, Firefox, Chrome, Safari.



It's responsive, too!

Responsive screenshot

Rewrite extension and update version to 3.0.0 rileyjshaw/dark-theme-everywhere

Initially, this extension grabbed the content of a CSS file with an XMLHttpRequest and injected it into the bottom of the page. This had a few advantages:

  1. Text content could be easily processed and manipulated (admittedly, I wasn't using this for anything).
  2. Toggling styles was as easy as adding and removing a <style> element from <body>.
  3. In theory, this strategy would beat out almost every other style rule (some inline styles excepted). User !important rules used to override author !important rules, but Chrome no longer does user stylesheets. I figured an aggressively !important author stylesheet added at the very bottom of the page was pretty solid.

After some testing, I realized that !important styles from content_scripts injection (along with chrome.tabs.insertCSS) actually do take precedence over author stylesheets. Since 3) was the key consideration for my original decision, I re-wrote the extension to inject a stylesheet from content_scripts.

This change in architecture had pros and cons.

  + Improved chance of dark theme winning out over author styles.
  + Allowed styles to be applied before any other DOM is constructed, substantially reducing time-to-darkness.
  + Simplified the callbacks between background.js and client.js, reduced code, and made the entire extension easier to reason about.
  - With 1) above, I could've handled variant rules (eg. specificityHelper) with a few regular expressions. Locking into a static stylesheet added some huge copypastas, tripling the size of main.css.
  - Injected stylesheets aren't accessible once they've been added. Rather than "turning the styles off" like in 2), the best option was to add a toggle class to <body>.
  - Rewrites take time.

This commit was essentially a full rewrite, so I changed some smaller things while I was at it:

  • Styles now look for :not(.off) instead of .on. This makes the default dark and avoids a Blinding White Flash before the class changes.
  • Added id specificity helpers; it's discussed further in client.js:24.
  • Renamed some files for clarity.

I came across some unfortunate Chromium bugs while working on this, which caused me to dive into that project. It's huge! Lots of fun to poke around :)
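The :not(.off) pattern means toggling is just one class flip (a sketch of the idea; the real extension's background/client messaging glue is omitted, and a Set stands in for document.body.classList):

```javascript
// Styles match `body:not(.off)`, so the dark theme is on by default and
// stays on unless "off" is present. Returns the new on/off state.
function toggleDark(bodyClasses) {
	if (bodyClasses.has('off')) {
		bodyClasses.delete('off'); // dark theme back on
		return true;
	}
	bodyClasses.add('off'); // dark theme off
	return false;
}

const classes = new Set(); // stand-in for document.body.classList
toggleDark(classes); // → false: first toggle turns the dark theme off
toggleDark(classes); // → true: second toggle turns it back on
```

Because the default state needs no class at all, styles apply before any script runs, which is what avoids the Blinding White Flash.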

Remove numkey layout bindings rileyjshaw/.supermac

1, 2, 3, and 4 are useful for app-specific bindings and should thus be reserved.