Tagged: design-patterns

Graph Paper Lindenmayer Systems

  • Posted in dev

When I was a kid I learned about Lindenmayer systems and the fun tree patterns they create. I even followed a Twitter bot that generated and posted random pretty lsystem renders.

These are similar to fractals, except that unlike traditional fractals they don’t usually expand within a fixed space: they either continue to grow or loop back and intersect themselves.

Meanwhile, I spent most of my time in school and needed something to occupy my hands. I did a lot of notebook doodling, except people notice doodles. But I did have graph paper. And graph paper gives you enough structure to draw lsystems by hand. So I did, a lot.

graph paper photo

I probably went through a notebook every two years, and I can’t remember once ever drawing an actual graph. This was when I was a child in childish ways and hadn’t yet learned about dot paper. Oh, misspent youth....

There are a couple interesting things about this mathematically.

First, there’s really only one interesting space-filling pattern to draw on graph paper, which is forking off at two 45-degree angles. Anything that doesn’t fit on the coordinate grid (like incrementally decreasing line lengths) becomes irregular very quickly. The main way to play with different patterns is by setting different starting conditions. (Seems like it’d get boring quick, right? I spent the rest of the time trying to apply the Four color theorem in an aesthetically satisfying way. Still haven’t solved that one.)

Second, drawing on a coordinate grid means using Chebyshev geometry, where orthogonal and diagonal steps are considered the same length. So a circle (the set of points equally far from a center point) is a square. This is also sometimes called chessboard geometry, because it’s how pieces like the king move on a grid.

distances visualization1
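As a quick sketch (this isn’t from the tool itself, and the function name is my own), the Chebyshev distance is just the larger of the two axis deltas, which is why its “circles” come out square:

```javascript
// Chebyshev (chessboard) distance: a diagonal step costs the same as an
// orthogonal one, so distance is the larger of the two axis deltas.
function chebyshev(a, b) {
  return Math.max(Math.abs(a.x - b.x), Math.abs(a.y - b.y))
}

// All points at Chebyshev distance 2 from the origin form the perimeter
// of a 5x5 square: the "circle" of radius 2.
const circle = []
for (let x = -2; x <= 2; x++) {
  for (let y = -2; y <= 2; y++) {
    if (chebyshev({x, y}, {x: 0, y: 0}) === 2) circle.push([x, y])
  }
}
```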

Also, because I’m drawing this by hand, I can only keep track of visible tips. So if tips collide I count them as “resolved”, even if mathematically they’d pass through each other.

But I kept hitting the end of the graph paper.

I was bored in a work meeting this week and ended up doing the same thing, and it made me wonder about the properties of the pattern. Did it repeat? Did it reach points of symmetry? I thought it would be fun to whip up a tool to run my by-hand algorithm to see.

There are lots of web toys for lsystems but none are designed with these graph paper constraints. So just as an exercise I built my own from scratch, and it’s pretty good at drawing my high-school pattern.

generation animation

The answer to the pattern question is: it depends on the starting position! If you start with one line you get radial symmetry, but if you start with two lines facing away from each other (as in the animation) it’s more interesting. Neat.

Each generation is considered a state frame that keeps track of lines (for rendering), buds (for growing the next iteration), bloomed_buds (for bud collision) and squares (for line collision).
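A minimal sketch of what one of those frames might look like; the field names come from the description above, but the value shapes are my assumption:

```javascript
// Hypothetical shape of one generation's state frame. Field names are
// from the post; the exact value formats are assumptions.
const frame = {
  lines: [],        // segments drawn so far, for rendering
  buds: [],         // {x, y, r} growth points for the next iteration
  bloomed_buds: [], // buds that have already fired, for bud collision
  squares: [],      // occupied grid squares, for line collision
}
```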

For the Chebyshev geometry I just keep every line at a length of 1.0 and round the trigonometry results to the nearest grid point.

I didn’t actually implement a parser for formal Lindenmayer syntax; I just defined everything recursively in pure JavaScript, using a definition to encapsulate details about the pattern:

const definition = {
  at(bud) {
    const lines = []
    const buds = []

    // Fork into two branches, 45 degrees to either side
    // (angles are in units of pi radians, so 0.25 = 45 degrees)
    for (const angle of [bud.r - 0.25, bud.r + 0.25]) {
      const rads = Math.PI * angle
      const len = 1
      let xy = [bud.x + len * Math.sin(rads), bud.y + len * Math.cos(rads)]
      if (chebyshev) xy = xy.map(Math.round) // snap to the grid
      lines.push(xy)
      buds.push({x: xy[0], y: xy[1], r: angle})
    }
    return {lines, buds}
  }
}

Then rendering is done recursively:

function get_frame(generation) {
  if (generation < 0) throw Error(`Invalid generation ${generation}`)

  if (!frames[generation]) {
    // Build each frame from a copy of the previous one, memoized in `frames`
    const frame = structuredClone(get_frame(generation - 1))

    const old_buds = [...frame.buds]
    const new_buds = []
    const new_squares = []
    for (const i in old_buds) {
      // ...
    }
    frame.bloomed_buds = [...frame.bloomed_buds, ...old_buds]
    frame.squares = [...frame.squares, ...new_squares]
    frame.buds = prune_buds(new_buds, frame.bloomed_buds)

    frames[generation] = frame
  }

  return frames[generation]
}
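The post doesn’t show prune_buds, but based on the by-hand rule above (colliding tips count as “resolved”), it might look something like this sketch, where collided buds simply stop growing:

```javascript
// Hypothetical sketch of prune_buds. A new bud survives only if its tip
// square is unique among the new buds and hasn't already bloomed; any
// collision "resolves" the tips involved, so they stop growing.
function prune_buds(new_buds, bloomed_buds) {
  const bloomed = new Set(bloomed_buds.map(b => `${b.x},${b.y}`))
  const counts = new Map()
  for (const b of new_buds) {
    const key = `${b.x},${b.y}`
    counts.set(key, (counts.get(key) || 0) + 1)
  }
  return new_buds.filter(b => {
    const key = `${b.x},${b.y}`
    return counts.get(key) === 1 && !bloomed.has(key)
  })
}
```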

There’s a slight complication to doing this cheaply with lines: on paper, there are cases where lines intersect, and in those cases I stop at walls, even if it means drawing a partial line. Doing that kind of collision detection here in vector space would be very complicated! And without it the pattern loops back on itself and doesn’t look as good, and definitely doesn’t match my notebooks:

missing collision

The solution for this was very simple collision detection. The final program keeps track of which squares are “occupied” and breaks branches any time they’d collide.
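That occupied-square check could be sketched like this (names and bookkeeping are my own; the real tool’s may differ):

```javascript
// Sketch of square-based collision: before growing a segment into a grid
// square, check whether that square is already occupied.
const occupied = new Set()

function try_grow(from, to) {
  const key = `${to[0]},${to[1]}`
  if (occupied.has(key)) return null // collision: break the branch here
  occupied.add(key)
  return [from, to] // the segment to draw
}
```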

I uploaded it to my stash if you want to play with it. As an added constraint I wrote everything by hand in one single-file HTML page.

The one problem with the tool (besides being bare-bones) is that it renders the output as an SVG instead of a bitmap on a canvas. This “feels right”: they’re lines, so they might as well be vectors. But at high generation counts, browsers have performance issues rendering very large numbers of DOM elements.

I’d rewrite the renderer to draw to a canvas instead except I like the idea of the infinite SVG canvas specifically. The point was to escape the edges of the graph paper, remember?

But since I’d already abstracted line drawing (the line coordinates are stored in the state frames), it was easy to write an alternate renderer using the canvas API.

Leo reminded me that SVG paths don’t have to be continuous strokes: the d attribute takes a series of path commands, so this could all be done in one DOM node. The SVG renderer does that now, but I kept the canvas option.
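The single-node trick works because each disjoint segment can contribute its own M (moveto) and L (lineto) command to one path’s d attribute. A sketch (helper name and scale factor are my own):

```javascript
// Build one SVG path "d" string from many disjoint line segments.
// Each segment starts with its own M (moveto), so the strokes aren't
// connected to each other even though they share one <path> element.
function segments_to_path(lines, scale = 10) {
  return lines
    .map(([x1, y1, x2, y2]) =>
      `M ${x1 * scale} ${y1 * scale} L ${x2 * scale} ${y2 * scale}`)
    .join(" ")
}
```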

https://stash.giovanh.com/toys/lsys.html


  1. Illustration by Cmglee - Own work, CC BY-SA 4.0, Link ↩

A Hack is Not Enough

  • Posted in cyber

Recently we’ve seen sweeping attempts to censor the internet. The UK’s “Online Safety Act” imposes broad restrictions on speech and expression. It’s disguised as a child safety measure, but its true purpose is (avowedly!) intentional control over “services that have a significant influence over public discourse”. And similar trends threaten the US, especially as lawmakers race to more aggressively categorize more speech as broadly harmful.

A common response to these restrictions has been to dismiss them as unenforceable: that’s not how the internet works, governments are foolish for thinking they can do this, and you can just use a VPN to get around crude attempts at content blocking.

But this “just use a workaround” dismissal is a dangerous, reductive mistake. Even if you can easily defeat an attempt to impose a restriction right now, you can’t take that for granted.

Dismissing technical restrictions as unenforceable

There is a tendency, especially among technically competent people, to use the ability to work around a requirement as an excuse to avoid dealing with it. When there is a political push to enforce a particular pattern of behavior — discourage or ban something, or make something socially unacceptable — there is an instinct for clever people with workarounds to respond with “you can just use my workaround”.

I see this a lot, in a lot of different forms:

  • “Geographic restrictions don’t matter, just use a VPN.”
  • “Media preservation by the industry doesn’t matter, just use pirated copies.”
  • “The application removing this feature doesn’t matter, just use this tool to do it for you.”
  • “Don’t pay for this feature, you can just do it yourself for free.1”
  • “It’s ‘inevitable’ that people will use their technology as they please regardless of the EULA.”
  • “Issues with digital ownership? Doesn’t affect me, I just pirate.”

Verification on Bluesky is already perfect

  • Posted in tech

Bluesky has very quickly become a serious social media platform. This means it’s having to deal with all the problems social media platforms have to deal with, including impersonation. A lot of people flocked to Bluesky from Twitter, and so recreating something like Twitter’s verification system seems like a natural step.

But you don’t need to do that! Bluesky’s current verification system is actually very good and does what verification is supposed to do.

In 2022 I wrote a retrospective essay about the “verified account” design pattern on Twitter, which tried to preempt this conversation a little bit, but unfortunately got bogged down with Elon breaking Twitter verification. This piece will talk about a lot of the same ideas, but applied more specifically to Bluesky’s ecosystem.

Lies, Damned Lies, and Subscriptions

  • Posted in cyber

Everybody hates paying subscription fees. At this point most of us have figured out that recurring fees are miserable. Worse, they usually seem unfair and exploitative. We’re right about that much, but it’s worth sitting down and thinking through the details, because understanding the exceptions teaches us what the problem really is. And it isn’t just “paying people money means less money for me”; the problem is fundamental to what “payment” even is, and vitally important to understand.

Human Agency: Why Property is Good

or, “Gio is not a marxist, or if he is he’s a very bad one”

First: individual autonomy — our agency, our independence, and our right to make our own choices about our own lives — is threatened by the current digital ecosystem. Our tools are powered by software, controlled by software, and inseparable from their software, and so the companies that control that software have a degree of control over us proportional to how much of our lives relies on software. That’s an ever-increasing share.

Jinja2 as a Pico-8 Preprocessor

  • Posted in dev

Pico-8 needs constants

The pico-8 fantasy console runs a modified version of Lua that imposes limits on how large a cartridge can be. There is a maximum size in bytes, but also a maximum count of 8192 tokens. Tokens are defined in the manual as

The number of code tokens is shown at the bottom right. One program can have a maximum of 8192 tokens. Each token is a word (e.g. variable name) or operator. Pairs of brackets, and strings each count as 1 token. Commas, periods, LOCALs, semi-colons, ENDs, and comments are not counted.

The specifics of how exactly this is implemented are fairly esoteric and end up quickly limiting how much you can fit in a cart, so people have come up with techniques for minimizing the token count without changing a cart’s behaviour. (Some examples in the related reading.)

But, given these limitations on what is more or less analogous to the instruction count, it would be really handy to have constant variables, and here’s why:

-- 15 tokens (clear, expensive)
sfx_ding = 024
function on_score()
  sfx(sfx_ding)
end

function on_menu()
  sfx(sfx_ding)
end
-- 12 tokens (unclear, cheap)

function on_score()
  sfx(024)
end

function on_menu()
  sfx(024)
end

The first excerpt is a design pattern I use all the time. You’ll probably recognize it as the simplest possible implementation of an enum, using global variables. All pico-8’s data — sprites and sounds, and even builtins like colors — are keyed to numerical IDs, not names. If you want to draw a sprite, you can put it in the 001 “slot” and then make references to sprite 001 in your code, but if you want to name the sprite you have to do it yourself, like I do here with the sfx.

Using a constant as an enumerated value is good practice; it allows us to adjust implementation details later without breaking all the code (e.g. if you move an sfx track to a new ID, you just have to change one variable to update your code) and keeps code readable. On the right-hand side you have no idea what sound 024 was supposed to map to unless you go and play the sound, or label every sfx call yourself with a comment.

But pico-8 punishes you for that. That’s technically a variable assignment with three tokens (name, assignment, value), even though it can be entirely factored out. That means you incur the 3-token overhead every time you write clearer code. There needs to be a better way to optimize variables that are known to be constant.

What constants do and why they’re efficient in C

I’m going to start by looking at how C handles constants, because C sorta has them and Lua doesn’t at all. The “sorta” part in “C sorta has them” is really important: the C language doesn’t exactly support constants either, and C’s trick is how I do the same thing for pico-8.

In pico-8 what we’re trying to optimize here is the token count, while in C it’s the instruction count, but it’s the same principle. (Thinking out loud, a case could be made that assembly instructions are just a kind of token.) So how does C do it?

The Failure of Account Verification

  • Posted in cyber

The “blue check” — a silly colloquialism for an icon that isn’t actually blue for the at least 50% of users on dark mode — has become a core aspect of the Twitter experience. It’s caught on in other places too; YouTube and Twitch have both borrowed elements from it. It seems like it should be simple. It’s a binary badge; some users have it and others don’t. And the users who have it are designated as something.

In reality the whole system is massively confused. The first problem is that “something”: it’s fundamentally unclear what the significance of verification is. What does it mean? What are the criteria for getting it? It’s totally opaque who actually makes the decision and what that process looks like. And what does “the algorithm” think about it; what effects does it actually have on your account’s discoverability?

This mess is due to a number of fundamental issues, but the biggest one is Twitter’s overloading the symbol with many conflicting meanings, resulting in a complete failure to convey anything useful.

xkcd twitter_verification

History of Twitter verification

Twitter first introduced verification in 2009, when baseball man Tony La Russa sued Twitter for letting someone set up a parody account using his name. It was a frivolous lawsuit by a frivolous man who has since decided he’s happy using Twitter to market himself, but Twitter used the attention to announce their own approach to combating impersonation on Twitter: verified accounts.