how this website works

As Chief of Information Technology in the arts sector, I've written many web-based Content Management Systems from scratch, with databases, control panels, and WYSIWYG forms. When creating this website for myself, I didn't need any of that, having the skills to edit code directly. Also, I'm very lazy. So this website is just PHP. No frameworks.


Content is defined as objects/pages with random 64-bit IDs, reflected in their hex URLs. A request first hits Apache's mod_rewrite, which wrangles the URL into its final form: forcing HTTPS, stripping a superfluous 'www.' and trailing slashes. A number of manual URL redirect entries ("humanly named pages") are also defined there, so I can share specific pages without remembering their random IDs. Lastly it checks whether the page has already been rendered and can be served from the cache, without invoking PHP at all.
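As a rough sketch of what such rules could look like (the host name, hex ID, and file paths here are invented, not the site's actual configuration):

```apache
RewriteEngine On

# Force HTTPS and strip 'www.'
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]

# Strip trailing slashes (except for real directories)
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]

# A "humanly named page" pointing at its random hex ID
RewriteRule ^about$ /a1b2c3d4e5f60718 [R=302,L]

# Serve an already-rendered page straight from the cache, skipping PHP
RewriteCond %{DOCUMENT_ROOT}/cache/$1.html -f
RewriteRule ^([0-9a-f]+)$ /cache/$1.html [L]

# Everything else goes to the router
RewriteRule ^ /router.php [L]
```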

Failing that, control transfers to the first PHP file, the router, which begins by importing the little toolbox of helpers I've accumulated over the years. A configurable object is set as the "front page" should the query lack a target. Next, some special logic handles serving images and their auto-generated thumbnails, along with error pages and such. There is one goto in the entire project, here. Once we're left with the ID of a dynamic page to render, the router imports the template.
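In outline, the routing decision might look something like this; every name here (the constants, the helper functions, the query parameter) is my guess for illustration, not the actual code:

```php
<?php
// router.php -- a sketch with hypothetical names, not the real router.

const FRONT_PAGE_ID = '0123456789abcdef'; // configurable "front page"
const ERROR_404_ID  = 'deadbeefdeadbeef';

// Stand-ins for toolbox helpers:
function is_image_request(string $id): bool { return str_ends_with($id, '.jpg'); }
function page_exists(string $id): bool { return is_file("pages/$id.php"); }

function route(array $query): string {
    // Fall back to the front page when the query lacks a target.
    $id = $query['p'] ?? FRONT_PAGE_ID;

    if (is_image_request($id)) {
        return "image:$id";           // serve the image or its auto-thumbnail
    }
    if (!page_exists($id)) {
        $id = ERROR_404_ID;           // render the error page instead
    }
    return "page:$id";                // the template takes it from here
}
```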


Besides the shared HTML, CSS, and JS, the template performs a few common operations on every page, such as finding which tags it has, including the special 'explore' and 'next' functionalities. I wanted linear sequencing of some pages, non-hierarchical organization of others, and also a kind of introductory "hey, click here to see cool stuff!" button. All of this is implemented with a tagging system. Each page is tagged with at most one sequence tag. Since the 'explore' function is a separate button, it doesn't clash with the sequence.

The tag index is defined as a PHP file with a data array inside. When considering which serialization format to use, I realized the simplest approach is to use PHP's own parser. Thus, to load the data I simply require() the file and access the named array.
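As a sketch (the file name, array name, and entries are invented for illustration):

```php
<?php
// tags.php -- the tag index is plain PHP, parsed by PHP itself:
//   $tags = [
//     'paintings' => ['a1b2c3d4e5f60718', '99aabbccddeeff00'],
//     'explore'   => ['a1b2c3d4e5f60718'],
//   ];
// Loading it is then a single require() away:
function load_tags(string $file = 'tags.php'): array {
    require $file;    // defines $tags locally in this function's scope
    return $tags;
}
```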

For ease of development, I inject the content hash of the style sheet into its import URL. This forces clients to re-download it whenever it changes.
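The idea can be sketched like this; the file name, hash function, and hash length are my assumptions:

```php
<?php
// Cache-busting sketch: the query string changes whenever the file does,
// so browsers re-fetch the stylesheet only after an actual edit.
function stylesheet_tag(string $file): string {
    $hash = substr(md5_file($file), 0, 8);
    return "<link rel=\"stylesheet\" href=\"$file?v=$hash\">";
}
```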


On the other hand, I wanted the content pages to be able to execute arbitrary code, so they work differently. It's a hack achieved with output buffering and variable scoping. Each page consists of metadata and content. When generating page listings, I crawl all pages but want only the metadata. Each require()d page halts its own execution after defining its metadata into variables, which are then accessed via get_defined_vars(). When rendering content, the halting function is redefined as a no-op, so the page falls through its metadata section and starts outputting its content. That output is captured with buffering and inserted into the correct spot in the template, which is also populated with the metadata.
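One way to realize this hack (all names are mine, and I'm guessing an exception serves as the halt; the actual implementation may differ):

```php
<?php
// Sketch with invented names. A page file looks like:
//   <?php $title = 'Example'; done(); ? >BODY...

class MetadataOnly extends Exception {}

$GLOBALS['rendering'] = false;

// Every page calls done() after defining its metadata variables.
function done(): void {
    if (!$GLOBALS['rendering']) {
        throw new MetadataOnly();      // halt: metadata is complete
    }
    // while rendering, done() is a no-op and the page falls through
}

// Crawl a page for its metadata only.
function read_metadata(string $file): array {
    ob_start();
    try {
        require $file;                 // stops at done()
    } catch (MetadataOnly $e) {
        // expected: the metadata section finished
    } finally {
        ob_end_clean();                // discard any stray output
    }
    $vars = get_defined_vars();        // the page's variables, by scoping
    unset($vars['file'], $vars['e']);
    return $vars;
}

// Render a page's content into a string for the template.
function render_content(string $file): string {
    $GLOBALS['rendering'] = true;      // done() no longer halts
    ob_start();
    require $file;
    $GLOBALS['rendering'] = false;
    return ob_get_clean();             // captured content output
}
```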

Most pages are just HTML. Some have PHP, like this page which generates the site's Atom feed. Caching can be disabled per page for always-live dynamic ones, though most are cached.


The front end I hand-crafted from design principles, starting with an HTML-only page that renders content semantically. Then I added the style sheet, which adapts to screen size and orientation, always presenting the menu buttons on the side, along the longer axis of the screen, leaving maximal space for actual content on the shorter one. The page title scrolls with the view, columns go side by side only when there's enough space, and images scale to fit the screen one way or another. The layout is simple, with a focus on main content at all times. Alongside and in between I can present imagery. Main content shifts on top of side content when space is tight.
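The orientation-dependent layout boils down to something like this sketch (not the actual stylesheet; selectors and values are illustrative):

```css
/* Menu strip taken out of the longer screen axis, so content keeps
   the full extent of the shorter one. */
body { display: flex; flex-direction: row; }  /* landscape: menu beside content */
main { flex: 1; }                             /* content gets the remaining space */
main img { max-width: 100%; height: auto; }   /* images scale to fit */

@media (orientation: portrait) {
    body { flex-direction: column; }          /* portrait: menu above content */
}
```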


Typography probably looks odd at first glance, but I claim there's method to my madness: when your eyes reach the end of a line of text, they have to jump to the beginning of the next one. Left-justified text makes every line start at the same vertical edge, so your eyes sometimes miss the jump and begin reading the wrong line. This is why you sometimes find yourself reading the same line twice by accident. By right-justifying instead, the left edge of the text is made varying, and your eyes have an easier time picking the correct next line.

For those articles that call for it, I've meticulously glued words together with &nbsp; to ensure sensible line-breaking during text flow. While invisible, this technique greatly improves readability. Tightly related words, such as adjectives with their nouns, articles, names, etc., are glued. This page, for example, is largely untreated, and you can see its lines breaking in totally random places.
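For example (an illustration of mine, not a line from the site):

```html
<!-- 'da Vinci' and 'a painting' can never be split across lines: -->
<p>Leonardo da&nbsp;Vinci made a&nbsp;painting.</p>
```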


There is a tiny bit of progressive-enhancement JavaScript, for example in the gallery thumbnails: for a visitor with JS enabled, the images fill the screen on tap/click and go away with another. Careful consideration was given to not breaking default browser behavior: middle-clicking a thumbnail still opens the image in a new tab, and the current URL and page history are updated accurately in all cases.


As a tech-aware artist I was faced with the dilemma of image compression. Obviously I want to display my work in high quality. Equally obviously, I can't just serve the originals, or the site would load awfully slowly. The two options were downscaled resolution at high quality, or full resolution at reduced quality. In preparation for 4K screens I chose the latter, aiming for about 1 megabyte per work.


I develop on XAMPP localhost and upload changes via good old FTP. The only difference between the local and remote instances is a config file. Sometimes it's a bit of work to keep them in sync, but not enough to automate deployment.