Monday, 23 March 2026

Building a Spectrophotometer

In the autumn of 2025 I attempted to build a spectrophotometer by myself.

A spectrophotometer is a scientific instrument that measures the amount of electromagnetic radiation, or light as it is commonly known, that is absorbed by a sample. As different molecular bonds absorb light at different wavelengths, the absorption of light says something about the sample's molecular composition. The most practical use for this is determining the quantity of a known substance in a sample.

In order to measure the absorption accurately, the light that passes through a sample should ideally consist of only a single wavelength. This is a major difficulty in the design of the instrument, which can be overcome by something called a monochromator, of which the Czerny-Turner monochromator is the most common design.
In a Czerny-Turner monochromator, light from a white light source is aimed at a concave mirror, which sends the light towards a movable grating that diffracts the light and breaks it up into individual wavelengths. These are then focussed by another concave mirror and aimed at a narrow slit, which in theory only lets one wavelength of light through. This light then passes through the sample and the reduction in intensity of the light is measured:

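For the curious, the wavelength separation at the grating is described by the standard grating equation, d·sin(θ) = m·λ for light arriving at normal incidence on a grating with pitch d. As a rough sketch in Python (assuming normal incidence and a track pitch of 1.6 µm, the nominal value for a CD; the numbers are illustrative, not measurements from my instrument):

```python
import math

def diffraction_angle_deg(wavelength_nm, pitch_um=1.6, order=1):
    """Diffraction angle for light at normal incidence on a grating,
    from d * sin(theta) = m * lambda."""
    d = pitch_um * 1000  # grating pitch in nm
    s = order * wavelength_nm / d
    if abs(s) > 1:
        return None  # this diffraction order does not exist here
    return math.degrees(math.asin(s))

# Violet (400 nm) and red (700 nm) leave the grating about 11 degrees
# apart in the first order, which is what lets a slit further along
# the light path select a narrow band of wavelengths.
for wl in (400, 530, 700):
    print(wl, round(diffraction_angle_deg(wl), 1))
```

The angular spread is modest, which is one reason the distances between grating, mirror and slit matter so much in practice.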
So while the principle of a spectrophotometer is simple, manufacturing its parts to analytical standards requires high precision, and therefore these machines are expensive. Brand new instruments cost several thousand euros, and even decades-old equipment can still fetch several hundred euros. For example, this beauty from the 1970s is listed for 500 euros today:


Because its principles are relatively straightforward and can be observed with the naked eye, the spectrophotometer is often an early introduction to scientific equipment in an educational context. Indeed, many teaching kits are commercially available, making it possible to see the inner workings of the instrument and freely manipulate the individual components. However, such teaching kits still aspire to the same level of quality as commercial equipment, so the prices remain high, often exceeding a thousand euros for a basic model.

Due to the accessible nature of the machine's workings, there have also been many published instances of spectrophotometers built out of simple(r) materials and on a small(er) budget. Examples include Peiera et al. (2019), Kovarik et al. (2020), Shin et al. (2022), Osterheider et al. (2022), and Poh et al. (2021).
However, I noticed a pattern in these designs. They tend to be either limited in functionality, restricted to light of a single wavelength, or applicable only to a small number of known analytes.
The more general-purpose designs I've encountered, on the other hand, tend to incorporate at least one 'cheap' component that is nevertheless a considerable expense, such as a professional-grade grating mirror, access to a 3D printer, or a smartphone equipped with a camera. While such items are somewhat commonplace, if one has to purchase one specifically for the project, it quickly drives the cost up to 100+ euros.

With this in mind I set out to make a spectrophotometer of my own design, based on the Czerny-Turner monochromator. My first goal was to make a functional general-purpose spectrophotometer; my second was simply to spend as little money as possible.

In order to achieve this I turned my attention to the cheapest materials I could think of that could perform the required functions in my design.
For the monochromator I therefore used a rechargeable LED flashlight as the light source, priced at seven euros. Its light was reflected by two plastic make-up mirrors and broken up by a CD, for a total of another seven euros. The housing was made of recycled wood, with the slit being a cut in a thin piece of veneer, attached with some tape. The wood was scrap material from other projects, but let's value it at a generous five euros.
The detector consists of an Arduino board with a € 0,40 phototransistor and a two-line LCD screen. Together with a breadboard and some other bits and pieces this came to a total of € 15,05.

The total cost of my spectrophotometer, if one had to build it from scratch, thus comes to 34 euros and five cents. For this money you get a design that is compatible with the standard (disposable) cuvettes used throughout the industry:

And an overhead view of the instrument in operation:

Of course the instrument I built is not plug and play and there are a few things I learned about its limitations.
The light yield is low due to the low quality of the mirrors, which lack a uniform focal point. Diffusion is therefore high towards both ends of the visible spectrum, making the instrument most effective in the green to orange colour range.
I also found that a phototransistor was much more sensitive in this setup than a photoresistor, and that it had a more consistent output across the whole spectrum, whereas the sensitivity of the photoresistor I tried was greatly reduced above ~600 nm.
The slit in the veneer is also still somewhat broad, even though it was cut with a sharp scalpel. One can therefore only measure the absorption over a broad-ish range of about 50 nm instead of at a single wavelength.

In terms of its practical use, the calibration calculations have to be performed by hand. First the maximum absorption of the sample is determined, before a blank and a series of standards are measured against that point of maximum absorption.
As there is no (reliable) way to record this specific maximum, repeatability is a possible issue: the measurement cannot be repeated exactly at known wavelengths.
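For reference, the hand calculations mentioned above boil down to the Beer-Lambert relation, A = -log10(I/I0), followed by a linear fit through the standards. A minimal sketch in Python, with made-up intensity readings rather than values from my instrument:

```python
import math

def absorbance(sample_intensity, blank_intensity):
    """Beer-Lambert absorbance from raw detector readings:
    A = -log10(I_sample / I_blank)."""
    return -math.log10(sample_intensity / blank_intensity)

# Hypothetical readings at the wavelength of maximum absorption.
blank = 1000.0                                     # intensity through the blank
standards = {1.0: 800.0, 2.0: 640.0, 4.0: 410.0}   # concentration (mg/mL) -> intensity

# Absorbance is (ideally) linear in concentration, so a least-squares
# line through the standards gives a calibration curve.
xs = list(standards)
ys = [absorbance(i, blank) for i in standards.values()]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
intercept = (sum(ys) - slope * sum(xs)) / n

# An unknown sample reading then maps back to a concentration:
unknown_A = absorbance(700.0, blank)
concentration = (unknown_A - intercept) / slope
print(round(concentration, 2), "mg/mL")
```

This is exactly the arithmetic one does on paper; the instrument itself only supplies the intensity readings.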

Nevertheless, I found the instrument to be accurate and reliable with solute concentrations as low as 1 mg/mL. This is much less sensitive than commercial models, which can often reliably detect concentrations of 1 mg/L or even lower, but it's perfectly usable for my personal applications.

By not attempting to adhere to modern analytical standards, I have thus been able to build a functional general purpose spectrophotometer compatible with standard single-use cuvettes for about the same price as a single package of these cuvettes.

Saturday, 28 February 2026

Ab in die Ecke und schäm dich

It's physically impossible to see all the exhibitions in today's global art world. Much of what happens at exhibition venues is therefore photographically documented and distributed on websites like contemporaryartdaily and contemporaryartlibrary.org, giving people the impression that they have seen all there is to see and are in the know. I've personally disliked the widespread prevalence of this practice for over a decade now, because the installation views used on these websites tend to give an illusion of overview, the art equivalent of an omniscient narrator, where works are depicted in relations that are impossible for any physical visitor to encounter.
In this method it is forgotten, or ignored, that a visitor to an exhibition space is a physical entity, and one that takes the world in with their eyes. Eyes that are different from a camera lens because they have a wide field of view, but a narrow and continuously shifting focus. The body has ears, it has a nose, it has legs that walk and arms that reach. 
And so a picture hung at a height of 110 cm appears completely different from a picture hung at a height of 155 cm. Navigating through a room that is 4 by 5 metres is very different from navigating a room that is 16 by 20 metres. Yet the prevailing standard of documentation places the camera at the height of the pictures and depicts the room from corner to corner, making such marked differences appear identical.

A recent example of this that I encountered was the show Day for Night by João Maria Gusmão at Sies+Höke. I had seen installation views of the exhibition on the gallery's website, showing the works neatly arranged in a single line and in a clean space, like one is 'supposed' to hang a proper gallery show:

When I visited the exhibition, however, the installation of the works struck me as peculiar, with all the works hung very low. The tops of the works sat slightly below shoulder height for me, being 186 cm tall. I therefore took the following photo with the camera at my eye level:

This snapshot is much closer to the reality of my experience in the exhibition. The works no longer appear as grand statements like in the gallery's documentation, but rather as small and humane hand-made experiments, full of flaws and imperfections.
And this is just one example of when 'good and proper' documentation leads to a distorted view of what the exhibition factually is. Such practices undermine the intellectual honesty of art, and so debase the entirety of art as a noble pursuit.

At the same time, this practice of showing an, at times physically impossible, overview has influenced how artists and curators alike install their exhibitions. When photographing exhibitions, the camera is often placed in the corners of the room in order to obtain such an overview. Consequently, I've seen many curators, and artists, 'instinctively' walk to the corners of a space while installing exhibitions.
There is little rhyme or reason to this practice, as I've never seen any visitor to any exhibition voluntarily stand in the corner like a punished child, so what the exhibition looks like from that vantage point should be of little concern.

Yet this is a common occurrence, to the detriment of all exhibition making. I personally encountered a very clear example of this practice in the 2023 exhibition Channeling at the MMK in Frankfurt am Main.
In one room of this exhibition, the very wide spacing of the works made no sense to me as a visitor walking in between the works. It was then that I realised that the curators must have only considered the show from the corners of the room. I proceeded to take photographs from all four corners, and indeed the placement of the works appeared to make more sense from there.
I later compared these photographs to the official documentation found on the MMK's website, and this confirmed my suspicions. In the following images the official documentation is overlaid on my own documentation from two opposite corners of the room:


As you can see, both of the official documentation photographs are simply two narrow views of what can be seen from the corners of the space. The compression that's especially present in the first photograph also shows that their photographs were taken with a short telephoto lens. Thus the experience the curators were apparently aiming towards in the exhibition was for the visitor to stand in the corner of the space and look at the works with a pair of binoculars...
The wall I was leaning against was also empty in a bad way. This can be clearly seen in a photograph taken from where the security guard was standing in the first photograph:


Instead of a short telephoto lens, the official documentation is now all of a sudden shot with a wide angle lens with a larger field of view than the human eye. It's physically impossible to see both works simultaneously like in the MMK's documentation. Their photograph therefore presents a view of the exhibition that no visitor to the space has factually experienced.
But of course, in today's art world the 'proof' of the documentation is more important than any physical reality, so one of the clearly incompetent curators of the exhibition has since moved on to become the chief curator at the MUMOK in Vienna.

I personally believe that documentation of an exhibition should attempt to capture the experience of walking through the exhibition as accurately as possible.
Unfortunately, few institutions attempt to adhere to reality, preferring the polished and standardised appearance that provides them with greater opportunities founded on ever greater falsehoods.

Friday, 6 February 2026

Tuesday, 30 December 2025

Middlemen

In both art and science, the products of (small groups of) individuals are disseminated to the world by other companies. In the world of art, these companies are the galleries representing artists. In the world of science, they are the publishers and their journals.
In both these situations there is a clear distinction between those who produce the goods and those who distribute them to a wider audience. The presence of such middlemen is common in many industries, but an uncommon aspect found in both art and science is that the financial benefits to the intermediary are far greater than those of the producer. Scientific publishing is now a multi-billion euro industry and the largest of the art galleries have turnovers in the range of tens of millions of euros.
The curious similarities between the two fields are the result of imperfect information on the consumer side, combined with some leftovers from an older world where the financial risks were differently distributed and legally arranged. 

For both scientific publishing and art galleries the most valuable asset is the firm's reputation.
For example, the price of an artwork is linked quite directly to the standing of the gallery it is shown in. Similarly, a scientific discovery is generally considered more impactful if it's published in a journal of significance. It's therefore imperative for both galleries and scientific journals to become, and remain, reputable. It's also easy to see that for both fields there is simultaneously no inherent and necessary connection between the quality of the work and the social standing of the middleman. The intermediary does not change any intrinsic property of the final product. That the perceived quality of the intermediary is nevertheless seen as a useful indication of the quality of the good is due to a characteristic that economists call imperfect information.
In both art and science, there is no information about the quality of a good that is both reliable and readily available. The causes of this imperfect information are different in each field, but over the course of the last century they have led to a similar outcome where the intermediaries have a disproportionate influence on both the kind of goods that get produced as well as which consumers have access to it.

Any consumer needs information about a good in order to make a decision about what is worth spending their money on. They can either have full access to all necessary information, which is called perfect information, or limited access to one or more characteristics of the good, which is called imperfect information.
In both art and science, information about the quality of a good is difficult to ascertain for a large number of interested buyers. Quality in the arts is next to impossible to quantify and subject to changing cultural perceptions. And while scientific merit can be checked in principle, this requires an impossibly large amount of time, money and other resources, so in reality it is unfeasible for any one party to make an objective judgement based on their own experimental knowledge about the quality of all articles published in all journals.
Hence, for both art and science, there is a lot of effort that goes into convincing a potential customer of the value of the good that is being sold. As the goods themselves don't provide accurate clues to their genuine value, this is done through less direct means that convey a perception of longevity. Such (signs of) longevity can ostensibly only be reached by consistently providing quality goods.

Galleries and artists today try to provide credible signals of value by demonstrating a long-term commitment to each other. Until the early 20th century, this meant that dealers directly bought (nearly) all of an artist's output, thereby putting their money where their mouth is. If nothing else, this would at least demonstrate to a potential client that the dealer had a strong belief in the artist. And the dealer is only able to acquire those works today if they have made sound financial decisions in the past. These days the commitment is weaker, expressed through 'representation' of the artist by the gallery. The financial capabilities of the gallery are generally demonstrated through large, and mostly empty, spaces in expensive buildings in desirable locations, as well as participation in ludicrously priced art fairs.
I've written elsewhere on this blog how this shift is likely partially caused by changes in anti-trust legislation in the Western world in the first half of the 20th century, so I won't delve any further into this subject here.

The publishing of scientific works likewise underwent significant changes in the last two centuries.
A scientist is usually employed by a university or some other institution, and when they've made a discovery, they write it up to make it known to others. It is of course difficult to reach a broad audience, even in the age of the internet, and this is one thing a publisher can help with. A publisher possesses ways to reach an audience that no single scientist has. A publisher also has access to infrastructure. Although these days more and more scientific publishing is done digitally, physical printing and distribution of materials has historically been a venture with large upfront costs, combined with specialised knowledge and equipment. These upfront costs carry significant financial risks, which can only be borne by a large company that is able to spread such risk over multiple ventures.

It's also a well-known fact that most scientific literature has a very small readership. Current estimates of the audience of the average journal article range from single to triple digits.
Yet at the same time, there is a great number of scientific articles that are published every day. With such fragmented readership, there is little possibility for scientific texts to gain widespread attention in the same way a newspaper article or a viral video might. Therefore, to a broad audience it is virtually unknown what the value is of any given article relative to all the other articles that are available.
As already stated, some of this uncertainty is remedied through the reputation of the journal the article is published in. This reputation is mostly based on the reception of the works the journal has published in the past, as well as the academic standing of its current editor(s). There have been attempts at quantifying this reputation through metrics like a journal's impact factor, which essentially measures how often articles from a journal have been cited by other scientists. But as Goodhart's law states, any measure that becomes a target ceases to be a good measure, so such undertakings merely repackage the problem instead of solving it.
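For the record, the classic two-year impact factor is simple arithmetic: the citations a journal received this year to articles it published in the previous two years, divided by the number of citable items it published in those two years. A toy illustration with invented numbers:

```python
def impact_factor(citations_to_prev_two_years, items_prev_two_years):
    """A journal's impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / items_prev_two_years

# A hypothetical journal that published 150 articles over the two
# previous years, which gathered 600 citations this year:
print(impact_factor(600, 150))
```

The simplicity of the measure is precisely what makes it so easy to game.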

Both industries thus have a small customer base and these customers ought to be sceptical of the goods they provide and the high prices they ask for them.
So how do these middlemen leverage their position to create profits for themselves?
In the arts it is a simple question of gallerists charging very high commissions for their work, so that a handful of sales can provide an adequate amount of turnover, especially when their risk is spread over a number of artists.
In scientific publishing, exorbitant profit margins only arrived around the turn of the millennium, and to see why this is the case requires a short history lesson on copyright law, and in particular how such laws were implemented in the United States of America.

The foundation of today's copyright legislation was laid at the Berne Convention in 1886. This type of copyright is based on an idea of author's rights, where the creator of intellectual property is also automatically the owner of intangible rights relating to that work. These reproduction rights could then be licensed to a third party, such as a publisher. This can happen in different ways, but it must be noted that a perpetual exclusive license to reproduce the work is an option, even when the author retains the copyright in such a case.

This is in contrast with the common law idea of copyright, which is much more focussed on the economic right to publish and distribute. The United States, whose legal system is based on common law, was thus late in incorporating the principles of the Berne Convention. In the early 20th century, automatic copyright for authors did not exist there; instead, a publisher could register the publication of a work at the copyright office to obtain its copyright.
It's a bit of an oversimplification, but it wasn't until the Copyright Act of 1976 that the intellectual property laws of the United States became more closely aligned with those of most other countries.

This change quite directly led scientific publishers to mandate that their authors sign over their copyrights to the publisher, instead of licensing their papers. At best this can be seen as a good-natured attempt to retain the best publishing standards possible, but it's much more likely that the decision was aimed at retaining control over the substantial capital the publishers had amassed up until that time.
For example, in the 1966 edition of the Handbook for Authors of Papers in the Journals of the American Chemical Society, the section on 'Liability and Copying Rights' is only half a page long and simply states 'The Society owns the copyright for any paper it publishes'. This was true under the federal copyright laws of the USA at the time, which required registration at the copyright office.
Interestingly, the section on 'Liability and Copying Rights' of the 1978 edition of the Handbook for Authors of Papers in the Journals of the American Chemical Society was nearly twice as long as the previous edition. It now contained the following phrase: 'Under the terms of the Federal copyright law, effective January 1, 1978, scientific publishers who wish to obtain copyright ownership of papers in their journals are required specifically to obtain such ownership from the author of each paper. Since it is necessary for the widest possible dissemination of scientific knowledge that the society own the copyright, authors are required to transfer copyright ownership before publication of their manuscript.'

This last sentence is simply not true. A perpetual, even exclusive and unrestricted, license to publish poses no practical objections. However, such a license would still leave ultimate ownership in the hands of the author, so that the publisher could not license the work out to third parties. Transfer of copyright ownership is thus an issue of control of the work beyond any direct publishing efforts in the publisher's own journals.

However, it might have been vital for publishing companies to protect those interests. In the 1970s, publishing was still a complex and costly business, with large upfront costs and little or no guarantee that anybody would be interested in the final product. The industry had a high risk of failure, and the small number of scientific publishers that survived did so because they originally published books that turned out to be of particular significance and relevance to other scientists. Unlike most of their publications, these tomes saw several reprints and made a healthy profit, which could offset the cost of the many failures.
It is, however, impossible to predict which publications will become a hit, and if the publisher didn't own the copyright, such a reprint would most likely have had to be renegotiated with the author. By then the author has of course seen how well their book is selling, so they'll likely want a bigger cut for themselves, or could even take the reprint to a different publisher altogether. This was debilitatingly risky for a publisher at the time, and so the transfer of copyright ownership may have been a reasonable request in the 1970s.

This all changed towards the 2000s with the advent of widespread internet access. Through more than a century of publishing and consolidation, a handful of scientific publishers came into possession of enormous archives, and because of their insistence on copyright ownership, they have full control over them. Through digitisation these materials are now also easily searchable, and through the internet they can be distributed at negligible cost. In other words, the publishers' material capital is now more valuable than ever, while their operating costs have fallen dramatically.
As a result their profits have risen to extraordinary heights. To illustrate: two of the ten highest-paid CEOs in the Netherlands head scientific publishing companies.

In summary, the presence of middlemen is necessary in both art and science to create credible signals about the quality of goods. And in both these markets, the middlemen have understood the necessity of their presence and found ways to leverage that power into great profits by essentially exploiting the weak negotiating position of their suppliers and in some cases those of their clients.
Such predatory practices are much lamented in both industries, yet I'm unaware of any proposed solution that could remedy the problem. Many such initiatives focus on the (financial) inequality between the artist/scientist and the gallery/publisher, but I believe a solution can only be found in making information about the quality of goods readily available to end users.

As a final remark, it must be noted that book publishing in the arts is a market that functions remarkably well, considering the difficulties that exist in scientific publishing and the sale of artworks. In art publishing, there is a healthy market of buyers and sellers, while risks and profits are usually shared on reasonable terms between artists and publishers.
The reason for this is simply that an art book can quite literally be judged by its cover. When selecting which art book to buy, an interested buyer usually can find the books that appeal to them by considering the design of their covers. Art books also retain their value rather well, so that even if a mistake is made, a buyer can still resell the book at only a minimal loss. Therefore information on the quality of a good is widely available, while the cost of misinformation is marginal.
This is the exact opposite of scientific publishing and the market for artworks, where credible information is hard-won and the costs of getting it wrong can be extremely high.

Monday, 29 December 2025

'Use your imagination to find a way into level 7'

When a friend of mine was studying 19th-century literature, she and her classmates complained to their professor that there were too many books on the compulsory reading list. They argued that they wouldn't have the time to read multiple 400+ page books in just a few weeks. The professor simply replied that if they lived as people did in the period the books were written, without TV, without radio, without computers and phones, they would find it easy to consume that much literature.
The takeaway was thus that in order to appreciate something created in a certain time, one also needs to understand the broader context of its creation and reception.

I was reminded of this anecdote when I recently started playing the original The Legend of Zelda. Designed by Shigeru Miyamoto in 1986 for the Famicom, or Nintendo Entertainment System, the game is famous for its open world exploration that helped to shape the way games are made today.

I had already acquired the game about ten years ago, yet I had quickly given up on playing it then. I couldn't get a grip on the game and found it too frustrating to figure out what to do and where to go. My mistake was that I tried to play the 40-year-old game the way one would approach a modern game: by going in blind, without any prior knowledge, without even reading the instruction manual. Unsurprisingly, the abundant limitations of 1980s technology were unable to properly impart to me the subtleties of the game's design.

So, when I decided to retry the game, I aspired to adhere as closely as possible to the experience and expectations someone would have had when the game first came out.
The internet wasn't yet a presence in people's homes, but printed media were a vital and abundant source of information. I therefore sourced a copy of the game's printed instruction manual and read it thoroughly. Magazines like Nintendo Fun Club News also contained maps of the game that showed the location of many (hidden) features, as well as tips on how to approach traversing its landscape. Miyamoto had also meant for kids to collaborate on beating the game by exchanging information, so I found it acceptable to consult a modern internet guide for the beginning of my journey. This meant that I wouldn't spend a long time finding the vital items that aid the early parts, and I could focus my energy on exploring the bulk of the game by myself.

In this manner, I found the game surprisingly forgiving and accommodating to the player. The present-day consensus is that this is a difficult game, but that seems to be principally an issue of knowledge. Going in head first is not always the answer, yet learning, or developing, strategies to overcome obstacles is nevertheless easier than in some later games.

As for the game itself, you play as a boy named Link, trying to rescue Princess Zelda and destroy the evil forces of the antagonist Ganon. You do this by traversing the world, discovering useful items and weapons in underground labyrinths, and defeating the villains you find there.
Practically, this means that the game is broken up into an overworld and nine 'levels', which in theory can be played in any order.

The overworld, the entrance to a level, and a level, or 'dungeon'.

The manual, however, mentions that 'if Link does not fight in the right Level order, he might meet a miserable end at the end of the labyrinth.' A player is thus warned from the start.
The locations of the first two levels, or dungeons as they are now known, are shown in the instruction manual. The third dungeon is easily found, or stumbled upon, by going left instead of right at the starting screen.
In the third dungeon the player also obtains the raft, which according to the instruction manual can be used to 'float across seas and lakes when Link launches this from a dock'. There are only two docks in the game and the closest one to the third dungeon leads directly to the entrance of the fourth.

The first four dungeons are thus straightforward to find, and complete, with only the information found in the instruction manual. It was only at the fifth dungeon that I encountered my first difficulty that had me seek further advice from a guide.
In the fourth dungeon one finds a clue telling you to 'Walk into the waterfall'. After some wandering around the overworld, I found the only waterfall in the game and walked into it. There I was greeted by an old woman who gave me another clue: 'Go up, up, the mountain ahead'. As I had arrived from the right, I proceeded left, and there I found no path that led me up any mountain.
Confused, I discovered that the shortest route to this point would have had me enter from the left, and that the road up the mountain lay on the right, the place where I had come from. If I were a kid in the 1980s, I would have had more spare time and would probably have figured this out with a bit more trial and error, or else I would have seen the position of the fifth dungeon on a map found in the third issue of Nintendo Fun Club News...

Dungeon number six is once again easy to find if one wanders into the new area of the overworld that has become accessible by the acquisition of the ladder in dungeon five, which 'lets Link cross holes or rivers that are as wide as he is', and its workings are demonstrated immediately after it comes into the players possession.
The dungeon is, however, the first truly 'difficult' part of the game, as it contains a greater number of powerful enemies, as well as an enemy that will eat your defensive shield. Although the player has probably died a few times getting to this point, this is the first part where a number of attempts will be required before proceeding.
However, with the patience, and spare time, of a kid in the 1980's, replaying the dungeon and progressing a little further each time is simply part of the fun. All it takes to beat this challenge is a little bit of practice that comes from a few repeated attempts.

In contemporary commentary on the game, dungeon seven is often considered one of the easiest in the game. I beg to differ and would argue it's the most difficult by some margin.
Dungeon seven is the most puzzle-centric dungeon in the game. Its clues are cryptic, if they are present at all, and they aren't covered by either the instruction manual or the maps found in Nintendo Fun Club News. Even the detailed 108-page book The Legend of Zelda: Tips and Tactics (available for Fun Club Members for $4.99) has only three pages dedicated to dungeon seven and provides no solutions to any of its puzzles.

Finding the entrance to the dungeon is the first chore. A clue is found in dungeon six, which tells you that 'there are secrets where fairies don't live'. Such a location is easily found, as there are two identical ponds where a fairy restores Link's health, while a third pond exists that has no fairy. These ponds are useful places that the player has surely found and noted at this point.
However, nothing happens when the player uses the strategies that have so far led to the discovery of secret passages, like bombing walls or burning bushes. Even the detailed Tips and Tactics book merely tells you to 'use your imagination to find a way into level 7'. If you proceed to try anything and everything, you'll find that the whistle, which otherwise summons a whirlwind that transports you to different parts of the overworld, here drains the water in the pond and exposes the entrance to the seventh dungeon. Or, as the instruction manual clearly states: 'The whistle is the most mysterious of all the treasures in this game. [...] People say it opens up paths for Link.'
Such leaps of logic are exactly what gives games of this era their punishing reputation a few decades later.
Unfortunately, this is only the first curveball that dungeon seven throws at the player. In previous dungeons, hidden rooms could still be seen on the map of the dungeon. Yet in order to progress to the end of dungeon seven, you need to find the entrance to a room that according to the map doesn't exist. There is also a room with an enemy that is impossible to pass, and a text that says 'grumble, grumble'. The solution here is to use an otherwise completely optional item that can only be bought, at a rather high price, in some of the shops found in the overworld.
The boss of the dungeon is located at the end of a tunnel. The entrances to such tunnels are found by pushing blocks, which thus far had followed a few clear patterns. Although the location of the room with the tunnel entrance is hinted at in the dungeon, no mention is made of what kind of 'secret' is to be found there, not even in the Tips and Tactics book. The frustration of finding this entrance is further exacerbated by the presence of multiple enemies that are difficult to avoid and that transport the player back to the beginning of the dungeon.
There is no f-ing way that I would have figured any of this out with the materials that were available to me in the 1980's. The only redeeming quality of this dungeon is that its enemies are easy to beat, which makes the constant retracing of your steps somewhat bearable. Otherwise the dungeon undermines every pattern the game has shown the player so far and demands that they make these mental leaps without any external help.
The only reason this dungeon is considered easy today is that its enemies don't put up much of a fight. Therefore it's a straightforward walk to the end if you already know where to go and what to do, but if you don't, then navigating its rooms is a Herculean task. 

Conversely, dungeon eight has a hidden entrance that isn't mentioned in any official materials in- or outside the game. Yet its location looks sufficiently out of place that I deduced its presence when I entered the screen for the first time, just after finishing the second dungeon.
This dungeon is a straightforward fight with many tough enemies. Like dungeon six, it has a reputation for being difficult, but all it takes is some practice and repeated attempts. 

I had some trouble finding the entrance to the ninth, and final, dungeon, because the clue I had been given was that 'spectacle rock is an entrance to death'. Apparently the rock formation is meant to resemble a pair of spectacles or something?

Spectacle Rock

In any case, I was intent on finishing the game without any further hints. To my surprise, even the daunting labyrinth of the final dungeon was not too difficult to navigate with some repeated attempts and the aid of a notebook and a pen. By the time I first entered the final dungeon, I had died 37 times. After recovering all the items hidden in the dungeon and defeating the final boss, Ganon, I had died only a further nine times.

So with about ten hours of playtime, I had finished the game that a few years earlier I had given up on within the first ten minutes. By playing the game in the way it was intended to be played, I was able to complete it with relative ease. Everything needed to complete dungeons one through four, or the first half of the game, can be found in the instruction manual that accompanied the game. Dungeons five and six are manageable with some determination or the aid of widely distributed maps. Dungeon eight can be found by simply being observant, and the final dungeon is tough, but far from impossible.
The only part of the game that is poorly designed by any standard, then, is dungeon seven. Its puzzles are too esoteric, its logic is too convoluted and there are too many variables to brute-force a solution. This is the only point where any player left to their own devices would throw up their hands in frustration.

Puzzles in a video game are notoriously difficult to design, as it's tricky to imagine the kind of connections a player is (un)able to make with the information they have. For this reason games today are extensively playtested during development. At the time of The Legend of Zelda's development, only a handful of people worked on a game, and they were literally figuring out how these things could be designed and implemented. It's clear that the developers wanted players to use their own investigative skills to solve the mysteries of the game, yet at times they ask more of the player than is reasonable. Their ambition, combined with the novelty of the experience, has left The Legend of Zelda with some flaws that are difficult to ignore and that make the game unpalatable to modern audiences.

But in the end such observations are irrelevant. Shigeru Miyamoto himself has said that the inspiration for The Legend of Zelda was the feeling of adventure he had while exploring the forests of the Kyoto countryside as a child. Seen this way, it doesn't matter if you beat the game or not, and it doesn't matter if you discover all of its secrets.
In The Legend of Zelda, there is a strong sense of things to discover and there are genuine obstacles to overcome. No matter how far you progress through the game, it will be an adventure, and getting anywhere at all will leave you with a sense of accomplishment. In 1986, when most games were high-score chasers mimicking those designed to turn a profit in the arcades, it was a pioneering project and an experience that would leave a lasting impression on any child who decided to spend their time on it.

Friday, 5 December 2025

'Gazing Dreamingly Into the Distance'

In the interest of greater inclusivity, many museums are attempting to see how they can better accommodate people with various disabilities. Of particular, and peculiar, interest to me are the attempts to improve the experience for people who are (legally) blind. The mechanism of information transfer in the visual arts is, well, visual in nature, so these endeavours are likely to fail. That being said, since people are able to recall visual imagery, such attempts might have some value for individuals who lost their sight later in life. Nevertheless, to those born without vision, the well-intended efforts of various institutions often only reveal their own lack of understanding of how others might experience the world.

The title of this post, 'Gazing dreamingly into the distance', is taken from the audio description of a photographic portrait of the writer Arthur Rimbaud, as made by the MMK in Frankfurt, Germany. This is an evocative description to anybody who knows what such an expression looks like, but a person who was born without sight will have no reference for what it might mean. The audio description was made with the aim of 'removing barriers on our [MMK's] website by means of alternative image descriptions'. Yet the audio descriptions made by the MMK consistently introduce such poetic phrases, which are potentially gibberish to their target audience. This could easily have been avoided by shifting their frame of reference ever so slightly. A sentence like 'He appears involved in thought and disconnected from the world' would convey the same meaning, for example, without the intangible references to sighted phenomena.

The MMK is a museum with an international reputation and a strong self-proclaimed focus on accessibility. When I visited the museum two years ago, however, I found their accessibility provisions lacking and even insulting to a degree.

My first introduction to their accessibility program was their 'Leichte Sprache' exhibition guide. Leichte Sprache, or easy language, is a program initiated by the German government to create texts with shorter sentences of commonly understood words, so as to make hermetic or difficult topics more broadly accessible. As I'm of the opinion that too many art texts are full of aggrandising and obfuscating bullshit, I naturally welcome these efforts.
On page six of the Leichte Sprache booklet I encountered a QR-code that takes you to a webpage with 'audio descriptions of the artworks for people with visual impairments'. It goes without saying that a visually impaired person is never going to find a printed QR-code on page six of such a booklet, let alone take a picture of it. So even before it got started, the MMK had already failed to provide an accessible environment that people with disabilities can navigate independently. My criticism of the MMK's accessibility program could have ended here, but unfortunately this is only the start.

Before we continue, it must be noted that the QR-code in question is only found in the Leichte Sprache exhibition booklet; the regular exhibition booklet makes no reference to these accessibility options. This implies either that visually impaired people are also unable to understand the normal exhibition text, or that all kinds of disabilities are meant to be grouped together and separated from what is considered 'normal'. Both are of course absurd: any visually impaired person who desires to overcome the extremely high barriers to better understanding visual art must be an intellectually curious person, and probably has a lot more experience with text than your average adult.

But if at this point they haven't walked out of the museum in disgust, and have actually had somebody help them navigate their phone to find the audio descriptions, they will find that it only gets worse. When I scanned the QR-code with my phone using a screen reader, I heard the following:

What you're hearing is the webpage loading and then me, as a sighted person, selecting the play button. What follows is the screen-reading software simply rattling off the consecutive numbers of the time indicator. The result is that you can't even hear the audio file the MMK provided. I also tried to navigate the website using screen-reading software on desktop computers, and they all had some kind of problem with the playback of the audio description, if I could get the software to find or select the play button at all.

These problems are the direct result of the way the MMK chose to set up their website. Its design is sparse and relatively easy to navigate visually, but because of the code that makes this possible, it's almost impossible to navigate with a screen reader. And if you can't navigate a website with a screen reader, then it's going to be impossible for any blind person to find the required information on that website.
The Web Accessibility Initiative publishes the Web Content Accessibility Guidelines. In version 2.1 of these guidelines, under section 1.2 on time-based media, the broad-reaching advice is that a website should 'provide alternatives for time-based media'. In other words, time-based media, like the pre-recorded audio description the MMK provides, should be avoided, because it often, if not always, creates accessibility problems. The approach the MMK has chosen is thus patently wrong. An audio description, if meant to be played back on a user's own device, should have been made available in text format, preferably with some kind of high-level hierarchy for navigation purposes. That way, text-to-speech software could process it without any problems, providing the user with the information in a form they are familiar with.
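For what it's worth, the absence of a text alternative is something that can even be checked automatically. Below is a minimal sketch using only Python's standard library; the HTML fragment, the file names and the heuristic itself are my own invention for illustration, not the MMK's actual markup:

```python
from html.parser import HTMLParser

class AudioAlternativeChecker(HTMLParser):
    """Flags <audio> elements that offer no text alternative at all.

    A crude heuristic: an <audio> element should either contain
    fallback text or a transcript link inside it, or carry an
    aria-label, so that a screen reader can announce something
    meaningful.
    """
    def __init__(self):
        super().__init__()
        self.in_audio = False
        self.has_alternative = False
        self.problems = 0

    def handle_starttag(self, tag, attrs):
        if tag == "audio":
            self.in_audio = True
            self.has_alternative = any(k == "aria-label" for k, _ in attrs)
        elif self.in_audio and tag == "a":
            self.has_alternative = True  # e.g. a transcript link

    def handle_data(self, data):
        if self.in_audio and data.strip():
            self.has_alternative = True  # fallback text inside the element

    def handle_endtag(self, tag):
        if tag == "audio":
            if not self.has_alternative:
                self.problems += 1
            self.in_audio = False

# An invented page fragment: one bare player, one with a transcript link.
page = """
<audio src="description.mp3" controls></audio>
<audio src="description.mp3" controls>
  <a href="transcript.html">Read the transcript</a>
</audio>
"""
checker = AudioAlternativeChecker()
checker.feed(page)
print(checker.problems)  # 1: only the bare <audio> element is flagged
```

A real audit would of course check far more than this, but even a check this crude would have caught a page that offers nothing but an unlabelled audio player.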
Which brings us to another shortcoming of the MMK's audio descriptions. People using screen readers are used to the specific flat intonation of the software and are often able to listen to it at very high speeds. For their English audio description, the MMK instead used a narrator who speaks very slowly, with a lisp and a far from perfect English accent, making it agonising to listen to when you just want to hear the information the text provides.
As already alluded to at the beginning of this text, the information itself is also by and large unsuited to people with congenital blindness. In the first text I listened to, there were many references to sighted phenomena, like 'black and white', 'out of focus', 'a beam of light directing attention' and things being 'visible through the windows'. In contrast, the quote-unquote normal description of this work made no reference to these kinds of purely visual aspects and instead focussed on the movements of the figures in the work and the context in which they were depicted. That provides broader information about the work, useful to everybody. The MMK's audio description for people with visual impairments, on the other hand, is a list of things only sighted people can see. This might make sense on paper, but by mostly referencing phenomena that can only be understood through sight, they completely missed the mark.

With these observations about the web environment of the MMK, it should come as no surprise that navigation inside the museum is likewise poorly managed.
On their website the museum boasts that they are 'pleased to receive the certification from Reisen für Alle.' They go on to say that 'Reisen für Alle is a nationally valid label in the field of accessibility'. If you, or the museum, actually read the report that Reisen für Alle produced, it quickly becomes clear that there are still a lot of improvements to be made. To give a few quotes from the report: 'The entrance area is not recognizable by a tactile change in the floor covering', 'The door or door frame is not visually contrasted with the surroundings', 'There is no tactile information about the floor at the beginning and end of the flights of stairs', 'The walkway from the entrance door to the counter/desk/cash register is not marked with visually contrasting markings (e.g. carpet)', 'There are obstacles, e.g. columns, in the room', and so on, and so on.

The critical remarks that Reisen für Alle have made in their report are very much in line with the things I noticed during my visit. 

This is an image of the entrance to the museum. There are many columns in front of the entrance, a number of unmarked steps, and the entrance itself is a revolving door. This already makes the regular entrance a small obstacle course for unsighted navigation.

Inside, some tactile floor markings are placed immediately after the revolving door. But it's only the warning kind, with nothing following it. They also weren't present on the outside of the entrance. So instead of providing a route to the next important step in the visit, like an information point, an unguided blind visitor is greeted only by a single confusing floor marking and then a large open space with no other indicators.
It must also be said that in the back of this picture there is a 'regular' door for entering the museum. This door, as far as I understood, is closed unless some employee of the museum opens it. This alternative entrance also has a single strip of tactile floor marking on the inside of the building, but for some inexplicable reason this is covered up with a floor mat.

Near the desk is a muted, but subtitled, video of a woman providing the exhibition text in sign language. I personally don't really see the point of providing the same textual information in two different visual forms, but hey, that might be me. Sign language offers the benefits of spoken language, such as facial expressions, body language, intonation and so forth, but none of these things are essential to an informative text.

Moving on from this sidenote to the exhibition floors, we see that the mistakes continue. To illustrate this, I would like to focus on the tactile floor plans that are placed on each floor of the MMK:

There are a number of problems with this 'aid' and it has clearly been created by, and for, sighted people.
Firstly, the effectiveness of such a floorplan without any (references to) guiding floor markings in the surrounding area is questionable. Visual impairments don't come with a magical intuition for, and perfect recollection of, distance and proportion.
But let's presume it could be a useful guide. In that case, the only properly marked and textured area is called 'Luftraum', which is translated as 'outdoor'. This doesn't mean outdoor at all, but simply indicates which parts of the building have extremely high ceilings. As most of the other rooms are already four to six meters in height, such a distinction on a floorplan for non-sighted navigation is pointless.
Furthermore, all walls have been rendered as single lines in the floorplan, so that a row of open windows and a row of pillars both appear as identical rows of dots. Yet the tactile sensation of an opening in a wall and that of a solid column is markedly different.
The floorplan also does not account for temporary changes to the layout of the rooms. The presence of sculptures or other obstacles on the floor is not marked, for example. During my visit, a temporary wall had been built right behind the floorplan, and it was nowhere to be found on it. As a guide for self-guided movement through the space, the floorplan is thus entirely useless.

As we have seen, the MMK has done very little to make its facilities more accessible to visually impaired visitors and in their attempts they might have even actively worsened the experience.
It might seem a bit of a stretch to chastise a museum of visual art for not being attuned to the needs of those with visual impairments. Indeed, I personally believe the only adjustment an art museum should ever make to visually impaired visitors is the availability of well-trained guides who are able to both physically and intellectually walk them through the exhibitions, and wherever possible supervise some amount of physical interaction with the works.

My point is that if one wishes to make lofty claims about accessibility in their promotional material, it is shameful and despicable to merely inconsistently implement a number of measures where the visual design takes precedence over practical use.

Tuesday, 4 November 2025

I can't stop reading!

I don't think I ever spoke about it here, but I don't particularly enjoy the writing process. In all honesty, I don't really like reading either.
Yet I like to learn, so I'm forced to read, and I believe things need to be expressed that aren't said elsewhere, so I'm compelled to write.

In the last couple of weeks, I've bought more books than I have time to read, while checking out some books from the library to boot. My own irrational behaviour puzzled me, until I realised I was feeling particularly anxious and troubled by the world. Ever since I was a child, I've tried to soothe my worries by gaining knowledge, yet a greater comprehension would often lead me to feel separated from the rest of the world. This isolation led me to seek a greater understanding, and that greater understanding would make me feel more isolated.

So, for this, my 200th post on this blog, let me paraphrase the lament uttered by Fat Bastard in Austin Powers: The Spy Who Shagged Me:

I read because I'm unhappy.
I'm unhappy because I read. 

It's a vicious cycle.

Monday, 3 November 2025

Contributing Factors in a Cheerios-based Adhesive

One day, a few years ago, I ate a bowl of cereal and absent-mindedly left one piece of cereal in the bowl. I then also neglected to wash the bowl for a number of days, causing the milk to dry out. When I picked up the bowl again, I noticed that the piece of cereal, a Cheerio, was stuck firmly to both the spoon and the bowl.

Being interested in the potential of such a Cheerios-based adhesive, I decided to 'glue' a spoon to a window by dipping the Cheerios in milk and clamping it between a spoon and a piece of glass:


 Seen from the side it would look like this:

I didn't really have any idea of how it worked at the time, but knowing that metal and glass are materials that are often difficult to stick together, it was striking to me that Cheerios, when combined with milk, could act as an adhesive between these two objects.

I never quite figured out how to clearly show that it was in fact the combination of milk and Cheerios that kept the spoon in its unusual place, so it never went very far as an artwork.
However, it still intrigued me from a chemical point of view, as it was an odd combination of materials to be stuck together so easily. Adhesion between such dissimilar materials is often dominated by mechanical adhesion, in which the (invisible) roughness of a surface is filled by a material that is liquid at first but then hardens to a solid. The two materials are then not chemically bonded together, yet they can't move, as there is no physical space to do so.

It was however unlikely that this is the full story in this particular case, as both glass and metal have relatively smooth surfaces and nothing in milk actively polymerises as it dries. There are therefore very few cavities to fill and no obvious substance to fill them with.

Being interested in surface interactions for another project, it occurred to me that the electrostatic activity on the surface of the metal, combined with the free electron pairs in the silicon dioxide of the glass, could perhaps form non-trivial hydrogen bonds with the sugar molecules in the milk. The Cheerios are in turn largely comprised of long chains of sugars, so the sugar from the milk can form hydrogen bonds with those as well, possibly in an intertwining crystallisation structure that provides rigidity.
This combination is partly illustrated in the following diagram, where (1) denotes the crystal lattice of the metal and the free electron pairs on the surface, (2) are hydrogen bonds with the sugar molecules, (3) are the sugar molecules that are left over when the water has evaporated from the milk, (4) are the hydrogen bonds between the sugar from the milk and the polysaccharides from the cereal, and (5) are those polysaccharides.


 

To test the plausibility of this hypothesis, I devised several experiments in which different combinations of materials were tried out in order to isolate and test a number of variables.

For these experiments, single Cheerios were placed in liquid and left to soak for 30 minutes. The liquids used were semi-skimmed milk, water, or water with an amount of sugar dissolved in it.
The wet Cheerios were then placed on a glass or plastic surface, and a spoon was placed on top. The spoons were balanced so that their own weight pressed upon the Cheerios.
This was then left to dry for ~3 days.
The degree of adhesion was then judged by the experimenter by detaching the materials from each other. This could result in either low or no tack (denoted as --), some tack (denoted as +/-) or high tack (denoted as ++).

The results of the various experiments can be found in the following table:

Materials                                      Expected result   Observed result
Glass, Spoon (std.), Cheerios, Milk            ++                ++
Glass, Spoon (std.), Cheerios, Water           --                --
Glass, Spoon (std.), Cheerios, Sugar Water     ++                ++
Polypropylene, Spoon (std.), Cheerios, Milk    --                --
PMMA, Spoon (std.), Cheerios, Milk             ++                +/-
Glass, Spoon (smth.), Cheerios, Milk           +/-               ++
Glass, Spoon (std.), Milk                      +/-               +/-
Glass, Spoon (std.), Sugar Water               +/-               --
Glass, Spoon (std.), Kitchen Paper, Milk       ++                -- & ++

(PMMA = poly(methyl methacrylate); std. = standard, well-used spoon; smth. = new, smooth spoon.)
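Encoded as plain data, the results make the mismatches easy to pick out automatically. A small sketch, in which the tack scores are copied directly from the table above:

```python
# Each entry: (materials, expected tack, observed tack), as in the table above.
results = [
    ("Glass, Spoon (std.), Cheerios, Milk",         "++",  "++"),
    ("Glass, Spoon (std.), Cheerios, Water",        "--",  "--"),
    ("Glass, Spoon (std.), Cheerios, Sugar Water",  "++",  "++"),
    ("Polypropylene, Spoon (std.), Cheerios, Milk", "--",  "--"),
    ("PMMA, Spoon (std.), Cheerios, Milk",          "++",  "+/-"),
    ("Glass, Spoon (smth.), Cheerios, Milk",        "+/-", "++"),
    ("Glass, Spoon (std.), Milk",                   "+/-", "+/-"),
    ("Glass, Spoon (std.), Sugar Water",            "+/-", "--"),
    ("Glass, Spoon (std.), Kitchen Paper, Milk",    "++",  "-- & ++"),
]

# Collect every combination whose observation deviated from the hypothesis.
mismatches = [m for m, expected, observed in results if expected != observed]
for m in mismatches:
    print(m)
print(len(mismatches))  # 4 combinations deviated from the hypothesis
```

The four deviating rows are exactly the ones discussed below the table.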


The expected result is the one predicted by the theory outlined above; the observed result is what actually happened. By and large, the expected and observed results match each other closely.
There were a few instances where the observed result differed from the expectation, however, namely the PMMA substrate, the smooth metal spoon, sugar water in the absence of cereal, and the substitution of kitchen paper for Cheerios.

The observation that there was high tack in the combination of Cheerios with both milk and sugar water, while there was no adhesion at all when the Cheerios was only soaked in water, shows that the presence of sugar is very important in the adhesive properties of this combination of materials.
That the Cheerios with milk showed high tack on glass, some tack on PMMA and no tack on polypropylene also indicates that hydrogen bonding is very important to the adhesion to the glass substrate, as was expected.

An experiment with a spoon that had a very smooth surface also suggests that the observed adhesion is chemical or electrostatic, rather than mechanical, in nature. A smoother surface was expected to give less adhesion to the spoon, yet no discernible difference was observed between a well-used spoon and a new, smooth one.
Two experiments performed with only milk or sugar water, in the absence of Cheerios, showed that sugar alone can't act as an effective adhesive for these materials. While the sugar stuck firmly to the glass, likely through hydrogen bonding, it showed virtually no adhesion to the metal spoon. Nevertheless, a thin droplet of milk did have some tack on the metal, so some other component of the milk must be the substance that binds to it. The most likely candidate is calcium, as calcium ions are able to form complexes with a high coordination number, thereby binding various molecules together.

To examine the influence of the Cheerios, an experiment was performed in which a wad of kitchen paper, made out of similarly long polysaccharides, was soaked in milk.
This gave an interesting result: the wad strongly adhered to the glass, but showed no tack on the metal surface. This is most likely caused by the greater absorbency of kitchen paper, so that the sugars and ions in the milk were in little contact with the metal as the water evaporated.

In conclusion, when using Cheerios and milk as an adhesive for metal and glass, all four components are important contributors to the overall effect. A major contributor to the adhesive strength is the large amount of sugar found in milk, aided by other components, where an abundance of calcium likely assists in bonding to the metal of the spoon. The combination of milk and Cheerios binds to the glass through hydrogen bonding and to the metal by some other chemical or electrostatic force, with mechanical adhesion making only a limited contribution.

Friday, 12 September 2025

Party Pooper

The above photograph comes from the series 'The Action of Matchmaking Photons in Bars' by Voebe de Gruyter. In a conversation with Maria Barnas, titled 'On Art and Science', she says the following about it:

'The photos I took in the café are real spots of light. I had stuck reflective tape on the people and the interior and took pictures using the flash. I see the spots of light as proof of light's return.'

The premise here is that light originates from the flash, hits an object and then returns to the camera lens and its sensor, rendering the image. But if we assume that this is the case, as the artist does, then the rest of the photograph, or even any photograph taken with a flash, surely is an equal 'proof of light's return'?

Tuesday, 9 September 2025

Methyl Mercaptan

Artists like to use molecular models for making sculptures. This has already been covered on this blog, but I'd like to expand on the subject a little further in this post.
Molecules have certain stable configurations, which are governed by the distribution of their electrons. This is described by something called valence shell electron pair repulsion (VSEPR) theory. It's somewhat complicated, but just imagine that electrons are magnets on a sphere that want to be as close to the centre as possible, while being as far apart from each other as possible. So although atoms are always in motion, on average the atoms in a molecule are found in only a small number of configurations:

 

This kind of spatial configuration is correctly rendered in the large sculpture 'Gas Molecule' commissioned from Marc Ruygrok by the NAM:

This sculpture is supposed to depict methane, or CH4, with a central carbon atom connected to four hydrogen atoms. Ruygrok has largely copied the common 'ball-and-stick' molecular model, only taking some liberty with the colour scheme.
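That central-atom-with-four-satellites geometry is exactly what VSEPR predicts for methane, and the familiar 109.47 degree bond angle can be verified with a few lines of vector arithmetic. A minimal sketch, with the four bond directions taken as alternating corners of a cube:

```python
import math

# Four bonds of a tetrahedral molecule (like methane) point from the
# central atom towards alternating corners of a cube centred on it.
bonds = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def angle_between(a, b):
    """Angle between two bond vectors, in degrees."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(dot / norm))

# Every pair of bonds is separated by the same angle, arccos(-1/3).
print(round(angle_between(bonds[0], bonds[1]), 2))  # 109.47
```

Any pair of the four vectors gives the same answer, which is why every H-C-H angle in methane is identical.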
Although molecules don't have a 'real' colour, there is a convention, called Corey-Pauling-Koltun (CPK) colouring, for using certain colours for certain atoms. The central atom in Ruygrok's model is carbon, which in this convention is always associated with black, while blue is always associated with nitrogen. If the shiny purple-ish hue of the central atom is considered significant, then this is traditionally linked to phosphorus, but is today more commonly associated with potassium.
These colours are nothing but conventions, so it's not that Ruygrok's choice is wrong per se, but it also isn't 'right' to use blue in this case. Without any other information, any chemist will think this model represents ammonium, not the intended methane.
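For reference, the handful of CPK assignments that virtually every rendering agrees on fit in a few lines. This is a sketch of the convention as I know it; sources differ on the rarer elements, so only the uncontroversial ones are listed:

```python
# Conventional CPK colours for the most common elements.
CPK_COLOURS = {
    "H":  "white",
    "C":  "black",
    "N":  "blue",
    "O":  "red",
    "S":  "yellow",
    "Cl": "green",
}

# Under this convention, a blue central atom with four pale satellites
# reads as nitrogen bonded to four hydrogens, hence the ammonium reading.
print(CPK_COLOURS["N"])  # blue
```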

As already stated, this example uses the so-called ball-and-stick model, but a more realistic space-filling model exists, in which atoms are depicted as overlapping spheres representing their Van der Waals surfaces. Molecules in this model consist of interconnected spheres, so that a good separation through size and colour becomes even more important than it is in the ball-and-stick model. With this in mind, let me present to you 'Calcium 4-[4-(2-methylaninlino)-2,4-dioxobutyl]diazenyl-3-nitrobenzenesulfonate (C.I.13940)' by Jean-Luc Moulène:

This is supposedly a model of the molecular structure of a pigment, Yellow 62, which is then painted in the colour of this pigment. I already pointed out that without adequate differentiation through colour, such a model is hardly able to serve its clarifying function.
It is however clear that Moulène didn't correctly render the molecule he meant to render. When I looked up and drew a model of the pigment, I came up with the following structure:

Even without knowing anything about chemistry, it's obvious that these are two different structures. In the correct model there are 41 spheres, while in Moulène's sculpture one counts only 29. I did notice that no hydrogen atoms are depicted in Moulène's sculpture, which is somewhat common practice. I therefore counted the number of hydrogen atoms that should be present, of which there are 15, so if the difference came only from the absence of hydrogen, the sculpture should contain 26 spheres, not 29. I therefore have no explanation of where the artist went astray in rendering his model, but it is clear that the molecular model doesn't depict the pigment he claims it does.
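The sphere arithmetic above, using only the counts from my own tally, can be written out explicitly:

```python
# Sphere counts as tallied in the text.
full_model = 41   # spheres in the correct model, hydrogens included
hydrogens = 15    # hydrogen atoms that should be present
sculpture = 29    # spheres counted in Moulène's sculpture

# If the only difference were omitted hydrogens, the sculpture
# should contain this many spheres:
expected_without_h = full_model - hydrogens
print(expected_without_h)               # 26
print(sculpture - expected_without_h)   # 3 spheres unaccounted for
```

Three spheres too many under the most charitable reading, so no counting convention I know of reconciles the sculpture with the pigment.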


This could also already be gleaned from the inclusion of 'Calcium' in the sculpture's title. Organocalcium compounds are very uncommon, so the inclusion of calcium in the name most likely means that this is a salt. The sulphonate (SO3-) group in the molecule, shown in yellow with red, carries a negative charge and needs to be ionically bonded to a positively charged ion, Ca2+ in this case, to be stable. The double positive charge on the calcium ion is paired with two single negative charges on the other compound, which means that there must be two copies of the previously shown molecule in the following configuration:

This of course looks nothing like the molecule in Moulène's sculpture, and anybody with knowledge of chemistry could have spotted the error merely from the first word of the title.
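The charge-balance reasoning can be sketched in a few lines. The `salt_ratio` helper is hypothetical, written just for this illustration:

```python
from math import gcd

def salt_ratio(cation_charge, anion_charge):
    """Smallest whole-number (cations, anions) ratio for a neutral salt."""
    g = gcd(cation_charge, abs(anion_charge))
    # Cross-multiply the charges and reduce to lowest terms.
    return abs(anion_charge) // g, cation_charge // g

# Ca2+ paired with a singly charged sulphonate anion:
print(salt_ratio(2, -1))  # (1, 2): one calcium ion per two anions
```

The same arithmetic gives the familiar 1:1 ratio for, say, Ca2+ with a doubly charged anion, which is why the title alone fixes how many copies of the molecule the model must contain.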

I then noticed the following drawing on the cover of Keith Tyson's publication 'Molecular Compound No 4.':


Comparing this image with the VSEPR models at the beginning of this post, it should be clear that this drawing is not based on any existing molecule. Upon consulting the book, it turned out to contain no further references to reality and to consist only of the fantastical imaginings of the artist, so I won't comment further on this publication.

I could list more examples of artists that have attempted to employ molecular models, but in short, all the sculptures I've encountered forgo scientific accuracy in some way.
The only one I know of that isn't necessarily wrong was a sculpture that simply used nothing but a commercially available molecular modelling kit. So while this was possibly accurate, its artistic value was also negligible.


And the reason I've written all this is because I researched the subject while making the following model of a molecule called methyl mercaptan:

Methyl mercaptan, or CH3SH, is one of the molecules that make farts smell. This model is made of a tennis ball, a black golf ball and four small roulette balls. These generic, store-bought balls are both the right colour and approximately the right size for a CPK model of the molecular structure, as can be seen in this rendering taken from a molecular drawing program:

This indicates that it's possible to take a novel approach to creating a molecular model without necessarily having to significantly compromise its scientific accuracy.
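For what it's worth, tallying the balls needed for such a model is easy to automate. The `atom_counts` parser below is a minimal sketch written for this post; it handles only simple formulas without brackets or charges:

```python
import re

def atom_counts(formula):
    """Count atoms per element in a simple molecular formula, e.g. 'CH3SH'."""
    counts = {}
    # Each match is an element symbol (capital plus optional lowercase
    # letter) followed by an optional count.
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + (int(number) if number else 1)
    return counts

# One carbon (golf ball), four hydrogens (roulette balls), one sulphur
# (tennis ball):
print(atom_counts("CH3SH"))  # {'C': 1, 'H': 4, 'S': 1}
```

The tally matches the shopping list above: one black ball, four small white ones and one yellow one.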

Friday, 29 August 2025

Paper Plane

At Stephan Balkenhol's exhibition 'Something is Happening' at the Kunsthal, Rotterdam, a sculpture was on show with the straightforward title 'Paper Plane'.
In this sculpture a man is holding a simple paper airplane above his head. This paper airplane has a bit of an unusual, squarish shape, very different from the pointed darts paper airplanes are usually depicted as.
The shape of this airplane is however very similar to the design created by aeronautical engineer Ken Blackburn, which earned him the world record for the longest flight time from the 1980's until the early 2000's: 

 

If we believe that Balkenhol was aware of this airplane design, then the pose of the man in his sculpture becomes interesting. It's a relatively passive pose, vaguely reminiscent of how a child with a kite would stand, holding the thing that is supposed to 'fly' high up in the air. 


 

Yet part of the reason Blackburn held his record for so long was the combination of his throwing technique with the design of the plane. He threw the plane nearly vertically, at a speed of close to 100 km/h, to get it as high in the air as possible. From there the plane stabilised and made a slow descent.
This is no mean athletic feat and the intensity of the movement is of course very different from the idle attitude commonly associated with a paper airplane.