Monday, 23 March 2026

Building a Spectrophotometer

In the autumn of 2025 I attempted to build a spectrophotometer by myself.

A spectrophotometer is a scientific instrument that measures the amount of electromagnetic radiation, or light as it is commonly known, that is absorbed by a sample. As different molecular bonds absorb light at different wavelengths, the absorption of light says something about a sample's molecular composition. The most practical use for this is the determination of the quantity of a known substance in a sample.

In order to measure the absorption accurately, the light that passes through a sample should ideally consist of only a single wavelength. This is a major difficulty in the design of the instrument, which can be overcome by something called a monochromator, of which the Czerny-Turner monochromator is the most common design.
In a Czerny-Turner monochromator, light from a white light source is aimed at a concave mirror, which sends the light towards a movable grating that diffracts the light and breaks it up into individual wavelengths. These are then focussed by another concave mirror and aimed at a narrow slit, which in theory only lets one wavelength of light through. This light then passes through the sample and the reduction in intensity of the light is measured:
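The grating spreads the wavelengths according to the grating equation, d·sin θ = m·λ. As a rough sketch of the angles involved, assuming the ~1.6 µm track pitch of a CD (a CD stands in for the grating in the build described below), the first-order diffraction angles for a few visible wavelengths can be computed like this:

```python
import math

# Grating equation: d * sin(theta) = m * lam (normal incidence)
# d is the groove spacing; a CD's track pitch is roughly 1.6 micrometres,
# which is why a CD works as a cheap diffraction grating.
D_CD = 1600e-9  # assumed groove spacing in metres

def diffraction_angle(wavelength_m, order=1, d=D_CD):
    """Diffraction angle in degrees for the given order, or an error if none exists."""
    s = order * wavelength_m / d
    if abs(s) > 1:
        raise ValueError("no diffraction maximum at this order")
    return math.degrees(math.asin(s))

for name, lam in [("violet", 400e-9), ("green", 532e-9), ("red", 650e-9)]:
    print(f"{name}: {diffraction_angle(lam):.1f} deg")
```

The spread of roughly ten degrees between violet and red is what lets the movable grating select one colour at a time at the exit slit.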

So while the principle of a spectrophotometer is simple, manufacturing its parts to analytical standards requires high precision, and therefore these machines are expensive. Brand new instruments cost several thousand euros, and even decades-old equipment can still fetch prices of several hundred euros. For example, this beauty from the 1970s is listed for 500 euros today:


Because its principles are relatively straightforward and can be observed with the naked eye, the spectrophotometer is often an early introduction to scientific equipment within an educational context. Indeed, many teaching kits are commercially available, which make it possible to see the inner workings of the instrument and freely manipulate the individual components. However, such teaching kits still aspire to the same level of quality as the commercial equipment, and so the prices are still high, often exceeding a thousand euros for a basic model.

Due to the accessible nature of the machine's workings, there have also been many published instances of spectrophotometers built out of simple(r) materials and on a small(er) budget. Examples include Peiera et al. (2019), Kovarik et al. (2020), Shin et al. (2022), Osterheider et al. (2022), and Poh et al. (2021).
However, I noticed a pattern in these suggestions. They tend to be either limited in functionality, restricted to light of a single wavelength, or applicable only to a small number of known analytes.
The more general-purpose designs I've encountered, on the other hand, tend to incorporate at least one 'cheap' component that is nevertheless a considerable expense, such as a professional-grade grating mirror, access to a 3D printer, or a smartphone equipped with a camera. While such items are somewhat commonplace, if one has to purchase one specifically for the project, it quickly drives up the cost to 100+ euros.

With this in mind I set out to make a spectrophotometer of my own design, based on the Czerny-Turner monochromator. My first goal was to make a functional general-purpose spectrophotometer; the second was simply to spend as little money as possible.

In order to achieve this I aimed my attention at the cheapest materials I could think of that could perform the required function in my design.
For the monochromator I therefore used a rechargeable LED flashlight as the light source. The price of the flashlight was seven euros. Its light was reflected by two plastic make-up mirrors and broken up with a CD, for a total cost of another seven euros. The whole thing was made of recycled wood, with the slit being a cut in a thin piece of veneer, attached with some tape. The wood consisted of scrap material from other projects, but let's value it at a generous five euros.
The detector consists of an Arduino board with a €0.40 phototransistor and a two-line LCD screen. Together with a breadboard and some other bits and pieces this came in at a total of €15.05.

The total cost of my spectrophotometer, if one had to build it from scratch, thus came in at 34 euros and 5 cents. For this money you get a design that is compatible with the standard (disposable) cuvettes that are used throughout the industry:
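For what it's worth, the tally above can be checked in a few lines (the wood figure is, as noted, a generous estimate):

```python
# Bill of materials from the build above, in euros; the wood value is an estimate.
costs = {
    "LED flashlight (light source)": 7.00,
    "two make-up mirrors + CD (optics)": 7.00,
    "scrap wood and veneer slit": 5.00,
    "Arduino, phototransistor, LCD, breadboard etc.": 15.05,
}

total = sum(costs.values())
print(f"total: {total:.2f} euros")  # -> total: 34.05 euros
```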

And an overhead view of the instrument in operation:

Of course the instrument I built is not plug and play and there are a few things I learned about its limitations.
The light yield is low due to the low quality of the mirrors, which lack a uniform focal point. The amount of diffusion is therefore high towards both ends of the visible spectrum, making the instrument most effective in the green to orange colour range.
I also found that a phototransistor was much more sensitive in this case than a photoresistor, and the transistor also had a more consistent output throughout the whole spectrum, while the sensitivity of the photoresistor I tried was greatly reduced above ~600 nm.
The slit in the veneer is also still somewhat broad, even though it was cut with a sharp scalpel. One can therefore only measure the absorption in a broad-ish range of about 50 nm instead of at a single wavelength.

In terms of its practical use, the calibration calculations have to be performed by hand. First the maximum absorption of the sample is determined, before a blank and a series of standards are measured against that point of maximum absorption.
As there is no (reliable) way to record this specific maximum, repeatability of experiments is a possible issue: the measurement cannot be repeated exactly at known wavelengths.
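For the curious, the by-hand calibration amounts to fitting a straight line through the absorbances of the standards, since Beer-Lambert behaviour predicts an absorbance proportional to concentration. A minimal sketch with invented readings (the helper names and the numbers are mine, not part of the build):

```python
import math

def absorbance(i_sample, i_blank):
    """A = -log10(I / I0): intensity through the sample vs. through the blank."""
    return -math.log10(i_sample / i_blank)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for the calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical standards: concentrations in mg/mL and measured absorbances
conc = [1.0, 2.0, 4.0, 8.0]
absb = [0.11, 0.20, 0.41, 0.79]

slope, intercept = fit_line(conc, absb)

# Concentration of an unknown sample from its measured absorbance
a_unknown = 0.30
c_unknown = (a_unknown - intercept) / slope
print(f"estimated concentration: {c_unknown:.2f} mg/mL")
```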

Nevertheless, I found the instrument to be accurate and reliable with solute concentrations as low as 1 mg/mL. This is much less sensitive than commercial models, which can often reliably detect concentrations of 1 mg/L or even lower, but it's perfectly usable for my personal applications.

By not attempting to adhere to modern analytical standards, I have thus been able to build a functional general purpose spectrophotometer compatible with standard single-use cuvettes for about the same price as a single package of these cuvettes.

Saturday, 28 February 2026

Into the corner, and shame on you

It's physically impossible to see all the exhibitions in today's global art world. Much of what goes on at exhibition venues is therefore photographically documented and distributed on websites like contemporaryartdaily and contemporaryartlibrary.org, which gives people the impression that they have seen all there is to see and are in the know. I've personally disliked the widespread prevalence of this practice for over a decade now, because the installation views used on these websites tend to give an illusion of an overview, the art equivalent of an omniscient narrator, where works are depicted in relations that are impossible for any physical visitor to encounter.
This method forgets, or ignores, that a visitor to an exhibition space is a physical entity, and one that takes the world in with their eyes. Eyes that are different from a camera lens because they have a wide field of view, but a narrow and continuously shifting focus. The body has ears, it has a nose, it has legs that walk and arms that reach.
And so a picture hung at a height of 110 cm appears completely different from a picture hung at a height of 155 cm. Navigating through a room that is 4 by 5 meters is very different from navigating a room that is 16 by 20 meters. Yet the prevailing standard of documentation has the camera at the height level of the pictures and depicts the room from corner to corner, making such marked differences appear identical.

A recent example of this that I encountered was the show Day for Night by João Maria Gusmão at Sies+Höke. I had seen installation views of the exhibition on the gallery's website, showing the works neatly arranged in a single line and in a clean space, like one is 'supposed' to hang a proper gallery show:

When I visited the exhibition, however, the installation of the works was strikingly peculiar to me, with all the works being hung very low. The tops of the works sat slightly below shoulder height for me (I am 186 cm tall). I therefore took the following photo with the camera at my eye level:

This snapshot is much closer to the reality of my experience in the exhibition. The works no longer appear as grand statements like in the gallery's documentation, but rather as small and human hand-made experiments, full of flaws and imperfections.
And this is just one example of when 'good and proper' documentation leads to a distorted view of what the exhibition factually is. Such practices undermine the intellectual honesty of art, and so debase the entirety of art as a noble pursuit.

At the same time, this practice of showing an, at times physically impossible, overview has influenced how artists and curators alike install their exhibitions. When photographing exhibitions, the camera is often placed in the corners of the room in order to obtain such an overview. Consequently, I've seen many curators, and artists, 'instinctively' walk to the corners of a space while installing exhibitions.
There is little rhyme or reason to this practice, as I've never seen any visitor to any exhibition voluntarily stand in the corner like a punished child, so what the exhibition looks like from that vantage point should be of little concern.

Yet this is a common occurrence, to the detriment of all exhibition making. I personally encountered a very clear example of this practice in the 2023 exhibition Channeling at the MMK in Frankfurt am Main.
In one room of this exhibition the very wide spacing of the works made no sense to me as a visitor walking in between them. It was then that I realised that the curators must have only considered the show from the corners of the room. I proceeded to take photographs from all four corners, and indeed the placement of the works appeared to make more sense from there.
I later compared these photographs to the official documentation found on the MMK's website, and this confirmed my suspicions. In the following images the official documentation is overlaid on my own documentation from two opposite corners of the room:


As you can see, both of the official documentation photographs are simply two narrow views of what can be seen from the corners of the space. The compression that's especially present in the first photograph also shows that their photographs were taken with a short telephoto lens. Thus the experience the curators were apparently aiming towards in the exhibition was for the visitor to stand in the corner of the space and look at the works with a pair of binoculars...
The wall I was leaning against was also empty in a bad way. This can be clearly seen in a photograph taken from where the security guard was standing in the first photograph:


Instead of a short telephoto lens, the official documentation is now all of a sudden shot with a wide angle lens with a larger field of view than the human eye. It's physically impossible to see both works simultaneously like in the MMK's documentation. Their photograph therefore presents a view of the exhibition that no visitor to the space has factually experienced.
But of course, in today's art world the 'proof' of the documentation is more important than any physical reality, so one of the clearly incompetent curators of the exhibition has since moved on to become the chief curator at the MUMOK in Vienna.

I personally believe that documentation of an exhibition should attempt to capture the experience of walking through the exhibition as accurately as possible.
Unfortunately, few institutions attempt to adhere to reality, preferring the polished and standardised appearance that provides them with greater opportunities founded on ever greater falsehoods.

Friday, 6 February 2026

Tuesday, 30 December 2025

Middlemen

In both art and science, the products of (small groups of) individuals are disseminated to the world by other companies. In the world of art, these companies are the galleries representing artists. In the world of science, they are the publishers and their journals.
In both these situations there is a clear distinction between those who produce the goods and those who distribute them to a wider audience. The presence of such middlemen is common in many industries, but an uncommon aspect found in both art and science is that the financial benefits to the intermediary are far greater than those of the producer. Scientific publishing is now a multi-billion euro industry and the largest of the art galleries have turnovers in the range of tens of millions of euros.
The curious similarities between the two fields are the result of imperfect information on the consumer side, combined with some leftovers from an older world where the financial risks were differently distributed and legally arranged. 

For both scientific publishing and art galleries the most valuable asset is the firm's reputation.
For example, the price of an artwork is linked quite directly to the standing of the gallery it is shown in. Similarly, a scientific discovery is generally considered more impactful if it's published in a journal of significance. It's therefore imperative for both galleries and scientific journals to become, and remain, reputable. It's also easy to see that for both fields there is simultaneously no inherent and necessary connection between the quality of the work and the social standing of the middleman. The intermediary does not change any intrinsic property of the final product. That the perceived quality of the intermediary is nevertheless seen as a useful indication of the quality of the good is due to a characteristic that economists call imperfect information.
In both art and science, there is no information about the quality of a good that is both reliable and readily available. The causes of this imperfect information are different in each field, but over the course of the last century they have led to a similar outcome where the intermediaries have a disproportionate influence on both the kind of goods that get produced as well as which consumers have access to it.

Any consumer needs information about a good in order to make a decision about what is worth spending their money on. They can either have full access to all necessary information, which is called perfect information, or limited access to one or more characteristics of the good, which is called imperfect information.
In both art and science, information about the quality of a good is difficult to ascertain for a large number of interested buyers. Quality in the arts is next to impossible to quantify and subject to changing cultural perceptions. And while scientific merit can be checked in principle, this requires an impossibly large amount of time, money and other resources, so in reality it is unfeasible for any one party to make an objective judgement based on their own experimental knowledge about the quality of all articles published in all journals.
Hence, for both art and science, there is a lot of effort that goes into convincing a potential customer of the value of the good that is being sold. As the goods themselves don't provide accurate clues to their genuine value, this is done through less direct means that convey a perception of longevity. Such (signs of) longevity can ostensibly only be reached by consistently providing quality goods.

Galleries and artists today try to provide credible signals of value by demonstrating a long-term commitment to each other. Until the early 20th century, this meant that dealers directly bought (nearly) all of an artist's output, thereby putting their money where their mouth is. If nothing else, this would at least demonstrate to a potential client that the dealer has a strong belief in the artist. And the dealer is only able to acquire those works today if they have made sound financial decisions in the past. These days the commitment is less strong, expressed through 'representation' of the artist by the gallery. The financial capabilities of the gallery are generally demonstrated through large, and mostly empty, spaces in expensive buildings in desirable locations, as well as participation in ludicrously priced art fairs.
I've written elsewhere on this blog how this shift is likely partially caused by changes in anti-trust legislation in the Western world in the first half of the 20th century, so I won't delve any further into this subject here.

The publishing of scientific works likewise underwent significant changes in the last two centuries.
A scientist is usually employed by a university or some other institution, and when they've made a discovery, they write up what they've done to make it known to others. It is of course difficult to reach a broad audience, even in the age of the internet, so this is one thing a publisher can help with. A publisher possesses ways to reach an audience that any single scientist doesn't have. A publisher also has access to infrastructure. Although these days more and more scientific publishing is done digitally, physical printing and distribution of materials has historically been a venture with large upfront costs, combined with specialised knowledge and equipment. These upfront costs carry significant financial risks, which can only be borne by a large company that is able to spread such risk over multiple ventures.

It's also a well-known fact that most scientific literature has a very small readership. Current estimates of the audience size of the average journal article range from single to triple digits.
Yet at the same time, there is a great number of scientific articles that are published every day. With such fragmented readership, there is little possibility for scientific texts to gain widespread attention in the same way a newspaper article or a viral video might. Therefore, to a broad audience it is virtually unknown what the value is of any given article relative to all the other articles that are available.
As already stated, some of this uncertainty is remedied through the reputation of the journal the article is published in. This reputation is mostly based on the reception of the works the journal has published in the past, as well as the academic standing of its current editor(s). There have been attempts at quantifying this reputation through metrics like a journal's impact factor, which essentially measures how often articles from a journal have been cited by other scientists. But as Goodhart's law states, any measure that becomes a target ceases to be a good measure, so such undertakings merely repackage the problem instead of solving it.
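For reference, the standard two-year impact factor is just a ratio: citations received in a given year to articles the journal published in the two preceding years, divided by the number of citable articles it published in those years. A minimal sketch with invented numbers:

```python
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Two-year impact factor: citations in year Y to items from Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical journal: 240 articles in the two prior years, cited 1080 times
print(impact_factor(1080, 240))  # -> 4.5
```

The number is easy to game precisely because both the numerator and the denominator are under partial editorial control, which is the Goodhart problem mentioned above.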

Both industries thus have a small customer base, and these customers ought to be sceptical of the goods the middlemen provide and the high prices they ask for them.
So how do these middlemen leverage their position to create profits for themselves?
In the arts it is a simple question of gallerists charging very high commissions for their work, so that a handful of sales can provide an adequate amount of turnover, especially when their risk is spread over a number of artists.
In scientific publishing, exorbitant profit margins only arrived around the turn of the millennium, and to see why this is the case requires a short history lesson on copyright law, and in particular how such laws were implemented in the United States of America.

The foundation of today's copyright legislation was laid at the Berne Convention in 1886. This type of copyright is based on an idea of author's rights, where the creator of intellectual property is also automatically the owner of intangible rights relating to that work. These reproduction rights could then be licensed to a third party, such as a publisher. This can happen in different ways, but it must be noted that a perpetual exclusive license to reproduce the work is an option, even when the author retains the copyright in such a case.

This is in contrast with the common law idea of copyright, which is much more focussed on the economic right to publish and distribute. The United States, whose legal system is based on common law, was thus late in incorporating the principles of the Berne Convention. In the early 20th century, automatic copyright for individuals did not exist there, but a publisher could register the publication of a work at the copyright office to obtain its copyrights.
It's a bit of an oversimplification, but it wasn't until the Copyright Act of 1976 that the intellectual property laws of the United States became more closely aligned with those of most other countries.

This change quite directly led scientific publishers to mandate that their authors sign over their copyrights to the publisher, instead of licensing their papers. At best this can be seen as a good-natured attempt to retain the best publishing standards possible, but it's much more likely that this decision was aimed at retaining control over the substantial capital the publishers had amassed up until that time.
For example, in the 1966 edition of the Handbook for Authors of Papers in the Journals of the American Chemical Society, the section on 'Liability and Copying Rights' is only half a page long and simply states 'The Society owns the copyright for any paper it publishes'. This was true under the federal copyright laws of the USA at the time, which required registration at the copyright office.
Interestingly, the section on 'Liability and Copying Rights' of the 1978 edition of the Handbook for Authors of Papers in the Journals of the American Chemical Society was nearly twice as long as the previous edition. It now contained the following phrase: 'Under the terms of the Federal copyright law, effective January 1, 1978, scientific publishers who wish to obtain copyright ownership of papers in their journals are required specifically to obtain such ownership from the author of each paper. Since it is necessary for the widest possible dissemination of scientific knowledge that the society own the copyright, authors are required to transfer copyright ownership before publication of their manuscript.'

This last sentence is simply not true. A perpetual, even exclusive and unrestricted, license to publish poses no practical objections. However, such a license would still leave the ultimate ownership in the hands of the author, so that the publisher could not license the work out to third parties. Transfer of copyright ownership is thus an issue of control of the work beyond any direct publishing efforts in the publisher's own journals.

However, it might have been vital for publishing companies to protect those interests. In the 1970s, publishing was still a complex and costly business, with large upfront costs and little or no guarantee that anybody would be interested in the final product. The publishing industry had a high risk of failure, and the small number of scientific publishers that survived did so because they originally published books that turned out to be of particular significance and relevance to other scientists. Unlike most of their publications, these tomes had several reprints and made a healthy profit, which could offset the cost of the many failures.
It is, however, impossible to predict which publications become a hit, and if the publisher didn't own the copyright, such a reprint would most likely have had to be renegotiated with the author. This author has by then of course seen how well their book is selling, so they'll likely want a bigger cut for themselves, or could even have the reprint done by a different publisher altogether. This was therefore debilitatingly risky to a publisher at the time, and so the transfer of copyright ownership may have been a reasonable request in the 1970s.

This all changed towards the 2000s and the advent of widespread internet access. Through more than a century of publishing and consolidation in the industry, a handful of scientific publishers are in possession of enormous archives, and because of their insistence on copyright ownership, they have full control over them. Through digitisation these materials are now also easily searchable, and through the internet they can be distributed at negligible cost to the publishers. In other words, the publishers' material capital is now more valuable than ever, while their operating costs have fallen dramatically.
As a result their profits have risen to extraordinary heights. To illustrate this fact, two of the top ten highest-paid CEOs in the Netherlands head scientific publishers.

In summary, the presence of middlemen is necessary in both art and science to create credible signals about the quality of goods. And in both these markets, the middlemen have understood the necessity of their presence and found ways to leverage that power into great profits by essentially exploiting the weak negotiating position of their suppliers and in some cases those of their clients.
Such predatory practices are much lamented in both industries, yet I'm unaware of any proposed solution that could remedy the problem. Many such initiatives are focused on the (financial) inequality between the artist/scientist and the gallery/publisher, but I believe a solution can only be found in making information about the quality of goods readily available to end users.

As a final remark, it must be noted that book publishing in the arts is a market that functions remarkably well, considering the difficulties that exist in scientific publishing and the sale of artworks. In art publishing, there is a healthy market of buyers and sellers, while risks and profits are usually shared in reasonable terms between the artists and the publishers.
The reason for this is simply that an art book can quite literally be judged by its cover. When selecting which art book to buy, an interested buyer usually can find the books that appeal to them by considering the design of their covers. Art books also retain their value rather well, so that even if a mistake is made, a buyer can still resell the book at only a minimal loss. Therefore information on the quality of a good is widely available, while the cost of misinformation is marginal.
This is the exact opposite of scientific publishing and the market for artworks, where credible information is hard-won and the costs of getting it wrong can be extremely high.

Monday, 29 December 2025

'Use your imagination to find a way into level 7'

When a friend of mine was studying 19th century literature, she and her classmates complained to their professor that there were too many books on the compulsory reading list. They argued that they wouldn't have the time to read multiple 400+ page books in just a few weeks. The professor simply replied that if they lived as people did during the period the books were written in, without TV, without radio, without computers and phones, then they would find it easy to consume that much literature.
The takeaway was thus that in order to appreciate something created in a certain time, one also needs to understand the broader context of its creation and reception.

I was reminded of this anecdote when I recently started playing the original The Legend of Zelda. Designed by Shigeru Miyamoto in 1986 for the Famicom, or Nintendo Entertainment System, the game is famous for its open world exploration that helped to shape the way games are made today.

I had already acquired the game about ten years ago, yet I had quickly given up on playing it then. I couldn't get a grip on the game and found it too frustrating to figure out what to do and where to go. My mistake was that I tried to play the 40-year-old game like one would approach a modern game: by going in blind, without any prior knowledge, without even reading the instruction manual. Unsurprisingly, the abundant limitations of 1980s technology were unable to properly impart to me the subtleties of the game's design.

So, when I decided to retry the game, I aspired to adhere as closely as possible to the experience and expectations that someone would have had when the game first came out.
The internet wasn't yet a presence in people's homes, but printed media were a vital and abundant source of information. I therefore sourced a copy of the game's printed instruction manual and read it thoroughly. Magazines like Nintendo Fun Club News also contained maps of the game that showed the location of many (hidden) aspects, as well as tips on how to approach traversing its landscape. Miyamoto had also meant for kids to collaborate on beating the game by exchanging information, so I found it acceptable to consult a modern internet guide for the beginning of my journey. This meant that I wouldn't spend a long time finding vital items to aid me in the early parts and could focus my energy on exploring the bulk of the game by myself.

In this manner, I found the game surprisingly forgiving and accommodating to the player. The present-day consensus is that this is a difficult game, but this seems to be principally an issue of knowledge. Going in head first is not always the answer, yet learning, or developing, some strategies to overcome obstacles is nevertheless easier than in some later games.

As for the game itself, you play as a boy named Link, trying to rescue princess Zelda and destroy the evil forces of antagonist Ganon. You do this by traversing the world, discovering useful items and weapons in underground labyrinths, and defeating the villains you find there.
Practically, this means that the game is broken up into an overworld, together with nine 'levels', which in theory can be played in any order. 

The overworld, the entrance to a level, and a level, or 'dungeon'.

The manual, however, mentions that 'if Link does not fight in the right Level order, he might meet a miserable end at the end of the labyrinth.' A player is thus warned from the start.
The locations of the first two levels, or dungeons as they are now known, are shown in the instruction manual. The third dungeon is easily found, or stumbled upon, by going left instead of right at the starting screen.
In the third dungeon the player also obtains the raft, which according to the instruction manual can be used to 'float across seas and lakes when Link launches this from a dock'. There are only two docks in the game and the closest one to the third dungeon leads directly to the entrance of the fourth.

The first four dungeons are thus straightforward to find, and complete, with only the information found in the instruction manual. It was only at the fifth dungeon that I encountered my first difficulty that had me seek further advice from a guide.
In the fourth dungeon one finds a clue telling you to 'Walk into the waterfall'. With some wandering around the overworld, I found the only waterfall in the game and walked into it. There I was greeted by an old woman who gave me another clue: 'Go up, up, the mountain ahead'. As I had arrived from the right, I proceeded left, and there I didn't find any path that led me up any mountain.
Confused, I found that the shortest route to this point would have had me enter from the left, and that the road up the mountain lay on the right, the place where I had come from. If I were a kid in the 1980s, I would have had more spare time and probably figured this out with a bit more trial and error, or else I would have seen the position of the fifth dungeon on a map found in the third issue of Nintendo Fun Club News...

Dungeon number six is once again easy to find if one wanders into the new area of the overworld that becomes accessible through the acquisition of the ladder in dungeon five, which 'lets Link cross holes or rivers that are as wide as he is', and whose workings are demonstrated immediately after it comes into the player's possession.
The dungeon is, however, the first genuinely difficult part of the game: it contains a greater number of powerful enemies, as well as an enemy that will eat your defensive shield. Although the player has probably died a few times getting to this point, this is the first part where a number of attempts will be required before proceeding.
However, with the patience, and spare time, of a kid in the 1980's, replaying the dungeon and progressing a little further each time is simply part of the fun. All it takes to beat this challenge is a little bit of practice that comes from a few repeated attempts.

In contemporary commentary on the game, dungeon seven is often considered one of the easiest in the game. I beg to differ and would argue it's the most difficult by some margin.
Dungeon seven is the most puzzle-centric dungeon in the game. Its clues are cryptic, if they are present at all, and they aren't covered by either the instruction manual or the maps found in Nintendo Fun Club News. Even the detailed 108-page book The Legend of Zelda: Tips and Tactics (available for Fun Club Members for $4.99) has only three pages dedicated to dungeon seven and provides no solutions to any of its puzzles.

Finding the entrance to the dungeon is the first chore. A clue is found in dungeon six, which tells you that 'there are secrets where fairies don't live'. Such a location is easily found, as there are two identical ponds where a fairy restores Link's health, while a third pond exists that has no fairy. These ponds are useful places that the player has surely found and noted at this point.
However, nothing happens when the player uses the strategies that have so far led to the discovery of secret passages, like bombing walls or burning bushes. Even the detailed Tips and Tactics book merely tells you to 'use your imagination to find a way into level 7'. If you proceed to try anything and everything, you'll find that the whistle, which otherwise summons a whirlwind that transports you to different parts of the overworld, now drains the water in the pond and exposes the entrance to the seventh dungeon. Or as the instruction manual clearly states: 'The whistle is the most mysterious of all the treasures in this game. [...] People say it opens up paths for Link.'
Such leaps of logic are exactly what gives games of this era their punishing reputation a few decades later.
Unfortunately, this is only the first curveball that dungeon seven throws at the player. In previous dungeons, hidden rooms could still be seen on the map of the dungeon. Yet in order to progress to the end of dungeon seven, you need to find the entrance to a room that according to the map doesn't exist. There is also a room with an enemy that is impossible to pass, and a text that says 'grumble, grumble'. The solution here is to use an otherwise completely optional item that can only be bought, at a rather high price, in some of the shops found in the overworld.
The boss of the dungeon is located at the end of a tunnel. The entrances to such tunnels are found by pushing blocks that thus far followed a few clear patterns. Although the location of the room with the tunnel entrance is hinted at in the dungeon, no mention is made of what kind of 'secret' is to be found there, not even in the Tips and Tactics book. The frustration of finding this entrance is further exacerbated by the presence of multiple enemies that are difficult to avoid and that transport the player back to the beginning of the dungeon.
There is no f-ing way that I would have figured any of this out with the materials that were available to me in the 1980's. The only redeeming quality of this dungeon is that its enemies are easy to beat, which makes the constant retracing of your steps somewhat bearable. Otherwise the dungeon undermines all the patterns the game has shown the player so far and demands that they make these mental leaps without any external help.
The only reason this dungeon is considered easy today is that its enemies don't put up much of a fight. Therefore it's a straightforward walk to the end if you already know where to go and what to do, but if you don't, then navigating its rooms is a Herculean task. 

Conversely, dungeon eight has a hidden entrance that isn't mentioned in any official materials in- or outside the game. Yet its location looks sufficiently out of place that I deduced its presence when I entered the screen for the first time, just after finishing the second dungeon.
This dungeon is a straightforward fight with many tough enemies. Like dungeon six, it has a reputation for being difficult, but all it takes is some practice and repeated attempts. 

I had some trouble finding the entrance to the ninth, and final, dungeon, because the only clue I had been given was that 'spectacle rock is an entrance to death'. Apparently the rock formation is meant to resemble spectacles or something?

Spectacle Rock

In any case, I was intent on finishing the game without any further hints. To my surprise, even the daunting labyrinth of the final dungeon was not too difficult to navigate with some repeated attempts and the aid of a notebook and pen. By the time I first entered the final dungeon, I had died 37 times. After recovering all the items hidden in the dungeon and defeating the final boss, Ganon, I had died only a further nine times.

So with about ten hours of playtime, I had finished the game that a few years earlier I had given up on in the first ten minutes. By playing the game in the way it was intended to be played, I was able to complete it with relative ease. Everything needed to complete dungeons one through four, or the first half of the game, can be found in the instruction manual that accompanied the game. Dungeons five and six are manageable with some determination or the aid of widely distributed maps. Dungeon eight can be found by simply being observant and the final dungeon is tough, but far from impossible.
The only part of the game that is poorly designed by any standard is the seventh dungeon. Its puzzles are too esoteric, its logic too convoluted, and there are too many variables to brute-force a solution. This is the only point where any player left to their own devices would throw up their hands in frustration.

Puzzles in a video game are notoriously difficult to design, as it's tricky to imagine the kinds of connections a player is (un)able to make with the information they have. For this reason, games today are extensively playtested during development. At the time of The Legend of Zelda's development, only a handful of people worked on a game, and they were literally figuring out how these things could be designed and implemented. It is clear that the developers wanted players to use their own investigative skills to solve the game's mysteries, yet at times they ask more from the player than is reasonable. Their ambition, combined with the novelty of the experience, has left The Legend of Zelda with some flaws that are difficult to ignore and that make the game unpalatable to modern audiences.

But in the end such observations are irrelevant. Shigeru Miyamoto himself has said that the inspiration for The Legend of Zelda was the feeling of adventure he had while exploring the forests of the Kyoto countryside as a child. Seen this way, it doesn't matter if you beat the game or not, and it doesn't matter if you discover all of its secrets.
In The Legend of Zelda, there is a strong sense of things to discover and there are genuine obstacles to overcome. No matter how far you progress, the game will be an adventure, and getting anywhere at all will leave you with a sense of accomplishment. In 1986, when most games were high-score chasers mimicking those designed to turn a profit in the arcades, it was a pioneering project and an experience that would leave a lasting impact on the mind of any child who decided to spend their time on it.