'Good Ideas Through The Looking Glass' by Niklaus Wirth
Niklaus Wirth is one of the most prominent computer scientists and technology advocates, but do his assertions hold weight today?
February 23, 2016 • 9 min read • Literary Review
This paper was originally written for my course CIS*4500 - Programming Languages, taken at the University of Guelph. The paper I'm reviewing is an old Computer Science paper by Niklaus Wirth - Found Here
I spent a significantly larger portion of my time preparing for this assignment than I did actually writing it. I read large sections of many of the papers and realized how difficult it was going to be to support my review with valid criticisms without being too subjective. This is especially true when reviewing comments made by some of the most renowned programmers of our time: Dijkstra, Holt, Wirth, Hoare and Ritchie. For this reason, instead of a formal critique, my paper expands on quotes by the author that may have been vague, unclear, or in need of a devil’s advocate (such as myself).
“Good ideas, through the looking glass” is a Computer Science paper written by Niklaus Wirth in 2005. Unlike many other popular STEM articles, Wirth didn’t focus on the introduction, benefits, or flaws of any specific computer system or paradigm. Instead, he wanted to discuss “an entire potpourri of ideas, listed from the past decades of Computer Science and Computer Technology”. Specifically, he turned his attention to historically unique or novel ideas, and whether or not they withstood the test of time. As Wirth points out, it’s important to “try to learn from the past, be it for the sake of progress, intellectual stimulation, or fun” (Wirth, 2006).
The article is a fantastic read, and brings due credit to many forgotten concepts, though there were some areas where Wirth’s statements are slightly unclear or exaggerated. Specifically, my review will discuss a single statement from each of the three sections of the article. Within Programming Language Features, I discuss the statement that the equal sign as an assignment was “a notorious example for a bad idea”. In Miscellaneous Techniques I explore the idea that Wizards typically “are obstinate and immortal like devils”. Finally, in Computer Architecture, I examine the notion that “the idea of an expression stack proved to be rather questionable”. In no way are these statements wrong, but due to their subjective nature they deserve to be explored further. My comments should be considered additions to Wirth’s original paper, as opposed to replacements.
“A notorious example for a bad idea was the choice of the equal sign to denote assignment. It goes back to FORTRAN in 1957 and has blindly been copied by armies of language designers. Why is it a bad idea? Because it overthrows a century old tradition to let = denote a comparison for equality, a predicate which is either true or false”
Wirth comments that there was a traditional and widely accepted usage of the equal sign that Fortran wrongly changed in 1957, causing mass confusion. While it’s true that Fortran (as well as C, Perl, Python, etc.) used the equal sign as an assignment, and the Pascal family (Ada, Eiffel, APL) used it as a comparison (Germain, 2006), it’s a drastic simplification to imply that the debate over the proper usage of the equal sign began with these languages. The = symbol, now universally accepted in mathematics for equality, was first recorded by Welsh mathematician Robert Recorde in The Whetstone of Witte (Recorde, 1557). Within the book, Recorde explains that his design of the ‘twin lines’ was to avoid the tedious repetition of the words “is equal to”. He wrote, “I will set (as I do often in work use) a pair of parallels, or Gemowe lines, of one length (thus =), because no two things can be more equal” (Recorde, 1557). While it could be considered a conditional assignment (i.e. “given that x is equal to y”), based on the context of the symbol it seems that Recorde originally intended the equal sign to be read as an absolute assignment (i.e. “x is equal to y”).
However, as quickly as the symbol was invented, it was misinterpreted. Historically this failed to gain significant attention, as the context of an equation generally implied the intended usage of the equal sign. While it’s true that after the 1800s the symbol was generally accepted as a conditional, there was no universal symbol for assignment, and documents were still full of ‘improper’ uses. The distinction gained attention and significance with the introduction of computation. Consider the following code: other than the run-on equation in Line 5, most individuals would understand it without a problem, as context distinguishes between assignments and conditions. A computer, however, needs to know the difference explicitly, so most programming languages would not be able to compile this code.
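The original listing did not survive in this copy; a plausible stand-in of my own (the specific equations are illustrative, not from the source), with the run-on equation appearing on Line 5, might look like:

```
1:  x = 4              (assignment: x becomes 4)
2:  y = x + 3          (assignment: y becomes 7)
3:  y = 7              (a human reads this as a check; a compiler sees assignment)
4:  z = y - x
5:  x = z + 1 = y - 3  (run-on: equalities chained mid-derivation)
```

A human reader resolves Lines 1–4 from context alone, but Line 5 mixes assignment and equality in a single chain, which is exactly the ambiguity a compiler cannot tolerate.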
Wirth also extends his distaste to other symbols, such as incrementing via ++. This, like right-associativity, was indeed a poor design decision, as it blurs the distinction between statements and expressions. While there’s no saying which usage of the equal symbol is best (it’s subjective!), it’s easy to see that there should be two universally accepted, mutually exclusive symbols: one for conditions and one for absolute assignments.
“This author’s experiences with wizards were largely unfortunate. He found it impossible to avoid confronting them in text editors. Worst are those wizards that constantly interfere with one’s writing… if at least they could easily be deactivated, but typically they are obstinate and immortal like devils.”
Within Wirth’s Miscellaneous Techniques section he touches on the poor implementation of Wizards, implying that they generally do more harm than good. He maintains that wizards are indeed necessary, but are largely executed improperly. While I don’t disagree that there were, and still are, very evil software wizards out there, it’s important to give due credit to the good wizards, if you will. A Wizard (also known as an Assistant, or Helper) is a help feature of a software package that automates complex tasks based on user inputs (Christensson, 2006). Wizards (as opposed to daemons) are processes or features within a program, never a program itself. They are most common when installing new software, as a Setup Assistant. However, they are also used frequently to automate tasks within software, such as in text editors: spellchecking, auto-indenting, auto-capitalization, auto-saving, and more or less any action that occurs without being explicitly told to do so (Scott, 2011).
Wirth’s most significant gripe with Wizards is that they generally cannot be customized and are permanent. However, this is one of the only features that make wizards, well, wizards! If a user needed to initialize or start up a wizard, it would simply be a separate task or process. It’s important to realize that we’ve been spoiled by the wonders of wizards. Most individuals recognize when wizards aren’t working properly, such as inaccurate autocorrecting, or automatically changing certain strings into symbols, but no longer recognize their plentiful benefits: capitalizing, saving, or inserting opening and closing quotation marks when necessary. I agree with Wirth’s sentiments: wizards that get in the way of the task at hand should be quickly re-evaluated or destroyed. But that isn’t to say that we shouldn’t be forever grateful for all of the work performed by our wizards. I mean, they are called Wizards for a reason!
“Still, the idea of an expression stack proved to be rather questionable, particularly after the advent of architectures with register banks in the mid 1960s. Now simplicity of compilation was sacrificed for any gain in execution speed. The stack organization had restricted the use of a scarce resource to a fixed strategy.”
Wirth’s section describing the timeless yet flawed ideas of computer architecture was incredibly insightful and well written. I had to read it multiple times, looking for a quotation that I could critique, or at least add to. I chose these lines as I felt he potentially overemphasized the questionability of expression stacks as they stood at that point in time.
F. L. Bauer and E. W. Dijkstra independently proposed the same scheme for the evaluation of arbitrary expressions: when evaluating an expression from left to right, following the priority rules, the operand obtained last is the first one needed, which is exactly the behaviour of a stack. This was a convenient notion at the time, as implementing the strategy in a register bank was straightforward, and it marked the introduction of stack computing (Koopman, 1994). The significant problem with stack machines was their need for a fixed, predefined set of registers, sometimes wasting precious resources. While the architecture was quite successful for a while, it was quickly replaced after the introduction of register machines.
Wirth explains these events quite well in his article, and outlines the advantages that architectures with register banks had over expression stacks. However, stack machines do have their advantages, and it wouldn’t be doing expression stacks any justice not to discuss them. Stack machines may have larger operand loads than register machines, but they have smaller instructions and smaller total object code (Shannon & Bailey, 2006). A stack machine’s compact code naturally fits more instructions in cache, and therefore could achieve better cache efficiency, reducing memory costs or permitting faster memory systems for a given cost. Also, what stack machines lacked in performance, they made up for in the simplicity of their compilers. While many of these benefits are overshadowed by the massive performance enhancement provided by register computing, they are not without value. Stack machines are still used today in some research, small hardware, and hybrid machines. Expression stacks served as a novel solution at the time, retired only after better hardware allowed for more dynamic approaches (Sinnathanby, 2012). I might argue that instead of “expression stacks [being…] rather questionable”, it was the notion that stack machines were the best architecture for our computing that was questionable.
Niklaus Wirth’s article “Good ideas, through the looking glass” was a fantastic piece of literature. He outlined the most significant ideas of computation within the last century, not by their success, but by their novelty and genius. Through his discussion of computer architectures, language features, and miscellaneous techniques, he does his best to provide as much information as possible about each idea, its benefits and its downfalls. While I certainly wouldn’t change anything about the article, there were some topics which I felt could have been explored further and presented differently. His statements about equal signs as assignments, wizards, and expression stacks were informative and quite accurate. However, he seemed to end each section by defining its subject as a bad or questionable idea (if not outright calling them devils). While, again, potentially valid, I believe these ideas deserve a little more credit, and their benefits could have been further explored. Subjectivity aside, however, this is another fantastic article from Niklaus Wirth.
Christensson, P. (2006). Wizard Definition. Retrieved February 28, 2016, from http://techterms.com
Germain, J. (2006). Variable Assignment. Retrieved February 28, 2016, from http://www.cs.utah.edu/~germain/PPS/Topics/assignment_statement.html
Koopman, P. (1994). “A Preliminary Exploration of Optimized Stack Code Generation” (PDF). Journal of Forth Applications and Research, 6(3).
Recorde, R. (1557). The Whetstone of Witte. Retrieved February 27, 2016, from http://www.maa.org/press/periodicals/convergence/mathematical-treasures-robert-recordes-whetstone-of-witte
Scott, B. (2011). Wizard. Retrieved February 29, 2016, from http://designinginterfaces.com/patterns/wizard/
Shannon, M., & Bailey, C. (2006). “Global Stack Allocation: Register Allocation for Stack Machines”. Retrieved February 27, 2016.
Sinnathanby, M. (2012). Stack based vs Register based Virtual Machine Architecture, and the Dalvik VM. Retrieved February 29, 2016, from https://markfaction.wordpress.com/2012/07/15/stack-based-vs-register-based-virtual-machine-architecture-and-the-dalvik-vm/
Wirth, N. (2006). Good ideas, through the looking glass [computing history]. Computer, 39(1), 28-39.