February 7, 2010
Today’s reading material is about another ancient framework for software development — the finite state machine — but with a contemporary twist.
The finite state machine is one conceptual framework for breaking down a system or subsystem into modular, easily testable, easily maintainable pieces. It essentially consists of a graph of states and transitions. The machine is always in exactly one state, but inputs can cause it to transition to new states.
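To make the states-and-transitions idea concrete, here is a minimal sketch in Python of the classic coin-operated turnstile. The state names, events, and table layout are my own illustration, not from any particular library:

```python
# A minimal finite state machine: the classic coin-operated turnstile.
# States and transitions live in one table; the machine is always in
# exactly one state, and each input either moves it or is ignored.

TRANSITIONS = {
    ("locked", "coin"): "unlocked",    # paying unlocks the arm
    ("locked", "push"): "locked",      # pushing a locked arm does nothing
    ("unlocked", "push"): "locked",    # passing through re-locks it
    ("unlocked", "coin"): "unlocked",  # extra coins are wasted
}

class Turnstile:
    def __init__(self):
        self.state = "locked"

    def handle(self, event):
        # Look up (current state, input); stay put on unknown inputs.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

machine = Turnstile()
print(machine.handle("push"))  # locked
print(machine.handle("coin"))  # unlocked
print(machine.handle("push"))  # locked
```

Because every legal transition sits in one table, testing the machine is just a matter of walking the table, which is much of the appeal of the pattern.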
There are a number of ways of depicting finite state machines visually, but they typically involve a lot of bubbles and arrows, as in:
Although the end result is a bit glitzy for my tastes, the thought process that went into creating it is so common-sense it’s uncommonly brilliant.
December 31, 2009
Another under-sung hero of computer science: Douglas Engelbart.
One of the most prolific inventors ever to work in the computer industry, Engelbart invented the mouse in the mid-1960s, a few years before one of his colleagues carried the idea to Xerox PARC.
Yet the mouse was just a small part of Engelbart’s fascinating “oNLine System” (NLS). In 1968, Engelbart presented his collaborative networked system at a conference, a couple of decades before the World Wide Web had even been conceived.
I can’t help but think of him as a mad genius.
You can read about Engelbart or even watch his 1968 conference presentation, where he introduced NLS (including the mouse) to the world:
December 31, 2009
Today’s topic is ostensibly Data Flow, though the linked article deals with the offshoot dubbed by its inventor “Flow-Based Programming”.
More often than not, data flow programming involves using a GUI to connect boxes to one another. The connections direct the “flow” of data through reusable components to build a software system.
There are some interesting examples in the audio and video worlds, where signal generators or input sources are routed through processors, filters, mixers, and so on. Such systems allow non-computer-scientists and non-hardware engineers to build complex software signal processors quickly, and visualize and test the results along the way.
Business process managers tend to follow the data flow programming model, too.
Although data flow programming often depends on niche proprietary development environments, the concepts involved can be applied when designing any software. The key is to separate the logic from the data structures during system design. This separation facilitates the design of reusable logic components — input sources, data generators, filters, output targets, and so on. Such reusable components can significantly ease the development of new features; and modifying existing logic often becomes a simple matter of surgically inserting or removing reusable logic components.
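The separation described above can be sketched in plain Python, with no proprietary environment at all: each component below is a generator that consumes an upstream iterable and yields downstream. The component names are illustrative, not from any framework:

```python
# Data-flow style in plain Python: sources, filters, transformers,
# and sinks are independent components connected into a pipeline.

def numbers(limit):
    # Input source: emits a stream of integers.
    yield from range(limit)

def evens_only(stream):
    # Filter component: passes only even values downstream.
    return (n for n in stream if n % 2 == 0)

def scaled(stream, factor):
    # Transformer component: multiplies each value by a factor.
    return (n * factor for n in stream)

def collect(stream):
    # Output target: drains the stream into a list.
    return list(stream)

# Wiring components together is just composition; swapping a filter
# in or out doesn't disturb the rest of the pipeline.
result = collect(scaled(evens_only(numbers(10)), factor=10))
print(result)  # [0, 20, 40, 60, 80]
```

Inserting a new processing step is the "surgical" modification mentioned above: add one generator to the chain and nothing else changes.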
Of course, object-oriented fanatics have a hard time separating logic from data. But an open-minded OO developer might save herself a lot of bad code and software bloat by keeping the data flow paradigm in mind the next time she designs a new software system or reverse-engineers an existing one.
J. Paul Morrison developed a more-or-less data flow system for a Canadian bank (possibly the Bank of Montreal). He writes about Flow-Based Programming in a quirky style, with a few antiquated ideas thrown in for good measure. His intriguing essay was originally published as a book, now out of print:
December 16, 2009
Today’s topic: Alonzo Church.
Church created the Lambda Calculus, which laid the groundwork for functional programming and LISP.
He also supervised Alan Turing, and between them they established the equivalence of the Lambda Calculus and the Turing Machine for computing the values of mathematical functions.
He’s one of the grandparents of computer science but he doesn’t seem to get a lot of attention these days.
Princeton in the 1930’s was an exciting place for logic. There was Church together with his students Rosser and Kleene. There was John von Neumann. Alan Turing, who had been thinking about the notion of effective calculability, came as a visiting graduate student in 1936 and stayed to complete his Ph.D. under Church. And Kurt Gödel visited the Institute for Advanced Study in 1933 and 1935, before moving there permanently.
An introduction to the Lambda Calculus:
An interesting rant about how Church’s and Turing’s theories have been twisted in literature over the past few decades:
The Lambda Calculus is especially intriguing to computer scientists because its evaluation is pure substitution, which leaves no room for side-effects. Minimizing the side-effects caused by the functions & methods we write is a good way of minimizing bugs, and it also makes unit testing a whole lot easier for all you Test Driven Development nuts.
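The substitution idea carries over surprisingly directly into any language with first-class functions. As a toy illustration (my own, using Python lambdas), here are Church numerals: numbers encoded as pure functions, with no state or mutation anywhere:

```python
# Church numerals: a number n is "apply a function f, n times".
# Everything below is a pure function; evaluation is substitution.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def add(m, n):
    # Apply f m times, then n more times.
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret a Church numeral by counting applications of f.
    return n(lambda i: i + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two, two)))  # 4
```

There is nothing to mock and no hidden state to set up: every expression can be tested by evaluating it, which is exactly the property the side-effect-averse crowd is after.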
Reading about Church and his Lambda Calculus can give a developer a whole new perspective on her approach to programming.
December 2, 2009
The commercial open source software space has very little in the way of philosophy and critical analysis. There are bloggers, yes. But this paper is one of the few decent essays I’ve ever seen on the topic.
So despite the lack of empirical evidence presented to back up the paper’s claims, I think & hope that 10 years from now Dixon’s essay will be considered an important step toward understanding how to run a business which revolves around open source software.
Also, for whatever it’s worth, my suspicion is that understanding the future of “commercial open source software” is also understanding the distant future of commercial software…
November 29, 2009
Today’s article describes the Actor Model.
The Actor Model is one part methodology, one part framework for building parallel, distributed systems. Each “Actor” is an agent which does its own thing, and communicates with neighbouring Actors. Both object-oriented programming and peer-to-peer networks share many of the concepts and constraints of Actor model systems.
From one of the early papers on the Actor Model (“Viewing Control Structures as Patterns of Passing Messages”, Carl Hewitt, 1976):
The actor metaphor for problem solving is a large human scientific society: each actor is a scientist. Each has his or her own duties, specialties, and contracts. Control is decentralized among the actors. Communication is highly stylized and formal using messages that are sent to individual actors.
To me the most startling and interesting concept of the Actor model is that communications are carried out between Actors A and B by creating a “messenger” – itself an actor! – and sending the messenger to actor B.
In object-oriented programming, this would be a bit like calling a method with the ability to decide what parameters to pass after the method has started executing. This would be a hardcore drug for dependency injectionists!
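A toy sketch may help make the mailbox-and-messenger idea concrete. This is my own minimal single-threaded runtime, not Hewitt's formulation: each actor owns a mailbox, messages are first-class objects in the spirit of the messenger idea, and a scheduler delivers them one at a time:

```python
# A toy actor runtime: actors own mailboxes, messages are objects,
# and a round-robin scheduler delivers one message at a time.

from collections import deque

class Messenger:
    # The message itself is an object carrying its target and contents,
    # echoing Hewitt's "messenger" notion (simplified here).
    def __init__(self, target, contents):
        self.target, self.contents = target, contents

class Actor:
    def __init__(self, name, behaviour):
        self.name, self.behaviour = name, behaviour
        self.mailbox = deque()

    def send(self, contents):
        # Sending never runs the target directly; it just enqueues.
        self.mailbox.append(Messenger(self, contents))

def run(actors):
    # Deliver messages round-robin until every mailbox is empty.
    log = []
    while any(a.mailbox for a in actors):
        for actor in actors:
            if actor.mailbox:
                msg = actor.mailbox.popleft()
                log.append(actor.behaviour(actor, msg.contents))
    return log

greeter = Actor("greeter", lambda self, msg: f"{self.name} got {msg!r}")
greeter.send("hello")
greeter.send("world")
print(run([greeter]))  # ["greeter got 'hello'", "greeter got 'world'"]
```

The key constraint, even in this toy, is that actors never share state: all coordination flows through the mailboxes, which is what makes the model attractive for parallel and distributed systems.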
Hewitt’s 1976 paper is seminal:
However, I do think there is better reading on the topic; Gul Agha’s articles in particular are a much more enjoyable read:
There is also a great little collection of links on the “E programming language” website: