AJAX in Seaside
So, in yet another post in my series about Pharo and Seaside, I thought I’d highlight a great strength of Seaside: its incredibly powerful support for building rich, AJAX-enabled web applications.
As any web developer today knows, if you’re building rich web apps with complex user interactions, you’d be remiss not to look at AJAX for facilitating some of those interactions. AJAX makes it possible for a rendered web page, in a browser, to interact with the server and perform partial updates of the web page, in situ. This means that full page loads aren’t necessary to, say, update a list of information on the screen, and results in a cleaner, more seamless user experience (Gmail was really an early champion of this technique).
Now, traditionally, an AJAX workflow involves attaching JavaScript functions to page element event handlers, and then writing those functions so that they call back to the web server using an XMLHttpRequest object, after which the results are inserted into an element on the screen. Of course, doing this in a cross-browser way is pretty complex, given various inconsistencies in the DOM and so forth, and so the web development world birthed libraries like jQuery and Prototype, and higher-level libraries like Script.aculo.us. But in the end, you still have to write JavaScript, create server endpoints by hand, and so forth. Again, we’re back to gritty web development. And that makes me a sad panda.
Of course, this post wouldn’t exist if Seaside didn’t somehow make this situation a whole lot simpler, and boy does it ever. To illustrate this, I’m going to demonstrate an AJAX-enabled version of the counter program mentioned in my first post on Seaside. So, instead of doing a full page refresh to display the updated counter value, we’re simply going to update the heading each time the value changes. Now, again, imagine what it would take to do this in a more traditional web framework. Then compare it to this:
renderContentOn: html
	| id counter |
	counter := 0.
	id := html nextId.
	html heading
		id: id;
		with: counter.
	html anchor
		onClick: (html scriptaculous updater
			id: id;
			callback: [ :ajaxHtml | counter := counter + 1. ajaxHtml text: counter ]);
		url: '#';
		with: 'Increase'.
	html space.
	html anchor
		onClick: (html scriptaculous updater
			id: id;
			callback: [ :ajaxHtml | counter := counter - 1. ajaxHtml text: counter ]);
		url: '#';
		with: 'Decrease'.
That’s it. The full script.
Now, a little explanation. The script begins with a little preamble, initializing our counter, and allocating an ID, which we then associate with the header when we first render it. Pretty standard fare so far. The really interesting bit comes in the anchor definition, and in particular the definition of the onClick handler. Of course, this bit bears some explanation.
The various tag objects in Seaside respond to selectors that correspond to the standard DOM events. The parameter to such a message is a JSFunction object, which encapsulates the actual JavaScript that will be rendered into the document. In this particular example, we’re using part of the Scriptaculous library wrapper to create an “updater” object, a kind of JSFunction, which takes the ID of a page element and a callback. When the event fires, the callback is invoked with an HTML canvas, and when the callback returns, the contents of that canvas replace the contents of the indicated page element. Neat!
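To see the shape of the pattern outside the counter example, here’s a minimal sketch of my own (the div, the names, and the message text are mine, not part of the counter): give an element an ID, then hand any event selector an updater that re-renders just that element.

renderContentOn: html
	| id |
	id := html nextId.
	"A placeholder element that the updater will rewrite in place."
	html div id: id; with: 'never clicked'.
	html anchor
		onClick: (html scriptaculous updater
			id: id;
			"The callback runs on the server; whatever it renders replaces the div."
			callback: [ :r | r text: 'last clicked at ' , Time now printString ]);
		url: '#';
		with: 'Ping the server'.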
Back in our counter, then, we have two anchor tags, each with an onClick handler registered which, when invoked, updates the counter value and then updates the heading on the page.
By the way, there’s also a little bit of extra magic going on here. You’ll notice the ‘counter’ variable is local, while in the original example it was an instance variable. But this works, here, because those callbacks are actually lexical closures, and so the ‘counter’ variable sticks around, referenced by those closures, even though the method itself has returned, and the variable technically has gone out of scope.
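If closures are new to you, here’s the same trick in plain Smalltalk, with no web machinery involved (a toy sketch of mine, runnable in a workspace):

| makeCounter counter |
"The outer block creates a fresh n each time it's evaluated; the
inner block it returns keeps that n alive after the outer one exits."
makeCounter := [ | n | n := 0. [ n := n + 1 ] ].
counter := makeCounter value.
counter value. "1"
counter value. "2: the variable outlived its enclosing scope"

Each click handler in the example above captures ‘counter’ in exactly the same way.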
To me, the really amazing thing, here, is that never once do I, as a developer, have to even touch HTML or JavaScript. The entire thing is written in clean, readable Smalltalk, and it’s the underlying infrastructure that translates my high-level ideas into a functional, cross-browser implementation. Once again, Seaside lets me forget about all those annoying, gritty little details. I just write clean, expressive Smalltalk code, and it Just Works, exactly as I would expect it should.
Update:
If you want to see the above application running live, you can find it here.
Glorp - Early Impressions
Well, this was meant to be a shorter post, but alas, I’ve failed miserably. Oh well, suck it up… assuming anyone’s out there and actually reading this, that is.
Anyway, the topic today is… well, it should be evident from the post title: my initial impressions of Glorp. No, Glorp is not just the sound I make in the back of my throat while considering whether or not to ride the kiddie rollercoaster at West Edmonton Mall. It is, in fact, an object-relational mapping package for Smalltalk, which attempts to bridge the rather deep divide between the object-oriented and relational data modeling worlds.
Now, generally speaking, I tend to be a fan of ORMs. Of course, that’s probably because I’ve never really used one heavily in a production environment. But the idea of describing the relationship between objects and their tables in code, and then having the code do all the work to generate a schema, seems like a really nice thing to me. Of course, the real question then becomes, how hard is it to set up those mappings? And it turns out, in Glorp, the answer is: well, it’s a pain in the ass.
Okay, to be fair, there’s a reason it’s a pain in the ass: Glorp is designed to be incredibly flexible, and so it’s designed for the general case. Unfortunately, that means added complexity. What kind of complexity, you ask? Well, allow me to demonstrate, using my little toy project as an example: an online Go game record repository. As such, I need to store information about users, games, players, and so forth (well, there’s not much more forth… other than tags, that’s actually it). So, suppose we want to define a Game object and a User object, such that a Game contains a reference to the User that submitted it.
Now, before I begin, you need to understand that a database is generally represented by a single Repository class of some kind. That Repository class, which must be a subclass of DescriptorSystem, defines the tables in the database schema, their relationships, and how those tables map to the various objects in your system. This information is encapsulated in methods with a standard naming convention (how very Rails-esque), so if some of this looks a tad funny, it’s not me, it’s the naming convention.
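Concretely, then, the repository starts life as an otherwise-unremarkable subclass (the class and category names here are just my choices for this project):

DescriptorSystem subclass: #GRRepository
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'GoRepository'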
So, let’s begin by defining a User. First, we need to describe the table schema where the User objects will come from:
tableForUSERS: aTable
	aTable
		createFieldNamed: 'UserID' type: platform sequence;
		createFieldNamed: 'Name' type: platform text;
		createFieldNamed: 'Password' type: platform text.
	(aTable fieldNamed: 'UserID') bePrimaryKey.
This code should be pretty self-explanatory (a side-effect of Smalltalk’s lovely syntax). This method takes a blank DatabaseTable instance and populates it with the fields that define the User table. Additionally, it sets the PK for the table to be UserID. Easy peasy. Now, assuming the Users table maps to a class called GRUser, we define the class model that this table will map to.
classModelGRUser: model
	model
		newAttributeNamed: #userid;
		newAttributeNamed: #name;
		newAttributeNamed: #password;
		newAttributeNamed: #games collectionOf: GRGame.
Also straightforward. This specifies the various attributes that make up the GRUser class. Incidentally, you still need to declare a real GRUser class… all this code does is tell Glorp what attributes it should be aware of, and what they are.
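In other words, somewhere else you have a perfectly ordinary class along these lines (accessor methods elided):

Object subclass: #GRUser
	instanceVariableNames: 'userid name password games'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'GoRepository'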
Lastly, we need to define a “descriptor” for the Users -> GRUser mapping. The descriptor basically defines how the various attributes in the model map to fields in the table. Additionally, it defines the relations between the tables. So, here we go:
descriptorForGRUser: description
	| table |
	table := self tableNamed: 'Users'.
	description table: table.
	(description newMapping: DirectMapping) from: #userid to: (table fieldNamed: 'UserID').
	(description newMapping: DirectMapping) from: #name to: (table fieldNamed: 'Name').
	(description newMapping: DirectMapping) from: #password to: (table fieldNamed: 'Password').
	(description newMapping: ToManyMapping)
		attributeName: #games;
		referenceClass: GRGame;
		collectionType: OrderedCollection;
		orderBy: #additionTime.
So, for each field, we define a mapping. A DirectMapping instance maps an attribute to a field… err… directly. The ToManyMapping, on the other hand, sets up a relation, and maps the #games attribute of the GRUser class to the GRGame class. But how does it figure out how to do the join? That’s in the table and descriptor definitions for the Games table and GRGame object (note, I’m going to leave out the extra junk):
tableForGAMES: aTable
	aTable
		createFieldNamed: 'GameID' type: platform sequence;
		createFieldNamed: 'Submitter' type: platform int4;
		createFieldNamed: 'AdditionTime' type: platform timestamp.
	(aTable fieldNamed: 'GameID') bePrimaryKey.
	aTable addForeignKeyFrom: (aTable fieldNamed: 'Submitter')
		to: ((self tableNamed: 'Users') fieldNamed: 'UserID').

descriptorForGRGame: description
	| table |
	table := self tableNamed: 'Games'.
	description table: table.
	(description newMapping: DirectMapping) from: #additionTime to: (table fieldNamed: 'AdditionTime').
	(description newMapping: RelationshipMapping)
		attributeName: #submitter;
		referenceClass: GRUser.
So as you can see, in the table definition, we establish a foreign key from the Games table to the Users table, and then in the descriptor, we define a RelationshipMapping (which is a synonym for OneToOneMapping) from GRGame -> GRUser.
I hope at this point you can see the one big problem with Glorp: it’s really, really complicated. Worse, it’s not particularly well documented, which makes it a challenge to work with, especially if you want to do something “interesting”. As a quick example, in my schema, the Games table has two references to the Players table, one for the white player, and one for the black player. This greatly confuses Glorp, which means I had to do a bit of manual work to get the relationships set up. Here’s how the black player relation is established (there may be a better way, but I don’t know what it would be):
blackField := table fieldNamed: 'Black'.
playerIdField := (self tableNamed: 'Players') fieldNamed: 'PlayerID'.
mapping := (description newMapping: RelationshipMapping)
	attributeName: #black;
	referenceClass: GRPlayer.
"Spell out the join by hand, since Glorp can't guess which of the
two Players references this mapping should follow."
mapping join: (self
	joinFor: mapping
	toTables: { self tableNamed: 'Players' }
	fromConstraints: { }
	toConstraints: { ForeignKeyConstraint
		sourceField: blackField
		targetField: playerIdField }).
And then it’s basically the same thing for the white player. Mmmm… ugly.
But, all that said, once the mappings are set up, suddenly Glorp can be a real joy to work with. Here’s the code necessary to add a user, and then query him back out:
| user |
user := GRUser withName: 'shyguy' andPassword: 'secret'.
"Registering inside a unit of work makes Glorp write the new row on commit."
self session inUnitOfWorkDo: [ self session register: user ].
self session readOneOf: GRUser where: [ :each | each name = 'shyguy' ].
The query is of particular interest. That looks an awful lot like an ordinary Smalltalk select: block, but it is, in fact, translated into an SQL query, which is then run against the database. And that is pretty darn cool. It almost looks like a pure object store, à la Magma, and that’s mighty impressive.
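For completeness, the session itself comes from a Glorp Login; this is roughly what mine looks like (the connect string format, username, and password are placeholders for your own setup):

| login session |
login := Login new
	database: PostgreSQLPlatform new;
	username: 'go';
	password: 'secret';
	connectString: 'localhost:5432_go_repository'.
"GRRepository is the DescriptorSystem subclass holding all the mappings."
session := GRRepository sessionForLogin: login.
session accessor login.

And since the where: block is translated to SQL rather than evaluated in the image, you can traverse a mapped relationship inside it and Glorp will generate the join for you, something like:

session read: GRGame where: [ :each | each submitter name = 'shyguy' ].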
Persistence in Squeak
Ah, deliciously punny post title. You’ll see, assuming you make it to the end of this thing… don’t worry, I won’t blame you if you don’t.
Over the last week or so, I’ve been working on a little toy project, partly to fill a need I have, and partly to fiddle around with developing a web application in Seaside and Smalltalk, and specifically the Squeak implementation of Smalltalk.
Now, there are many parts needed to build a functional web-based application. Obviously you need a web server to actually serve the application. You need some sort of language to implement the application in. You need a framework in which to actually build that application (okay, sure, back in the days of the wild west, people built their own, but you’d be a fool to do that today given the plethora of frameworks available which simplify the web development process). And last but not least, in all probability, you need a data persistence solution.
Of course, the first thing that comes to mind when turning one’s thoughts to persistence is a good old-fashioned relational database, which has been the cornerstone of data persistence for many a decade now. But when one is working in a deeply object-oriented language like Smalltalk, working with a relational database becomes rather cumbersome due to the substantial impedance mismatch between relational and object-oriented data modeling. As a result, we as an industry have turned to tools such as automated object-relational mappers (e.g., Hibernate) to try and ease the pain of this mismatch, but in general, the results aren’t what I would call pretty.
Which is why, during my first hack at leveraging a persistence solution for my little Seaside application, I decided to try something entirely different: an object-oriented database called Magma. Unfortunately, it didn’t go too well.
On Magma
Magma is a very interesting project. As a persistence solution, it really aims for the same space occupied by GemStone/S: completely transparent persistence for object-oriented data models. By that I mean the idea is that you hand Magma an object graph, and it persists it in its own custom data format on disk. When you pull it back out, Magma reifies the parts of the object graph you’re interested in, and when you modify the graph, Magma spots the changes and reflects them back into the persistent store.
Of course, on the face of it, this seems like absolute magic. You simply work with your objects. When you want to persist a change, you just do something like:
session commit: [ model doStuff ].
And voila, everything just, well, works. Of course, persistence is about more than just simple object storage, in that you also need to be able to query the data model, and be able to do so in an efficient manner. To that end, Magma provides a few specialized collection objects, such as the MagmaCollection class, which provide interfaces for applying indexes, querying, sorting, and so forth.
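For the record, getting a session in the first place looks something like this (quoting from memory, so treat the port number and login details as placeholders):

| session |
session := MagmaSession
	hostAddress: (NetNameResolver addressForName: 'localhost')
	port: 51969.
session connectAs: 'shyguy'.
"The root is a dictionary; anything reachable from it gets persisted."
session commit: [ session root at: #games put: OrderedCollection new ].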
So, on its face, Magma looks like a fantastic solution! The transparent persistence model makes it dead easy to manipulate your data model, and you no longer have to jump through all the object-relational modeling hoops that one would normally have to deal with.
But, alas, it’s just not that easy.
Unfortunately, Magma has one serious fault that rules it out for all but the most basic data-driven applications: it’s slow. Additionally, because Magma absolutely requires per-attribute indexes for any collection you want to query, the number of indexes in a data model can grow substantially, particularly in data mining/exploration tools. Worse, Magma steps on a rather nasty performance problem in Squeak whereby large numbers of files in a single directory (as in, thousands) cause the FileDirectory class to bog down… and guess what happens when you create a large number of indexes? That’s right, a lot of files get created in a single directory, and so you get utterly dismal performance when any index is initially opened.
And as if that weren’t enough, in order to really squeeze decent performance out of Magma, you must start tweaking what are called “read strategies”. See, when you start reifying an object graph, you have to make a decision on how deep to go before you stop. After all, if you have a deep tree of objects, unless you plan to traverse that whole tree at some point, it’s a waste of time to load the whole thing all at once. So the “read strategy” dictates at what depth various parts of the object graph are read. But ultimately, what this equates to is deep micromanagement of the database behaviour, and, quite frankly, I have absolutely no interest in that.
Thus, after many days of fighting, I’ve decided to throw out Magma. Which is rather painful, as I already have an object model built up assuming its use. Fortunately, the very nature of Magma means you don’t really tailor the object model too tightly to the database, but things do leak through here and there, and the model itself must, to some extent, be designed to facilitate querying, traversals, and so forth. So the move from Magma to an RDBMS will necessitate rethinking my data model.
A Way Forward?
So what now? Well, I’ve decided to take the hit and switch to a solution based on Glorp, an object-relational mapping system for Smalltalk, and PostgreSQL, that venerable RDBMS. Of course, this will likely come with its own issues, first and foremost one of installation…
Unfortunately, while Squeak package management has taken a step forward with Monticello, the management of dependencies between packages, and inconsistencies between platforms (e.g., Pharo vs. Squeak), means that things are a lot harder for the user than they need to be. In this particular case, the original Glorp port is rather old, so the folks developing SqueakDBX have worked to port over the latest version to Squeak, with some success. Their installation script, however, doesn’t appear to work in Pharo, so I had to resort to pulling in their loader classes and executing the installation steps by hand. Tedious, to say the least.
But, on the bright side, I have a Pharo image that seems to have a functional Postgres client and Glorp install, so I can start fiddling with those tools to see if they can meet my needs.
Which brings me back to the double entendre. Because returning to the Squeak world has reminded me of one thing: Occasionally the tools get in your way as much as they clear it out for you, and so sometimes you really do need to be incredibly… yes, I’m gonna say it, get ready… here it comes… persistent.
Why Developers Should Be Writers
In my many years in the software development industry, not to mention my many years in the software development education industry, I’ve been continually amazed by the tacit acceptance of the fact that many (most?) software developers are terrible writers. The university programmes don’t require anything beyond a simple English 101 class, and companies simply accept the fact that many of their people are, at best, barely literate. It’s a sad, stupid state of affairs, and I figured I’d take a few minutes to explain why I think it’s a detriment to the industry as a whole.
You see, in my mind, at its core, software development is fundamentally an act of communication. Of course, there’s the obvious fact that a developer must take their ideas and communicate them to the computer, which then executes them. But as developers, we must also communicate ideas to our users, through the user interfaces we build. And we must also communicate ideas to other developers through the code itself, not to mention the comments therein (after all, as any developer will tell you, development is as much, if not more, about reading code as it is writing it).
Similarly, writing is, obviously, an act of communication. When a writer writes, their goal is to take amorphous, ephemeral ideas, and turn them into concrete, written words which preserve the essence of those ideas and communicates them to the reader.
Now, in order to communicate complex ideas through written word, one must master some very basic skills:
- The ability to clearly conceptualize an idea and transform it into a more concrete expression.
- The ability to break down that idea into simple parts that can be easily explained.
- The ability to explain those parts in a way the reader can understand.
- The ability to take those parts, now explained, and to synthesize them into a coherent whole.
Does this sound anything at all like software development?
Furthermore, a capable writer pays attention to detail. He is as much concerned with the way an idea is expressed as he is with communicating the idea itself. For example, I could’ve written this entire post in short, terse sentences with no paragraph breaks. But I care as much about how these ideas are communicated as I do about the actual act of communicating them.
Similarly, in the area of software development, while two developers may derive the same solution to a problem, one may choose to write terse, difficult to read code that’s poorly formatted and organized, and consequently difficult to maintain, while the other may produce code that’s precisely the opposite.
By now you can probably guess what I’m getting at: I would surmise that you would find a correlation between developers who are skilled writers and those who produce code that’s clean, readable, and maintainable. Now, that’s not to say there aren’t exceptions; I’m sure there are many, many developers out there who are great writers yet terrible developers, and vice versa. But I would contend that, statistically, you would find a correlation between writing skill and development skill, because at their core these two disciplines are really very similar.
So why is it that we accept such poor writing skill in the development community? Quite honestly, I’m not sure. I think part of the issue is the fundamental belief that software development is an engineering skill, a process that’s dominated purely by technological problems that must be solved with technological solutions. I suspect it’s also driven by a false dichotomy, the idea that writers are “thinkers” and technologists are “doers”. But I truly believe it needs to change. Meanwhile, the next time I interview someone, I may be tempted to ask them to write a short essay on a topic of my choice…