The Seaside Web Framework
While I’m aware that I have, what, maybe two readers of this blog, I thought I might actually start regularly writing a few posts on some of my recent work in the realm of software development. Why? Well, I enjoy writing, and I enjoy… let’s call it “self-gratification”, so posting on my blog seems like a great way to satisfy both of those needs.
So, with all that said, I bring you the kickoff post, covering Seaside.
A Little Introduction
Anyone who’s done any amount of serious web development understands what an absolutely horrible place we, as a development community, find ourselves in. We’re still manually authoring HTML, hacking Javascript, writing AJAX callback hooks by hand, and generally doing all the nasty, gritty, ugly work to make rich web applications possible. Of course, frameworks and abstraction layers have come along to make this a bit easier (Google’s GWT is a great example), but in the end, many of us are still stuck in the dark ages when it comes to web development.
Enter Seaside.
Okay, no, wait, let’s back up one step further.
A Little Pre-Introduction
You all know what Smalltalk is, right? For those not in the know, it’s a nice, high-level, consistent, clean object-oriented programming language that is really the grandfather of many of the programming languages we see today.
Of course, if that were the whole story, we’d probably all be using Smalltalk today. But, alas, the history of Smalltalk is a messy one, sharing many similarities with the Unix battles of old, plagued by myriad incompatible, expensive implementations that drove developers away to other solutions.
Furthermore, it’s a little strange in at least one respect: rather than code being stored in files and compiled into binaries, the entire environment, including all your code, is composed into a single “image” from which you must do all your work, including editing, debugging, and so forth. This has great advantages, for example:
- The entire environment is available to you and can be inspected and modified as you desire.
- Deploying an application involves just copying over an image and firing up a VM.
But there’s also major disadvantages:
- You must use the tools provided in the environment (i.e., editor, debugger, etc.).
- Integration with version control systems isn’t necessarily that great.
- It can be tough to figure out where your code ends and the system begins.
So the picture is certainly mixed. But the sheer power of Smalltalk, the language, and the encompassing environment makes it, at the very least, incredibly intriguing.
As for implementations, for hobbyists, the most commonly used environment is Squeak, or its more professional cousin Pharo. I’ve settled on the latter, as it seems to be taking a more serious, focused tack, but it’s really a matter of preference.
By the way, what I’ve said isn’t actually true of GNU Smalltalk, which takes a more traditional file-based approach, but having never used it, I can’t really speak to its viability as a platform. Of course, feel free to take a look at it and let me know what you think!
Where Were We
Oh yeah. Enter Seaside.
So what’s Seaside? Well, it provides an advanced web development framework for Smalltalk that allows the developer to just, you know, get on with it already.
Yeah yeah, I know, you’ve heard that before. So let me illustrate an example for you, and perhaps you’ll see why Seaside excites me so much.
The Example
The program we want to develop is incredibly simple:
- It presents a counter to the user.
- It presents a “decrease” link which lowers the counter.
- It presents an “increase” link which increases the counter.
That’s it. Now imagine, in a traditional web framework, how you would do this. Well, obviously, you need some amount of state, here, in order to track the counter. You could squirrel the value away in a hidden field in a page form (seriously ugly). Or you could assign the user some kind of session ID, and then track the state on the server, using that session ID as a reference (somewhat complicated). Either way, you, the developer, have to focus on how, exactly, that state will be managed.
Now let’s look at how this program would be expressed in Seaside. First, a class declaration:
WAComponent subclass: #Counter
    instanceVariableNames: 'count'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Counter'
This is a simple class declaration describing a subclass of WAComponent named Counter, and containing an instance variable called ‘count’. Okay, so now we need an initializer:
Counter>>initialize
    super initialize.
    count := 0.
Again, nothing too special here, we just want to initialize our superclass and our counter. But now comes the meat of the program, and the magic:
Counter>>renderContentOn: html
    html heading: count.
    html anchor
        callback: [ count := count + 1 ];
        with: 'increase'.
    html space.
    html anchor
        callback: [ count := count - 1 ];
        with: 'decrease'.
Voila, that’s the entire application, including links and state management.
No, really, that’s it. The whole thing.
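(One small housekeeping note: to actually visit this thing in a browser, the component also has to be registered as a Seaside application. In Seaside 3.x that’s a one-liner you evaluate once in a workspace; older versions use a class-side registerAsApplication: method instead. Assuming the class name above:

WAAdmin register: Counter asApplicationAt: 'counter'.

After that, the app shows up at /counter on whatever port your server adaptor is listening on, typically 8080 in a stock image.)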
So, how does it work? Well, first…
A Bit On Blocks
Like other high-level languages such as Perl and C#, Smalltalk supports closures, which it calls blocks. A block encapsulates a chunk of code along with its lexical scope, and that code can then be invoked later at your leisure. For example:
| var block |
var := 5.
block := [ Transcript show: 'Hello world, my value is '; show: var printString; cr ].
The variable ‘block’ now contains a reference to a closure which we can then invoke later with:
block value.
This block remembers everything in its lexical scope, so, for example, the variable ‘var’ will retain its value, 5, and be emitted on the transcript. This fact, that closures are stateful code objects, is key to the way Seaside works.
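To hammer the point home, here’s a tiny sketch of a stateful block, which is essentially a miniature version of what a Seaside callback does:

| count incrementer |
count := 0.
incrementer := [ count := count + 1 ].
incrementer value.
incrementer value.
Transcript show: count printString; cr. "prints 2"

The block doesn’t just remember the value of count when it was created; it holds a live reference to the variable, so every invocation mutates the same piece of state.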
Back To The Example
So, in Seaside, you never hand-write HTML. There aren’t even any templating languages. You generate all your HTML with code.
Yes, I know, this is weird, but bear with me.
You see, this has a major advantage. Consider the following piece of code from the example:
html anchor callback: [ count := count + 1 ]; with: 'increase'.
Of course, this spits out an anchor. Nothing fancy there. But notice how we didn’t specify a URL? That’s weird enough. But notice something else? There’s an argument called ‘callback’, and we’re providing it a block of code. Can you guess what’s happening here?
That’s right. Under the covers, Seaside generates a URL for us. When the link is clicked, Seaside invokes the callback automatically. And because the block remembers its lexical scope, it can fiddle with the count variable, incrementing it.
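If you’re curious, the anchor Seaside spits out looks roughly like this (the exact parameter names and values vary by version, and the keys here are made up; the point is just that the href encodes a session and callback identifier rather than a URL you wrote):

<a href="/counter?_s=pXqTAGyn9CrCVIFv&_k=Kq6kJNTV&1">increase</a>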
So because we let Seaside generate the HTML, suddenly our program is incredibly simple. Under the covers, Seaside manages all our state for us, associating an instance of the Counter object with our browser session. When those links are clicked, the callbacks are invoked in the context of that Counter instance and can manipulate the state of the system. Suddenly we’re no longer hacking HTML, parsing CGI parameters, and all that hideous garbage. We simply write what we want (‘when the user clicks this link, increment the counter’), and Seaside does the rest.
Conclusion
So there you go. A really quick intro to Smalltalk and Seaside. As you can tell, this is incredibly exciting to me. Why? Well, developing web applications has always struck me as incredibly tedious. Rather than just being able to write my damn application, I’m stuck parsing query parameters, managing state, manually handling state transitions, and a whole bunch of other garbage that’s really only peripheral to the actual act of building an application. Seaside, on the other hand, gets rid of all that tedium and lets me focus on the important thing: building a powerful application.
And note, I’ve only just scratched the surface here. Among Seaside’s other powerful features, it has cleanly integrated:
- jQuery
- Prototype
- Scriptaculous
- A general AJAX framework for doing partial page updates
- And probably a whole bunch of other stuff.
Mighty cool if you ask me.
So, all this said, again, the picture isn’t completely rosy. As with all things, there are many issues that Seaside developers must face:
- Myriad persistence solutions that are of mixed quality.
- Code management issues.
- Deployment issues.
- Scaling and performance challenges.
And probably other stuff, too. Which will, of course, be fodder for further posts on this topic.
Git Lesson 2 - Pushing a local repo into SVN
For some time now I’ve been using git as my front end to the Subversion server at work, and I’ve never looked back. And as a result, one of the things I occasionally find myself doing is creating a local git repository in order to manage little side projects I happen to be working on. But, of course, eventually those projects need to be pushed into SVN, and in the process, it’s nice if one can preserve the local commit logs (it’d be trivial to just push the blob of code into SVN and then create a new, local git-svn repo, but that’s not nearly as nice).
Fortunately, git makes this remarkably easy. First, in your git repo, rename master so you can get it out of the way:
git branch -m master local
Next, you need to configure your git-svn bridge. My last blog entry on git covers this topic, and it’ll probably look something like this:
git config --add svn-remote.trunk.url svn+ssh://svn/repo/
git config --add svn-remote.trunk.fetch trunk/project:refs/remotes/trunk
Then, fetch the new git-svn bridged repo:
git svn fetch trunk
When you do this, because you don’t have a master, git will kindly create one for you corresponding to the new git-svn bridged branch. Lucky! So now we just need to get the local branch changes into master.
Ah, but there’s some trickery, here. If you were to just do a naive merge from local to master, the root commit on master would end up getting tacked onto the end of the local branch, which is exactly not what we want to happen. The solution is to rebase local to master first:
git checkout local
git rebase master
Then you can merge and dcommit:
git checkout master
git merge local
git svn dcommit
Git will then proceed to push each of your local commits into SVN, and voila, you’re done! Then you can just delete the local branch, as you obviously don’t need it anymore.
Monitoring Log Files With pwatch
On my MythTV Backend, I find there are a number of error conditions that I want to monitor and be alerted about should they happen. For example, as of late, I’ve been having issues with one of the drives in my RAID configuration (under load I’m getting errors that I think are the result of an old SATA controller), which causes the RAID to drop into degraded mode and error messages to be logged by the kernel. In a situation like this, I wanted a tool that could monitor my log files and email me if “interesting” things happen.
Now, the first thing I did was search the web for something that would do the job. swatch popped up immediately as one alternative. It’s a nice, simple Perl script which takes a configuration file that defines a log file to monitor, and a series of rules which define what to look for. Unfortunately, it can only monitor one log file at a time (you need to run multiple instances and have multiple configuration files if you want to monitor multiple files), and it has to run continuously in the background. And, quite frankly, the configuration file is a tad byzantine for my taste.
Another common option is logwatch. This application is definitely a lot more flexible, but the configuration is, again, rather complicated. And, at least as far as I can tell, it’s really meant to be run once a day for a given date range, as opposed to operating as a regular, polling application.
And thus ended my search, with the conclusion that it’d really be a lot simpler just to write my own tool. And so pwatch was born. pwatch is a simple Perl script that takes an Apache-style configuration file and processes your log files. Each matching event triggers an action, and then the event is recorded in an SQLite database. Run pwatch again and it’ll skip any events it’s seen before and only report new ones. The result is that you can just fire off pwatch in a cronjob on a regular basis (I run it every five minutes), and it can alert you if something interesting has happened.
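For reference, the cron side of this is about as simple as it gets; the path below is just where I happened to drop the script, so adjust to taste:

*/5 * * * * /usr/local/bin/pwatch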
Now, pwatch is pretty basic at this point, and I probably won’t add much more to it unless people ask for it (or unless I need it). For example, at this point, the only action it knows how to take on an event is to send out an email. But adding new features should be trivial enough, so if anyone has any ideas, let me know. And if you find pwatch useful, send me an email!
Using IPv6 to mitigate SSH attacks
So, one of the ongoing issues that anyone with a public-facing server has to deal with is a barrage of SSH login attempts. Now, normally this isn’t a problem, as a decent sysadmin will use fairly strong passwords (or disable password-based logins entirely), disable root logins, and so forth. But it’s certainly an irritant, and so it’s worth implementing something to mitigate the issue.
Now, traditionally, there are a few general approaches people take:
- Use iptables or something similar to throttle inbound ssh connection attempts.
- Coupled with the previous, implement tarpitting (this slows down ssh responses, which means the attacker ends up wasting time and resources on each connection).
- Implement something like fail2ban to automatically detect attacks and dynamically add them to a set of block rules (managed with something like iptables).
- Move SSH to a non-standard port.
All of these work reasonably well, and particularly for the lazy, something like fail2ban on Ubuntu is dead easy to deploy and works quite nicely. Of course, there’s always the chance that you lock yourself out if you fumble a few login attempts, so it’s not without its risks.
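For reference, the iptables throttling approach from the first bullet usually looks something like the following sketch, built on the recent module (the 60 second window and hit count are just illustrative, so tune them to taste):

# Drop sources that have opened too many new SSH connections recently...
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH --update --seconds 60 --hitcount 4 -j DROP
# ...otherwise record the attempt and let it through.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH --set -j ACCEPT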
But I recently discovered a fifth option which, at least at this stage of IPv6 growth, works incredibly well: disable inbound SSH over IPv4. See, most attackers aren’t v6 connected. Meanwhile, acquiring v6 connectivity remotely is usually just a matter of running a Teredo tunneling client. The result is perfectly workable remote accessibility, while the number of SSH attacks is cut down to essentially zero.
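With OpenSSH, one way to pull this off is a two-line change to sshd_config, followed by a restart of the daemon:

# /etc/ssh/sshd_config
AddressFamily inet6
ListenAddress ::

With AddressFamily set to inet6, sshd only binds the IPv6 stack, so IPv4 clients simply can’t reach it at all.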
Of course, this won’t last forever. In the future, v6 is likely to get deployed more widely, and I suspect I’ll start seeing v6-based ssh attacks. But until then, this solution is dead simple to deploy and works great!
Update:
And naturally, just a day after I finish writing this, I decided to fiddle around with NX for remotely accessing this server, and lo and behold, NX doesn’t support IPv6. :) So, I’m back to using fail2ban, until NX can get their act together (though, to be fair, latency over my v6 tunnel has an unfortunate negative impact on NX performance, and so I’m not sure I’d use v6 even if I could).