The Fight has been revived!

So I’m just going to pretend I was on a “sabbatical” for about a year.

I found blogging difficult because, with only so much time in the day, I wanted to spend it actually working on something instead of just writing about it. My other big problem was that I actually wanted to supply blog posts with substance, and not just regurgitate other crap. I did that for a couple of posts and it just felt dirty. Then I had some posts about the whole Spindle language thing, which honestly was not quite on topic either.

Eventually I just gave up and stopped caring. When it came time to renew the domain and hosting service, I felt stupid paying the money. Looking back, I only actually wrote 10 posts!


Basically, two things happened. The first was that I had done enough thinking and learning and soul searching with Spindle that I needed an outlet and feedback, and that of course means the internet.

The other thing that happened was that I saw this posting by Dion Almaer of Ajaxian fame. I wanted that job. Like, I really wanted that job. But I would never leave the job I have now. However, it did get me thinking about my hireability. Sure, I'm doing something right now that shows off some pretty good programming chops, but what do I have to show for it? The framework I built isn't open source, and the project I'm developing isn't finished. The fact is, I have never really worked on a project that I would consider finished or successful. Sure, a couple of minor projects here and there, but I mean, there's nothing I could point Dion to and say, "Look at that! I did that!"

I started making a list of things I thought would help me feel confident in applying for this job. (Again, not actually doing it, but what would make me confident if I were going to.) Among that list of things was some kind of a web presence. I realized that now that I had taken the site down, I didn't even have a site to point to. Being a web developer, that seemed really bad.

Once more, with feeling

So for take two, I’m trying to learn from my mistakes.

  1. Don't spend money – Until I can be sure I'm on a roll with this, I don't want to spend any money; that way, I won't end it for fiscal reasons.
  2. Keep it focused – I have a lot to say, but I felt that RE:The Fight was not the right place for most of those thoughts, so I started another blog to keep each one focused on its specific purpose. That one is Spindle Journal, as seen on the left.
  3. Just write – I don't want to just dump crap here, but I'd like to think that if I lower my standards a little, I'll write more, and then I'll naturally get better at writing quality posts faster.

Stay tuned for more on the fight. There has been a lot brewing, and I’ve got a lot to say!


So I looked at Mashups, and I looked at my own personal gripes about application development on the web, and I started trying to find a solution. I began at the language level, because that is the level at which I work (as a programmer), and also the level at which I think.

The problem space of creating a nice application development platform is one with a lot of answers. The problem of secure distributed computing with untrusted sources has far fewer. So that's the part I'm approaching first.

Erlang, Gears, Isolation

Mashups are happening on the web now (not securely), and Gears is an attempt to fix that, but let's be realistic. We cannot keep putting band-aids on this. (Well, we can. I just hope we don't.) Without straying too far from what has made the web successful, what else is out there that works?

Erlang comes to mind. (I know I've said this stuff before in previous posts, but I thought it would be good to refresh.) The message passing model works well for concurrency, but also for security. Both are going to be important for us. There is no way to avoid concurrency in a system built for network communications. The internet is one fat concurrency issue. The single-threaded JavaScript model doesn't work, and asynchronous callbacks are clumsy. Additionally, isolating processes with no shared state and a message passing model is a great way to keep untrusted sources from wreaking havoc. The smart people at Google recognized this and created Gears workers. The question you may be asking is, "Well, if Gears does this, why do we need a whole new language?" I'll tell you. JavaScript was never built to use a message passing model. Using strings to pass messages around sucks. The lack of message pattern matching is a pain. It still doesn't solve the major problem of accessing the DOM. And asynchronous callbacks are still clumsy.
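To make the string-passing complaint concrete, here's a toy in-process sketch of the Gears-worker style of messaging. Everything here is illustrative (the `makeWorker` helper is made up, not a real Gears API); the point is the manual stringify/parse and hand-rolled dispatch that every handler ends up with.

```javascript
// A toy in-process "worker", simulating string-only message passing.
// makeWorker is an illustrative stand-in, not a real Gears API.
function makeWorker(onmessage) {
  return { postMessage: (text) => onmessage(text) };
}

const log = [];
const worker = makeWorker(function (text) {
  // Because only strings cross the boundary, every handler starts with
  // manual parsing and a hand-rolled dispatch; there is no pattern matching.
  const msg = JSON.parse(text);
  if (msg.type === "add") {
    log.push(msg.a + msg.b);
  } else if (msg.type === "greet") {
    log.push("hello " + msg.name);
  } else {
    log.push("unknown message");
  }
});

// Structured data has to be flattened to a string on every send.
worker.postMessage(JSON.stringify({ type: "add", a: 2, b: 3 }));
worker.postMessage(JSON.stringify({ type: "greet", name: "Russ" }));
// log is now [5, "hello Russ"]
```

Erlang-style receive clauses would replace that if/else ladder with declarative patterns, which is exactly what's missing here.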

So let's say we put Erlang in the browser. Would that actually work? Doubtful. Erlang unmodified has plenty of its own problems, or rather, it has some design choices that would not work well for the browser. For one thing, its strict immutability and almost purely functional syntax would not fly with the general public. For another, it just has no concept of a sandbox. It's all or nothing.

The Sandbox

Many languages have implementations that operate in a sandbox. Java prides itself on sandboxing. It has those marvelous confirmation dialogs. JavaScript is also sandboxed. In both of these cases the sandboxing is really only protecting the user and the system from the program. What both of them lack is a way of protecting one part of the program from another part of the program.

The first principle in conceiving of Spindle was that it must have the fundamental concept of a sandbox: a barrier protecting one part of an application from another. A sandbox is more than a process. A sandbox is basically defined as the place where code lives. No code is shared between sandboxes. A function in one sandbox is not available in another unless it is explicitly loaded in that one as well. It goes without saying that they do not share state. A sandbox can spawn one or more lightweight processes to actually execute code. These processes share a code base. This way code does not have to be loaded in duplicate into each process. The only communication between sandboxes is through messaging. In this way, it is the same as between processes.
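A toy model may help pin the idea down. This is just an illustrative sketch in JavaScript, nothing resembling real Spindle: each "sandbox" holds its own loaded code, shares nothing with its peers, and is reachable only through messages.

```javascript
// Toy sandbox model: code lives in a sandbox, and a function loaded into
// one sandbox simply does not exist in another. Names are made up.
function makeSandbox() {
  const code = new Map(); // functions loaded into THIS sandbox only
  const inbox = [];       // the only way in: messages
  return {
    load(name, fn) { code.set(name, fn); },
    send(msg) { inbox.push(msg); },
    // A "process" drains the inbox using only code loaded here.
    run() {
      return inbox.splice(0).map((msg) => {
        const fn = code.get(msg.fn);
        return fn ? fn(msg.arg) : "no such function here";
      });
    },
  };
}

const a = makeSandbox();
const b = makeSandbox();
a.load("shout", (s) => s.toUpperCase());

a.send({ fn: "shout", arg: "hi" });
b.send({ fn: "shout", arg: "hi" }); // "shout" was never loaded into b

const fromA = a.run(); // → ["HI"]
const fromB = b.run(); // → ["no such function here"]
```

The real design would of course run sandboxes in genuinely isolated processes; the sketch only shows the visibility rule, that loading code into one sandbox grants nothing to any other.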

Breaking up data from behavior

So I was thinking about this message thing. I thought about my own experience passing json between client and server. Luckily for my team, we built a system for automatically marshalling and unmarshalling, so we never dealt with the raw json. It seemed to me that if messages were basically json objects, that should be built in for free.

So I thought that it might be a good idea to clearly divide the data from the behavior. An object has certain “properties”. When turned into message format, these properties become the data structure a la json. When accepting a message object, it should be just as easy to flip it back.

Imagine you send a Person object from one sandbox to another. Or even from server to client. Using json notation, let's say it just looks like:

 {"firstName":"Russell", "lastName":"Leggett"}

But you actually have a Person class (or prototype) with some functions. Instead of having to create a new Person object and transfer the data, it would be nice to be able to ‘imbue’ the object with Person functionality. Like so:
personVar <- Person

Now this might seem like something as simple as copying functions over like many utilities do, but there’s more to it. It would be more like swapping out the prototype on the fly.
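The closest JavaScript analogue I can think of is swapping the prototype on plain parsed data. This is only a rough approximation of the "imbue" idea (the `Person` object and `fullName` method here are made up for illustration), but it shows the difference from copying fields into a freshly constructed object:

```javascript
// A rough approximation of "personVar <- Person": take plain data as it
// arrives from json and swap in a prototype so it gains behavior.
const Person = {
  fullName() {
    return this.firstName + " " + this.lastName;
  },
};

// Plain data, e.g. fresh off the wire:
const personVar = JSON.parse('{"firstName":"Russell","lastName":"Leggett"}');

// No new object, no field copying: just swap the prototype on the fly.
Object.setPrototypeOf(personVar, Person);

const full = personVar.fullName(); // → "Russell Leggett"
```

The appeal is that the data half round-trips through the message format untouched, and the behavior half is re-attached on arrival.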

This thought process brought me around to the idea that the specification for functions should stay away from properties. This breaks the fundamental concept of prototypes, but hey, I’m not trying to build JS2.

Those were my main initial thoughts going into Spindle. There are a lot more ideas I’ve come up with since that starting point, and I’ll get to those next.


So I tried to think about the web from the perspective of an application platform. Having built what we have, what could we do differently if we could start from scratch? Clearly, security is one of the biggest problems. It is a problem with existing applications, but many of those security holes can be protected against with effort. The even harder security problems are the ones that have no solution yet: the kind that Google Gears is attempting to solve. Mashups have become a popular idea that cannot meet their true potential due to insecure connections between domains/owners/code bases.

In a recent presentation, Douglas Crockford talks about moving the web forward. There is a lot of great stuff here. There always is. But there are two major things that I wanted to address.

  1. “The next great leap [in software] might realize the dream of assembling software like Lego.” He further stated that that leap was being realized already through Mashups.
  2. Mr. Crockford goes on to describe how the current web technologies (both open AND closed) are far too insecure for anything but the most trivial Mashups. He then lays out how the web can move forward:
    • Safe JavaScript subsets (Caja, Cajita, ADsafe)
    • Communicating Vats (Gears)
    • Secure Programming Language (?????)

Interestingly enough, I recently saw something on InfoQ about "Lego" software in a presentation called "The Lego Hypothesis" by James Noble. His presentation is worth a listen, but it is long and rambling and harder to summarize than Douglas Crockford's. However, the subject matter was relevant. He discusses the history and feasibility of the dream of Lego-block software, the same concept Crockford believes will be the "next great leap". Noble demonstrated the complication in the Lego dream: mostly, that it's a lot more complicated than plugging different parts together. You cannot build a complete application out of simple reusable parts. Some things have far too many dependencies to be simply abstracted into a reusable plug interface. In contemporary programming, the best we can hope for is to glue together what we can for reuse. Near the end of the presentation, he did in fact point out how Mashups have the right idea and that more and more software will go in that direction.

I agree with Crockford and Noble. The fact is that modern software and the web are fully intertwined. There is no going back to the dark ages of isolated machines. It's not just about RIAs anymore. And the fact is, as we move more and more towards networked information, we will need the ability to integrate between parties that have to operate under mutual suspicion. Crockford's third point about how the web can move forward has some question marks next to it, but I hope that Spindle can be a possible solution: a language with the goals of secure distributed computing.

I know I keep putting this off. I started this post with the intent of actually describing Spindle, but I guess it’ll have to wait till next time.

State of the Web

Things just keep getting better. What can I say? A lot of the fear I had about the open web shriveling up and dying has been all for naught. The Open Web Foundation is pushing to keep ideas and technology open, Google has thrown its hat in with Chrome, and Harmony has spread across the web. Yay. Heck, there's even an Open Web Podcast doing a much better job than me of keeping an accurate record of what's going on.

But enough of all that! This meaningless blog was never about that stuff. It’s about my practically non-existent pursuit of building the next generation of web technology. I just wrote this post to say that I think we’re moving away from the fears that haunted me in the night. However, I think my gripes about HTML5 are still pretty valid. I’ve been watching the mailing list a lot and I really do think they’re making great progress. What with the video tag and the inter-document communication and all that other crap. I think it’s going to make a lot of people happy. Including me. I just don’t think it’s enough. But I said that already.

Sooo… I’m gonna start talking about what I’VE been doing lately in my tiny tiny tiny amount of free time. At the moment, it’s called Spindle, and it’s a programming language that I’m designing to better handle the problem space of the web and the untapped potential of distributed computing.

A Web Application Platform

The Layers

Ok, so it's another ambitious, future-looking post. I've been thinking a lot about the architecture of a browser platform that fits what I've got in mind. Looking at the HTML5 mailing list largely confirms my thought process. A few weeks ago I noticed a thread in the list regarding text in the canvas element.

> I still think by introducing the drawString() method into Canvas we are
> opening the same can of worms that was open in SVG.
> If we go that way we will need a drawParagraph() method to draw multi
> line strings or blocks of text with wrapping and a bounding width. We
> also need to be able to stylize the text, i.e. changing the font-weight
> / color / font-style … of any word.
> The list goes on and on … and HTML and CSS already cover it all.
> The HTMLElement.drawElement() method should be no problem to implement
> since userAgents already do render HTMLElements.
> Having it return an ImageData object will make it insanely simple to
> manipulate in Canvas. The text elements/contents can easily be in the
> fall back content of the Canvas tag thus keeping it accessible. Getting
> the bounding box of an HTMLElement is no problem either in JavaScript.
> And applying gradients and patterns can be done using a fillRect() with
> the appropriate globalCompositeOperation.
> Everything (almost) is there. Let’s not re-invent a square wheel.

Let me just summarize what I think is important here. Text on the canvas highlights something very key: the browser is in many ways a specialized graphics engine. Under the hood it is capable of a lot of things, but through HTML we are given just a small subset. SVG and Canvas are also subsets of those capabilities, built for other purposes. Really, one can think of each of these, in addition to CSS, as domain-specific languages (well, Canvas is really more an API) that specialize in accessing certain portions of the browser's abilities, and there is a lot of crossover. SVG is markup-based vector graphics and Canvas is command-based, but they can both draw arbitrary shapes and create complex graphics. And if you look at what WebKit has been up to, you can see that they're pushing CSS to the next level as well. They've got support for CSS gradients, canvas drawing, reflections, and masks.

Cutting Through The Layers

I am reminded of the blind men and the elephant. Each man feels a different part and thinks it is a different animal, because they do not see the whole. Each of these languages gives us a piece of the elephant, but wouldn't it be nice to leverage the whole damn thing? I am a huge proponent of domain-specific languages, but they can't work in a bubble. Imagine defining Rails without Ruby. Maybe Rails can cut it for the 80%, but you need the general-purpose language when you have advanced logic Rails doesn't account for.

But Keeping Them

Let me stop for a second and once more reiterate that the concerns of a web application may not be the concerns of a web page. Advancing the abilities of HTML and CSS alone would go a long way for those concerned only with documents. The document aspect of HTML and CSS is extremely important and should not be marginalized. HTML and CSS should live on as specifications independent of the browser. That is the beauty of the open web. It is more accessible than just through an application on your PC at home. Content on the internet is accessible through any number of devices, and the specifications that we've built for the internet can live on without it: JavaScript as a programming language, HTML and CSS as a document format, and the whole ball of wax used for various widget platforms.

Bringing It All Together

So what do we do with this disparate collection of specifications that overlap and work with each other in various ways? For developers targeting the mainstream, it would be most advantageous to have a single, solid development platform. This is the draw of Flash and other plugins. So, for the sake of argument, let's say we start with that. This hypothetical development platform would be designed to be completely on par with (or I dare say better than) the offerings of other plugins/RIA platforms. The open web provided the seeds of innovation that have spurred the next wave of software. It should not be relegated to the back seat when RIAs become the norm.

I come from a Java world (I know, I know). While it is not perfect, it has a history of multiple implementations on multiple platforms by multiple vendors with a high degree of compatibility. The write once, run anywhere promise used to be something of a joke, but now it holds true more than for any other platform I can think of.

I don’t want the open web to become Java. Far from it. I simply think it is a technology that took a similar problem and came up with a fairly successful solution.

So What Am I Suggesting?

What we as open web application developers need is a true Web Application Platform. The same way that Java, Mac, Windows, etc. provide a complete platform for robust applications, we need something with similar capabilities, but solving a slightly different problem. The Web Application Platform needs to be safe, loadable from the web without installation, and able to communicate fluidly with the web while taking advantage of the power of the local machine. I want to be able to dig deep and have access to painting APIs, layout managers, and low-level loading APIs. HTML and CSS are high-level abstractions that can be layered over this, and the lower-level stuff should rarely be needed, but to truly be powerful, it needs to be there. Adobe AIR is getting close to this type of power, but clearly it is proprietary, and it is heavily dependent on Flash. I just want something more.


Where our hero does some hand waving…

…and pretends like he didn’t disappear into the void for over a month.

I hate to say it, but this blogging thing is tough. Between spring cleaning, a five month old baby, and a startup company, time can sometimes be a problem :) As for actually making any progress on code. I think that might be a pipe dream at the moment. Oh well. I’ll do what I can.

In my absence, there has been SOOO much great stuff going on, and I just wanted to mention some of it.

  • YAHOO! BrowserPlus was released. It's kind of like Gears, but with a different slant and different goals. It hasn't completely opened up yet, but they claim it will soon, so that's exciting. Unlike Gears, I'm pretty sure Y! is not as concerned about implementing/creating new standards. Maybe something like JSONRequest could go that direction, but certainly not FlickrUploader. I would actually say that if there were anything that might be considered a "new browser standard", it would be the idea of cross-browser plugins. Unlike Gears, which is closed to plugins by design, BrowserPlus is specifically built for the purpose of being pluggable. Imagine if you could write a Firefox plugin that could be used cross-browser. Wouldn't you be more likely to write one?
  • Google Gears turned one and became just Gears! – I’m really happy about this. After talking to Brad, I could really tell that the Gears team’s vision was set on helping bring all browsers up to speed and focus on new and old standards. I thought that was great but it always really bugged me that it was closely affiliated with Google. Dropping the “Google” part of the name is the first step, so “Good job guys!”. The next step would be to move the governance of the project outside of Google.
  • SquirrelFish – So awesome. Those WebKit guys just make my day every frickin' time. Too lazy to click the link? SquirrelFish is a new, super-fast JavaScript VM. Benchmarks even show it faster than Tamarin at the moment. Not much need for explanation here. The better-performing runtimes we get for the open web, the better it can compete against proprietary competition!
Ok, I guess that's enough for now. I really don't want to turn this into a news-aggregation blog, regurgitating things that I think are cool. You can just go to Ajaxian to see where I get MY news from. However, news regurgitation is easy, and I needed to write something. Also, I feel like such a Negative Nancy sometimes, and I thought a positive post would be nice for a change.

*Web* Developers vs Web *Developers*


I’m not trying to make the distinction a hierarchical one.  There is a whole spectrum of equally valid web developers.  Some come from a software engineering background and work mostly on the server, some don’t know much about programming, but they are whiz-tastic at crafting semantic markup and css.  Don’t forget the thousands of n00bs, the WYSIWYG users, and everything in between.  The web is for everyone. That is one of the most important aspects of the Open Web.


Let’s just break things down for the sake of a simplified discussion. The people developing for the web (whether it be sites, applications, or something in between) either have a background in programming or they don’t.  Looking at the origins of the Open Web technologies, specifically HTML, it was intended as a way of formatting and linking documents in a very simple way. No programming experience needed. As that changed, web technologies became more complex and programmers were needed.

So, following logic:

  1. Is it important to continue supporting both programmers and non-programmers?
  2. Can we have a single, unified model that supports both at once?
  3. If we had to choose a group as the highest priority, which would it be?

sigh – ok, let's give it a shot. Yes. Hopefully. The programmer group (don't hurt me). Basically, my argument would be this: you can't squeeze blood from a stone.


Let's say there is a set of functionality available to language X. (When I say language, I mean its syntax, but also its libraries and system APIs.) X is tedious to work with, but represents all possible functionality.

Y is a language built on X that can accomplish 90% of what X can do, but in a way that is much easier to work with.  It still takes a trained user to work with Y, but they can get a lot accomplished.

Finally, language Z is a domain specific language built on Y that is simplified for a specific use that takes very little training to use. It is a lot more forgiving to its users, and while it can only accomplish 20% of what Y can do, it can do 80% of what the users can possibly want.

I'm sure you already picked up on what I'm putting down, but I'll elaborate anyway. XYZ is a process that happens all over the place with great success. In fact, all modern computing is built on this very process. Machine instructions are like X. Anything the computer does must eventually boil down to them. But we build abstraction layers on top of them because it's the only way to build scalable software. Then we add more layers as it makes sense. One important thing to remember, though, is that you can always build on any layer. In the example, Z built on Y and Y built on X, but maybe language W also builds on X. Maybe W has a different idea of what the important 90% is, and what the syntax should look like. Maybe T, U, and V are all small languages like Z, but they just want to cover a different 20%. The problem is, it's very hard to reverse the order and build Y on top of Z instead of Z on top of Y.
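Here is one deliberately tiny, made-up instance of the X, Y, Z layering, just to ground the letters in code. All three functions are invented for illustration; the point is only the direction of the dependencies.

```javascript
// X: the raw, tedious-but-complete layer (character-by-character string ops).
function replaceRange(str, start, end, insert) {
  let out = "";
  for (let i = 0; i < start; i++) out += str[i];
  out += insert;
  for (let i = end; i < str.length; i++) out += str[i];
  return out;
}

// Y: a friendlier general-purpose layer, built on X.
function replaceFirst(str, needle, insert) {
  const at = str.indexOf(needle);
  return at < 0 ? str : replaceRange(str, at, at + needle.length, insert);
}

// Z: a narrow "template" mini-language, built on Y. Easy to use,
// covers far less, but covers most of what its users actually want.
function render(template, values) {
  let out = template;
  for (const key of Object.keys(values)) {
    out = replaceFirst(out, "{" + key + "}", values[key]);
  }
  return out;
}

const greeting = render("hello, {name}!", { name: "web" }); // → "hello, web!"
```

Building Z on Y is natural here; trying to express `replaceFirst` in terms of `render` would be the backwards construction the post is warning about.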

Enough letters already!

I know, sorry. Let's apply this logic to the Open Web. Right now, the basic building blocks that we have available are HTML, CSS, and JavaScript. These were all built with the end user in mind. At the time of their creation, the end users were not software engineers, and the purpose was documents, not applications. I guess what I'm trying to say is that right now, we might be trying to build Y on top of Z. Throw in a lot of workarounds and hacks and we're getting there, but there's just no substitute for a more solid foundation. I will present just a couple of examples.

  1. Security. I cannot stress this one enough. We need to be ready for secure mashups. They are coming, and they are important. Having a global space is a big problem. Not just JavaScript objects, but also the DOM. And the DOM is not very secure. We need the foundation to ALLOW better security. Even if the higher level opts not to use it, if it is impossible to achieve at the base level, it is impossible to achieve at a higher level.
  2. That damned file input! This is exactly the kind of constraint that is still worked around through hacks, but it seriously needs improvement. The point is, without lower-level control, there is nothing you can really do about it.
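The global-space problem in point 1 is easy to sketch. This is a toy simulation (the `page` object and both widgets are made up); in a real browser, the shared global object and DOM play the role of `page`.

```javascript
// Two mashup "widgets" loaded into the same page share one namespace,
// so the second can snoop on (or clobber) the first. All names invented.
const page = {}; // stands in for the browser's single global object

// Widget A stashes something it considers private.
function widgetA() {
  page.sessionToken = "secret-123";
}

// Widget B, from an entirely different party, sees the same global space.
function widgetB() {
  return page.sessionToken; // nothing stops it from reading A's data
}

widgetA();
const leaked = widgetB(); // → "secret-123"
```

No amount of discipline at the widget level fixes this; if the base level offers only one shared space, isolation is impossible to achieve above it, which is the whole argument.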
I'm going to end the post here. These thoughts are stewing around in my brain, and I'll post something else when they've simmered long enough.