Max and I started a company last year and we’re getting very close to releasing our beta.
This seemed like a good time to take all the libraries we’ve written while building our app and turn them into proper open source projects. Max has already “leaked” some of them by using them in other projects and even publishing a few to npm, but until today none of them had a repo or docs.
I’ve been talking for a while about how applications are changing and about what I think the place of a web framework is in node.js. During this time we’ve been using and refining a web framework I built called tako.
tako includes all the features we needed from a web framework to build our app. It’s not a middleware or plugin system and doesn’t include one. It’s a tool for handling HTTP requests in a sensible way.
It has a composable API around routes. A route is an object; based on the kinds of handlers you add and the conditions you set, tako responds appropriately to different HTTP methods and content-type requests.
It can also serve files sensibly using filed, which streams files and handles proper etag and if-modified headers.
It already includes socket.io. It includes JSON support. It can serve buffers from cache.
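To make the route idea concrete, here’s a minimal sketch of the dispatch pattern described above. This is not tako’s actual API (the route object, method names, and status codes here are all my own stand-ins); it just shows how a route can collect handlers per method and content type and pick the right one per request:

```javascript
// A route object collects handlers keyed by HTTP method and content
// type, then dispatches an incoming request to the best match.
function route () {
  var handlers = {}
  var r = {
    add: function (method, type, fn) {
      handlers[method] = handlers[method] || {}
      handlers[method][type] = fn
      return r // chainable, so routes stay composable
    },
    dispatch: function (method, accept) {
      var byMethod = handlers[method]
      if (!byMethod) return { statusCode: 405 } // no handler for this method
      var fn = byMethod[accept] || byMethod['*']
      if (!fn) return { statusCode: 406 } // no acceptable content type
      return { statusCode: 200, body: fn() }
    }
  }
  return r
}

var r = route()
  .add('GET', 'application/json', function () { return '{"ok":true}' })
  .add('GET', '*', function () { return 'hello' })

console.log(r.dispatch('GET', 'application/json').body) // {"ok":true}
console.log(r.dispatch('POST', 'text/html').statusCode) // 405
```

Because the route only knows about the handlers you actually added, it can answer 405/406 for everything else without any extra configuration.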
I tried using a bunch of different loggers and got really angry. I had what I thought was a fairly trivial requirement list:
Surprisingly, no loggers did this. So, a little angrily, I wrote stoopid, and named it stoopid because loggers are stupid. At one point it supported loggly but then I pulled it out in a fit of rage dealing with loggly, so now it doesn’t support loggly.
Yup, I did what I always said I wouldn’t do, I wrote a CouchDB library :)
After a few weeks I got tired of the boilerplate in all of our code that checked response codes in request callbacks to Couch, so I wrote this little guy. It has actually grown a few interesting features, but the main reason I wrote it instead of using an existing client was that I wanted to type less, and this is the tersest API I could get to. With so much CouchDB experience I knew I could write it quickly and support CouchDB features that most other authors are unlikely to even be aware of.
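For illustration, this is the kind of status-code boilerplate the paragraph is talking about, not the library’s API. The HTTP call is injected as a plain function so the sketch stays self-contained; in real code it would be the request module talking to an actual CouchDB:

```javascript
// The repetitive check that ends up in every request callback to Couch:
// bail on transport errors, bail on non-2xx status codes, and only then
// touch the body. A terse client wraps this up once.
function couchGet (request, uri, cb) {
  request(uri, function (err, resp, body) {
    if (err) return cb(err)
    if (resp.statusCode >= 300) {
      return cb(new Error('couch returned ' + resp.statusCode))
    }
    cb(null, body)
  })
}

// Fake request function standing in for a real HTTP client.
function fakeRequest (uri, cb) {
  if (uri === '/db/exists') cb(null, { statusCode: 200 }, { _id: 'exists' })
  else cb(null, { statusCode: 404 }, null)
}

couchGet(fakeRequest, '/db/exists', function (err, doc) {
  console.log(doc._id) // exists
})
couchGet(fakeRequest, '/db/missing', function (err) {
  console.log(err.message) // couch returned 404
})
```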
We are using CouchDB, obviously. But we have a bunch of questions we need to ask our database that CouchDB is just way too slow to answer as often as we need, so I started moving those questions to Redis.
I really didn’t want to worry about another database. Redis has more than one strategy for longer-term storage, each with its own tradeoffs, and even with Redis backups I’d still have to worry about a backup of that backup, and so on.
Since we’ve already solved all these backup issues in CouchDB (more accurately, Jason Smith already solved them for us on IrisCouch) it seemed sane to use CouchDB for safe long-term storage and Redis as a high performance cache.
redcouch is a client that writes to both CouchDB and Redis but does all of its lookups in Redis alone. It can also fill a new Redis database with all the keys from CouchDB, for provisioning a fresh cache after a failover.
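Here’s a hedged sketch of that pattern, not redcouch’s real API: writes go to both stores (CouchDB for durability first, then Redis), reads only ever touch Redis, and a fill step rebuilds a fresh cache from CouchDB. In-memory stand-ins replace the real CouchDB and Redis clients so the sketch is self-contained:

```javascript
function DualStore (couch, redis) {
  this.couch = couch // durable store
  this.redis = redis // fast cache, the only store reads touch
}
DualStore.prototype.set = function (key, value, cb) {
  var self = this
  self.couch.set(key, value, function (err) { // durable write first
    if (err) return cb(err)
    self.redis.set(key, value, cb)
  })
}
DualStore.prototype.get = function (key, cb) {
  this.redis.get(key, cb) // lookups never hit CouchDB
}
DualStore.prototype.fill = function (cb) {
  var self = this
  self.couch.keys(function (err, keys) { // after a Redis failover,
    if (err) return cb(err)              // copy every key back over
    var remaining = keys.length
    if (!remaining) return cb(null)
    keys.forEach(function (k) {
      self.couch.get(k, function (err, v) {
        if (err) return cb(err)
        self.redis.set(k, v, function () {
          if (--remaining === 0) cb(null)
        })
      })
    })
  })
}

// In-memory stand-ins for the real CouchDB and Redis clients.
function memStore () {
  var data = {}
  return {
    set: function (k, v, cb) { data[k] = v; cb(null) },
    get: function (k, cb) { cb(null, data[k]) },
    keys: function (cb) { cb(null, Object.keys(data)) }
  }
}

var couch = memStore()
var store = new DualStore(couch, memStore())
store.set('user:1', 'max', function () {})
store.get('user:1', function (err, v) { console.log(v) }) // max
```

Writing to the durable store before the cache means a crash between the two writes leaves you with a stale cache entry rather than a lost document.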
Our app has a lot of “fanout” work. Something happens and then anywhere from 1 to 100K people need to be notified.
Obviously, it was a good idea to break these out of the main API code into a background task system. A few projects offer these kinds of state machines, and a bunch of them even use CouchDB. The issue I had with most of them was that every tiny operation in the state machine incurred a DB write, or in some cases even generated another task.
That was just too much load, especially on one database, so I wrote a simpler tool with fewer discrete writes. I also wrote a “promise” system (where promises are just returned named callbacks following the standard node callback pattern) so that any time you do IO the task knows about it and won’t resolve itself until all of it is complete.
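The named-callback idea can be sketched in a few lines. This is my own minimal reconstruction of the pattern described above, not the real code: the task hands out callbacks in standard node style, counts them, and only resolves once every one has fired (or fails fast on the first error):

```javascript
function Task (done) {
  var pending = 0
  var failed = false
  // name is presumably for debugging in the real system; unused here.
  this.callback = function (name) {
    pending++
    return function (err) {
      if (failed) return // already resolved with an error
      if (err) { failed = true; return done(err) }
      if (--pending === 0) done(null) // last outstanding IO finished
    }
  }
}

// Usage: two pieces of IO, the task resolves only after both complete.
var finished = false
var task = new Task(function (err) { finished = err || true })
var wroteDoc = task.callback('write-doc')
var sentEmail = task.callback('send-email')
wroteDoc(null)        // first IO completes, task still pending
console.log(finished) // false
sentEmail(null)       // last IO completes, task resolves
console.log(finished) // true
```

Because every piece of IO has to go through `task.callback()`, the task can’t accidentally resolve while a notification is still in flight.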
After cursing a few cloud logging solutions I decided to just write the logs to a damn file. Then I wanted to view the logs on a webpage, grrr.
We have multiple processes all writing to an append-only log file, so what I really needed was something that could check the size of the log at an interval, read any new data, and display it on the web page.
siofile is a way to read and watch files over socket.io. It’s what I currently use to read our logs, and it works pretty well. The only hangup I have right now is that when we deploy, the connection is cut; it reconnects, but the server no longer knows about the session, so I have to refresh the page to get updates working again.
Don’t use this software.
If you can, never write your own deployment system. It’s not that it’s super hard, or even all that complicated, it’s that any bug you have is a critical failure and debugging it is a horrible pain in the ass.
Again, we have a set of requirements that are not yet served by other deployment software for node.js:
I put this together and it’s about 80% done, which will take us through beta. It uses a lot of substack’s modules: dnode, upnode, bouncy, git-emit, pushover. It’s working, but I’d love to replace it with someone else’s code.
Max wrote this one. It's a router on top of tako and request that lets you 'mount' applications by stream proxying static files and REST resources through node. It was designed specifically to allow CouchApps (portable HTML5 apps traditionally served from CouchDB) to be served from node which allows for socket.io integration and other goodies you don't get with Couch.
It sports a few nice features, such as the ability to validate requests (for things like disallowing POST/PUT, checking sessions, or even inspecting request bodies and validating based on their content) while avoiding abstraction on top of tako’s lightweight streaming semantics.
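A hedged sketch of that validation idea, with names and return values that are mine rather than the library’s: a mount gets a validator that can reject a request by method, session, or body before anything is proxied through.

```javascript
// Build a validator from a set of rules. Returning 0 means "let the
// request through to the proxy"; any other value is the status code
// to reject with.
function makeValidator (opts) {
  return function (req) {
    if (opts.methods && opts.methods.indexOf(req.method) === -1) {
      return 405 // method not allowed, e.g. a read-only mount
    }
    if (opts.requireSession && !req.session) {
      return 403 // no valid session attached to the request
    }
    if (opts.body && !opts.body(req.body)) {
      return 400 // body failed inspection
    }
    return 0
  }
}

var readOnly = makeValidator({ methods: ['GET', 'HEAD'] })
console.log(readOnly({ method: 'POST' })) // 405
console.log(readOnly({ method: 'GET' }))  // 0
```

Keeping validation as a plain function over the request means the mount can reject early without buffering or wrapping the streams it proxies.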