Re: End result of Wiki-ish system design + final question
On Feb 14, 2005, at 10:40 AM, Martin Moss wrote:
> I have a few thoughts on this... In my experience
> writing a daemon process is easy (well, ish) but then
> configuring your system to manage them (have they
> died, have they crashed, etc.) is more trouble than
> it's worth.
Maybe -- although thttpd has a great wrapper script which I promptly
"appropriated." It goes something like this:
#!/bin/sh
# Restart regend whenever it exits, pausing 10 seconds between attempts.
while true ; do
    /usr/local/etc/apache2/perl/regend
    sleep 10
done
If regend ever exits, it's up again within 10 seconds -- no constant
polling necessary. And the chances of a shell script this simple
crashing are presumably quite small. The preforking process also does a
good job of managing its children (it comes from Perl Cookbook recipe
17.12). However, firing up one regen server for every httpd child takes
up too much RAM, so I'm switching to a select()-based server, a la
recipe 17.13.
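
For the curious, a select()-based regen server along the lines of
recipe 17.13 (IO::Select) might look roughly like this -- the port
number and the regen_page() routine are stand-ins, not the actual
regend code:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# Listen on a local port for regen requests; the port is made up.
my $listen = IO::Socket::INET->new(
    LocalPort => 8421,
    Proto     => 'tcp',
    Listen    => 10,
    Reuse     => 1,
) or die "can't listen: $!";

my $select = IO::Select->new($listen);

while (my @ready = $select->can_read) {
    for my $fh (@ready) {
        if ($fh == $listen) {
            # New connection from an httpd child: start watching it.
            $select->add($listen->accept);
        } else {
            # A watched client sent the name of a page to regenerate.
            my $page = <$fh>;
            if (defined $page) {
                chomp $page;
                regen_page($page);    # hypothetical regeneration routine
            }
            $select->remove($fh);
            close $fh;
        }
    }
}

sub regen_page {
    my ($page) = @_;
    warn "regenerating $page\n";    # placeholder for the real work
}

One process handles every httpd child's requests, so the per-child
RAM cost of the preforking version goes away.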
> Is it possible to use some kind of cronjob based
> system, which runs a script every minute, picks up a
> list of things to process from, say, a database (which
> your Handler writes to, to communicate to the backend
> processes)...
The idea of storing a list of items that need updating in the db is
appealing (that's how I handle RSS renders), but it has three drawbacks:
it puts the weight of the "do we need to regen?" code back onto
httpd, it delays regens for up to a minute, and it does those regens all
at once, which could cause a slowdown every 60 seconds if the site gets
busy. The way it works now is kind of nice, because if the user *knows*
that a link should have shown up on a given page, they can hit "reload"
and see the just-regenerated version.
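
For comparison, a cron-run script along those lines might look roughly
like this -- the regen_queue table, the DSN, and regen_page() are all
made up for illustration, not the actual RSS-render code:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to the database; DSN and credentials are placeholders.
my $dbh = DBI->connect("dbi:mysql:wiki", "user", "pass",
                       { RaiseError => 1 }) or die $DBI::errstr;

# Grab everything queued for regeneration since the last run.
my $rows = $dbh->selectall_arrayref(
    "SELECT id, page FROM regen_queue WHERE done = 0");

for my $row (@$rows) {
    my ($id, $page) = @$row;
    regen_page($page);    # hypothetical regeneration routine
    $dbh->do("UPDATE regen_queue SET done = 1 WHERE id = ?", undef, $id);
}

$dbh->disconnect;

sub regen_page {
    my ($page) = @_;
    warn "regenerating $page\n";    # placeholder for the real work
}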
- ben