Re: [SOLVED] mp2 + worker mpm + threads + threads::shared + PerlChildInitHandler
Hi,
Well, the problem was my fault. :/ I had a bug in a generic base class
I use that makes it easier to build classes that work both inside and
outside mod_perl.

For those of you who are interested, this solution works well. By
using a PerlChildInitHandler to create a thread that maintains a shared
global hash of hashes (holding a small portion of a database plus some
information acquired with XML::RPC), my systems perform much better.
We were having problems with the availability and performance of the
external data sources (e.g. MySQL or remote XML::RPC servers), which
would leave our Apache instances waiting around and timing out on each
request.
Yay.
Here is a demonstration module for those interested:
package TestPerlChildInit;

use strict;
use lib '/opt/whenu/lib/whenu-perl-lib';

use threads;
use threads::shared;

use vars qw(@ISA @EXPORT @EXPORT_OK %EXPORT_TAGS);
use vars qw($DEBUG *DBG $CFG);
use vars qw(%SHARED);

@ISA         = qw(Exporter);
@EXPORT      = qw();
@EXPORT_OK   = qw();
%EXPORT_TAGS = (':DEFAULT' => [qw()],
                ':handler' => [qw()]);

BEGIN {
    use mod_perl;
    use Apache2;
    use Apache::Const qw(:common :http);
    use Apache::RequestRec qw();
    use Apache::RequestIO qw();
    use Apache::Connection qw();
    use Apache::ServerUtil qw();
    use Apache::Module qw();
    use Apache::Util qw();
    use Apache::URI qw();
    use Apache::Log qw();
    use APR::OS qw();
    use APR::Table qw();

    share(%SHARED);
    $SHARED{'test'} = &share({});
    $SHARED{'test'}->{'count'} = 1;

    my $res = Apache->server->push_handlers(
        PerlChildInitHandler => \&mod_perl_ChildInitHandler);
    print STDERR "Testing[$$]: Installed ChildInitHandler result '$res'\n";
}

sub mod_perl_ChildInitHandler {
    print STDERR "mod_perl_ChildInitHandler\n";

    ## Start a thread to restart the other thread...
    threads->new( sub {
        while (1) {
            my $ovs = threads->new(\&overseer);
            print STDERR "Testing[$$]: Started overseer thread\n";
            $ovs->join();
            print STDERR "Testing[$$]: Joined overseer thread (probably bad)\n";
            ## Add backoff for spawning too quickly etc.
        }
    })->detach;

    return &Apache::OK;
}

sub overseer {
    print STDERR "Testing[$$]->", threads->self->tid, " Overseer Startup...\n";

    while (sleep 2) {
        lock(%{$SHARED{'test'}});
        print STDERR "Testing[$$]->", threads->self->tid,
            ": \$SHARED{'test'}->{'count'} = $SHARED{'test'}->{'count'}\n";
        ## here is where you can do more interesting things such as
        ## get data from databases, external sources, or update them.
    }
}

sub handler : method {
    my $class = shift;
    my $r     = shift;

    lock(%{$SHARED{'test'}});
    $SHARED{'test'}->{'count'}++;

    $r->no_cache();
    $r->err_headers_out->{"Expires"}       = "Sat, 1 Jan 2000 00:00:00 GMT";
    $r->err_headers_out->{"Pragma"}        = "no-cache";
    $r->err_headers_out->{"Cache-Control"} = "no-cache";
    $r->err_headers_out->{"Location"}      = 'http://www.google.com';
    $r->status(&Apache::REDIRECT);
    $r->rflush();

    return &Apache::OK;
}

1;
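
For anyone who wants to drop this in and try it, a minimal httpd.conf
sketch would be something along these lines (the /test-childinit
location is just an example, not part of the module):

PerlModule TestPerlChildInit

<Location /test-childinit>
    SetHandler perl-script
    PerlResponseHandler TestPerlChildInit
</Location>

Loading the module with PerlModule at server startup is the important
part: the BEGIN block has to run in the parent so that %SHARED and the
PerlChildInitHandler are in place before the children are forked.
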
On Mon, 2005-01-17 at 13:59 -0500, Richard F. Rebel wrote:
> Another good idea... :)
>
> But I am transfixed by this problem... I can't seem to get each forked
> apache server to have both a shared global hash between all cloned
> interpreters, *and* one thread in each process that runs in the
> background doing housekeeping. I can think of numerous things that this
> would be useful for.
>
> I know I am close, but I can't seem to quite grasp what I am missing. I
> thought PerlChildInits were called for each forked child from its
> first/main interpreter (the one that all the others are cloned from).
>
>
> On Mon, 2005-01-17 at 13:59 -0500, Perrin Harkins wrote:
> > On Mon, 2005-01-17 at 11:25 -0500, Richard F. Rebel wrote:
> > > Unfortunately, it's high volume enough that it's no longer possible to
> > > keep these counters in the databases updated in real time. (updates are
> > > on the order of 1000's per second).
> >
> > I would just use BerkeleyDB for this, which can easily keep up, rather
> > than messing with threads, but I'm interested in seeing if your
> > threading idea will work well.
> >
> > > * An overseer/manager thread that wakes up once every so often and
> > > updates the MySQL database with the contents of the global shared hash.
> >
> > Rather than doing that, why not just update it from a cleanup handler
> > every time the counter goes up by 10000 or so? Seems much easier to me.
> >
> > - Perrin
> >
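
For comparison, Perrin's cleanup-handler idea might look roughly like
the sketch below. The flush_to_mysql() routine and the 10_000 threshold
are placeholders, not anything from this thread:

package TestCleanupFlush;

use strict;
use Apache2;
use threads;
use threads::shared;
use Apache::RequestUtil ();            ## for $r->push_handlers
use Apache::Const -compile => qw(OK);

## Shared across the cloned interpreters of a process, like %SHARED above.
my %count : shared;

sub handler {
    my $r = shift;

    my $flush_now;
    {
        lock(%count);
        $count{hits}++;
        $flush_now = ($count{hits} % 10_000 == 0);
    }

    ## Defer the database write until after the response has been sent.
    $r->push_handlers(PerlCleanupHandler => \&flush) if $flush_now;

    return Apache::OK;
}

sub flush {
    my $r = shift;
    lock(%count);
    ## flush_to_mysql(\%count);  ## hypothetical, e.g. one UPDATE per flush
    $count{hits} = 0;
    return Apache::OK;
}

1;
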
-- 
Richard F. Rebel
cat /dev/null > `tty`