Hash Cache using Require


From: Bill Whillers
Subject: Hash Cache using Require
Date: 19:17 on 03 Dec 2004
I recently ported a set of old CGI applications to mod_perl. Each script 
loads a few hashes from flat files (using "open") filled with plain rows of 
delimited data that rarely changes (~50k each). The hashes are built by 
iterating through each file, row by row, assigning values to keys.
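For reference, a loader like the one described might look something like this sketch (the filename argument, pipe delimiter, and two-column layout are assumptions for illustration):

```perl
use strict;
use warnings;

# Build a hash from a delimited flat file, one key/value pair per row.
# The pipe delimiter and key|value layout are assumed here.
sub load_hash {
    my ($file) = @_;
    my %hash;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($key, $value) = split /\|/, $line, 2;
        $hash{$key} = $value;
    }
    close $fh;
    return \%hash;
}
```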

All of the file data is permanently stored in a remote SQL database (MySQL), 
and whenever the hash data is updated in the database, the flat file content 
is simply refreshed with the results of a simple query that joins a couple 
of tables.
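That refresh step could be sketched along these lines; the DSN, credentials, and the table/column names in the joined query are all hypothetical, and the write goes through a temp file so readers never see a half-written cache:

```perl
use strict;
use warnings;

# Pull the joined query results from the database.
# DSN, credentials, and table/column names are hypothetical.
sub fetch_rows {
    my ($dsn, $user, $pass) = @_;
    require DBI;
    my $dbh  = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
    my $rows = $dbh->selectall_arrayref(
        'SELECT i.code, d.label FROM items i JOIN details d ON i.id = d.item_id'
    );
    $dbh->disconnect;
    return $rows;
}

# Write the rows out as pipe-delimited lines, then rename into place
# so the swap is atomic.
sub write_flat_file {
    my ($file, $rows) = @_;
    open my $fh, '>', "$file.tmp" or die "Cannot write $file.tmp: $!";
    print $fh join('|', @$_), "\n" for @$rows;
    close $fh;
    rename "$file.tmp", $file or die "Cannot rename $file.tmp: $!";
}
```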

QUESTION:

Am I more or less efficient this way than building the hashes directly from 
the database every time an application needs them? The database is *not* 
under heavy load at all.

Now that I've ported the applications to a mod_perl environment, I'd like to 
cache this same data efficiently (somehow) with future scaling in mind. Is 
this pointless (or is the file access actually more expensive)? What if the 
data in each file grows from 50k to 500k, or even 1-2 MB, giving much larger 
hashes?

I've considered writing the cache as a require-able Perl file, knowing that 
it won't be reloaded unless it's updated; would that be more reasonable, or 
more pointless? Can anyone suggest a better solution?
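One way the require-style cache might be sketched: dump the hash to a Perl file with Data::Dumper, then on each request `do` the file only when its mtime has changed, keeping the loaded copy in a lexical that persists across requests within a mod_perl child. The function names here are illustrative, not an established API:

```perl
use strict;
use warnings;
use Data::Dumper;

# Write the hash out as a require-able Perl file; `do FILE` will return
# the hashref from the assignment statement.
sub write_cache {
    my ($file, $hashref) = @_;
    open my $fh, '>', $file or die "Cannot write $file: $!";
    print $fh Data::Dumper->Dump([$hashref], ['cache']);
    close $fh;
}

# Reload the cache only when the file's mtime has advanced; otherwise
# return the copy already held in this process. Under mod_perl the
# lexicals below persist across requests within a child process.
{
    my ($cached, $loaded_at);

    sub get_cache {
        my ($file) = @_;
        my $mtime = (stat $file)[9];
        if (!$cached or $mtime > $loaded_at) {
            $cached    = do $file or die "Cannot load $file: $@";
            $loaded_at = $mtime;
        }
        return $cached;
    }
}
```

Note that a plain `require` would only ever load the file once per process (because of %INC), so the mtime comparison, or something like it, is what actually gets updates picked up without a server restart.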


Thanks for your suggestions!


        -- 
        Report problems: http://perl.apache.org/bugs/
Mail list info: http://perl.apache.org/maillist/modperl.html
List etiquette: http://perl.apache.org/maillist/email-etiquette.html


Re: Hash Cache using Require
Perrin Harkins 19:39 on 03 Dec 2004

Re: Hash Cache using Require
Tom Schindl 12:51 on 04 Dec 2004
