Record caching

Each time a page template loads, the template engine issues a number of requests against the dmBridge HTTP API, sometimes 40 or more, depending on the template. Every time the API receives one of these requests, the PHP engine has to compile and run the API application. Each request generally takes less than a tenth of a second; but even so, 40 requests can add up to four seconds or more, during which the web server is bogged down and the patron is left waiting.

Under heavy traffic, these effects are multiplied. If two patrons were to hit a template at the same time, for example, each would have to wait eight seconds. This is clearly unacceptable.

To work around this problem, the template engine employs an API record cache. When the cache is enabled, the template engine saves most of the records it receives to disk. On each subsequent patron request, it checks whether it has already downloaded a given record and, if so, loads it from disk instead of requesting it from the API. Both server load and response times are dramatically reduced.
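dmBridge itself is written in PHP, but the read-through pattern described above can be sketched briefly. The sketch below uses Python for compactness; the cache directory, the fetch_from_api stand-in, the .tmp naming, and the pickle serialization are all illustrative assumptions, not dmBridge internals:

```python
import os
import pickle
import time

CACHE_DIR = "cache"   # hypothetical cache folder
CACHE_DAYS = 7        # plays the role of the api.cache duration parameter

def fetch_from_api(record_id):
    # Stand-in for a real HTTP request to the dmBridge API.
    return {"id": record_id, "title": "Example record"}

def get_record(record_id):
    """Read-through cache: serve from disk while fresh, else refetch."""
    path = os.path.join(CACHE_DIR, f"{record_id}.tmp")
    max_age = CACHE_DAYS * 86400  # duration in seconds
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < max_age:
        with open(path, "rb") as f:
            return pickle.load(f)        # cache hit: no API round trip
    record = fetch_from_api(record_id)   # cache miss or stale copy: refetch
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(record, f)           # save for subsequent requests
    return record
```

The key property is that only the first request for a record pays the API round trip; every later request within the freshness window is a cheap disk read.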

The main drawback of any cache is that changes to the underlying data are not immediately reflected in the cached copy. If a metadata record is updated in CONTENTdm®, the changes will not appear in the dmBridge templates until the cache has been refreshed. Certain records that need to stay fresh, such as comment and tag lists, are never cached; but there is no way to eliminate this problem entirely without disabling caching.

The record cache is configured in the dm/objects/config.xml file via the parameters whose names begin with api.cache. You may delete any of the .tmp files in the cache folder at any time to force an update of particular records, or of all of them.
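The exact markup inside config.xml varies; as a rough sketch, the api.cache settings might look something like the fragment below, where the element layout and the enabled/duration parameter names are hypothetical. Consult your own config.xml for the authoritative names:

```xml
<!-- Hypothetical sketch only; element names and structure may differ
     from your actual dm/objects/config.xml. -->
<property name="api.cache.enabled">true</property>
<property name="api.cache.duration">7</property>
```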

Once a cached record is more than duration days old, it is overwritten with a fresh copy from the server. If your collections and metadata change often and those changes must be publicly visible immediately, consider reducing the duration parameter. Otherwise, by all means increase it: the higher it is, the greater the performance benefit caching provides.