By caching content at geographically distributed servers, content delivery applications can achieve scalability and reduce wide-area network traffic. However, each deployed cache has an associated cost. As the request rate from a region varies (e.g., according to a daily cycle), there may be periods when the request rate is high enough to justify this cost, and other periods when it is not. Cloud computing offers a solution to problems of this kind by supporting the dynamic allocation and release of resources. In this paper, we analyze the potential benefits of dynamically instantiating caches using resources from cloud service providers. We develop novel analytic caching models that accommodate time-varying request rates, transient behavior as a cache fills following instantiation, and selective cache insertion policies. Using these models, within the context of a simple cost model, we then develop bounds and compare policies with optimized parameter selections to obtain insights into key cost/performance tradeoffs. We find that dynamic cache instantiation can provide substantial cost reductions, that the potential reductions depend strongly on the object popularity skew, and that selective cache insertion can be even more beneficial in this context than with conventional edge caches.
Funding Agencies: Swedish Research Council (VR); Natural Sciences and Engineering Research Council of Canada (NSERC)