I have an application that often retrieves remote websites (via cURL), and I was wondering what my options are for caching those HTTP requests. For example:
application --> cURL --> www.example.com
The issue is that cURL could be called hundreds of times an hour, and each time it would make hundreds of HTTP requests that are basically the same. So, what could I do to speed things up? I experimented with Traffic Server but wasn't very satisfied with the results. I guess DNS caching is a must, but what else can I do here? The system the app is running on is CentOS.
I don't know why Traffic Server didn't give you satisfactory results, but in general a forward proxy setup with caching is the way to do this. You would of course make sure that the responses from www.example.com are cacheable, either via configuration on the caching proxy server or directly on the origin (example.com). This is probably the biggest source of confusion in the proxy caching world: expectations about what should be cacheable often don't match what the response headers actually allow.
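For reference, here is a minimal sketch of a forward-proxy-with-caching setup in Apache Traffic Server; the port is a placeholder, and you should verify the directives against the docs for your ATS version:

    # /etc/trafficserver/records.config -- caching forward proxy (sketch)
    # Listen port for the proxy (placeholder, pick your own)
    CONFIG proxy.config.http.server_ports STRING 8080
    # Enable caching of HTTP responses
    CONFIG proxy.config.http.cache.http INT 1
    # 0 = accept requests that don't match a remap rule,
    # i.e. act as a forward proxy rather than a reverse proxy
    CONFIG proxy.config.url_remap.remap_required INT 0

Then point the application's cURL calls at the proxy with -x/--proxy (or the http_proxy environment variable):

    curl -x http://127.0.0.1:8080 http://www.example.com/

Repeated requests for the same URL within the object's freshness lifetime should then be served from the proxy's cache without touching the origin, and the proxy's own host-DNS cache takes care of the DNS-caching part as well.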
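To check whether example.com's responses are cacheable in the first place, inspect the relevant headers, for example:

    # HEAD request; show only the headers that govern caching
    curl -sI http://www.example.com/ | grep -iE 'cache-control|expires|etag|last-modified'

If the origin sends something like Cache-Control: no-store, or no freshness information at all, the proxy will bypass the cache or have to revalidate every time. A header such as Cache-Control: public, max-age=300 on the origin (or an equivalent override in the proxy configuration) is what turns those repeated identical fetches into cache hits.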