The Event MPM is not exactly the same design as Nginx's, but it was clearly intended to make keepalives more tenable and to send static files faster. My understanding is that "Event MPM" is a bit of a misnomer because:
- Although the connection is passed to kqueue/epoll,
- certain very important modules such as mod_deflate and mod_ssl will block/consume a thread until the response is done,
- and that is an issue for large files, but probably not for PHP-generated HTML documents, etc.
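To make the model concrete, here is a minimal mpm_event tuning sketch (the numbers are illustrative assumptions, not recommendations). The point it shows: the event loop takes over keepalive sockets and write completion, but the worker-thread pool still caps how many responses can be *generated* at once.

```apache
# Illustrative mpm_event tuning -- values are examples only.
<IfModule mpm_event_module>
    ServerLimit              4
    StartServers             2
    ThreadsPerChild          25
    MaxRequestWorkers        100   # hard cap on requests being processed by handlers
    MaxConnectionsPerChild   0
    # How many additional connections per child may sit in the async
    # (keepalive / write-completion) states relative to idle workers:
    AsyncRequestWorkerFactor 2
</IfModule>
```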
Unfortunately, Apache keeps losing market share, and most benchmarks are damning for the event MPM. Are the benchmarks flawed, or does the event MPM really fare so poorly against Nginx? Even with these limitations, under normal (non-malicious) traffic and with smaller files, it should be somewhat competitive with Nginx. For example, it should be competitive serving PHP-generated documents via php-fpm to slow clients, because the document will be buffered (even while being SSL'd and gzip'd) and sent asynchronously. Both SSL and non-SSL connections, with or without compression, should not behave meaningfully differently than they would in Nginx on such a workload.
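The php-fpm scenario described above might look like the following vhost fragment (the socket path is an assumption for illustration). Once php-fpm hands back the typically small generated document, httpd can buffer it and let the event loop drain it to a slow client:

```apache
# Hypothetical fragment: PHP handled by php-fpm via mod_proxy_fcgi.
# The Unix socket path is assumed; adjust to your php-fpm pool config.
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>
```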
So why does it not shine in the various benchmarks? What's wrong with it, or what's wrong with the benchmarks? Is there a major site using it whose example demonstrates that it can perform?
To me, the dominating operative difference is that in event, a worker thread is still dedicated to a request for the full lifetime of its handler; only keepalive waits and write completion are offloaded to the event loop. That's why at very high volumes, servers like nginx (or Apache Traffic Server, or any modern high-performance commercial proxy) usually come out ahead.
IMO, the bullets in your question are a bit off the mark. SSL and deflate are not really contributing much to the differences here: they are both filters, and filters neither create scalability problems nor tie httpd to its traditional API guarantees about the lifecycle of a request or connection. Filters like these (as opposed to handlers, or the core filter responsible for the low-level I/O) are probably the least of the things tied to the processing model.
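To illustrate the filters-versus-handlers distinction: SSL and compression are configured as transformations layered over whatever a handler produces, not as changes to how connections are scheduled. A sketch (certificate paths are assumed placeholders):

```apache
# Output filters stack on top of any handler's response bytes.
SSLEngine on
SSLCertificateFile    /etc/ssl/example.crt   # assumed path
SSLCertificateKeyFile /etc/ssl/example.key   # assumed path
AddOutputFilterByType DEFLATE text/html text/plain application/json
```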
But I also don't think it performs so poorly by comparison, for all but the most extreme workloads or extremely constrained systems. Most of the benchmarks I've seen are of extremely poor quality, for one reason or another.
I think what people largely want from what they call a webserver today is a proxy in front of a more sophisticated application server (Java EE, PHP, etc.), and a server designed to move I/O around most efficiently, without API baggage, is going to have the edge there.