I don't understand why Intel MPI uses DAPL if native ibverbs are faster than DAPL; Open MPI uses native ibverbs. Yet in this benchmark Intel MPI achieves better performance:
http://www.hpcadvisorycouncil.com/pdf/AMBER_Analysis_and_Profiling_Intel_E5_2680.pdf
Intel MPI uses several interfaces to interact with hardware, and DAPL is not the default in all cases. Open MPI also selects an interface based on the current hardware, and it will not always be ibverbs: there is a shared-memory path for intra-node communication and TCP for Ethernet-only hosts.
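For Open MPI, the transport can be forced explicitly through MCA parameters instead of relying on auto-selection. This is a sketch; the component names come from the Open MPI documentation, and the application name is hypothetical:

```shell
# Force Open MPI transports via the "btl" MCA parameter (sketch).
# Shared memory + InfiniBand verbs, plus "self" for process loopback:
# mpirun --mca btl self,sm,openib -n 4 ./my_app
# TCP only, e.g. on Ethernet-only hosts:
# mpirun --mca btl self,tcp -n 4 ./my_app
echo "transport selection: mpirun --mca btl <comma-separated list>"
```

The launch lines are commented out because they require an MPI installation and an application binary to run.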
The list of supported fabrics for Intel MPI on Linux is in the Getting Started guide:
https://software.intel.com/en-us/get-started-with-mpi-for-linux
The fabric can be selected with the I_MPI_FABRICS environment variable: https://software.intel.com/en-us/node/535584
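As a minimal sketch of how that selection works (the I_MPI_FABRICS format and the shm/dapl values are from the Intel docs; the application launch is hypothetical):

```shell
# I_MPI_FABRICS takes the form "intranode:internode".
# Here: shared memory within a node, DAPL between nodes.
export I_MPI_FABRICS=shm:dapl
# mpirun -n 4 ./my_app   # hypothetical launch; Intel MPI now uses the chosen fabrics
echo "$I_MPI_FABRICS"
```

Other internode values (e.g. tcp, ofa for native ibverbs) can be substituted to compare fabrics on the same hardware.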