TL;DR: Map/reduce is slower than the Aggregation framework.
Map/reduce is powerful, but running it incurs JavaScript VM overhead for every document processed. The Aggregation framework covers the same common use cases: it provides a range of pre-built aggregation operators whose implementations are written in C++, which makes the same aggregations considerably faster.
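For a concrete sense of the difference, here is a minimal mongo-shell sketch of the same sum computed both ways; the `orders` collection and its numeric `price` field are hypothetical, chosen only for illustration:

```js
// Map/reduce: both functions run inside the JavaScript VM
// for every document in the collection.
var map = function () { emit(null, this.price); };
var reduce = function (key, values) { return Array.sum(values); };
db.orders.mapReduce(map, reduce, { out: { inline: 1 } });

// Aggregation framework: the $group stage is evaluated
// natively in C++, with no per-document JS interpretation.
db.orders.aggregate([
  { $group: { _id: null, total: { $sum: "$price" } } }
]);
```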
I have listed benchmarks of both approaches computing simple aggregate values in my recent blog post. There, the Aggregation framework proved to be about 10x faster than m/r on a collection of 1M documents.
I have also compared the current Aggregable implementation, which uses m/r, with my proposed one, and saw a 3-10x speed increase.
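To show roughly what the aggregation-based approach amounts to, here is a sketch of a single `$group` pass that computes the kind of aggregate values Aggregable exposes (count, sum, avg, min, max) in one pipeline; the `bands` collection and its `likes` field are made up for the example, and this is not the exact pipeline from the patch:

```js
// One $group stage can produce all basic aggregates in a single
// native pass, instead of one map/reduce job per value.
db.bands.aggregate([
  { $group: {
      _id:   null,
      count: { $sum: 1 },
      sum:   { $sum: "$likes" },
      avg:   { $avg: "$likes" },
      min:   { $min: "$likes" },
      max:   { $max: "$likes" }
  } }
]);
```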
(As there is currently no DSL for aggregation and MONGOID-2720 hasn't been merged yet, this implementation builds the aggregation pipelines itself. It's still worth merging, as it improves performance.)