Under heavy load, the cursor manager lock causes negative scalability.
On a POWER8 Firestone (2 sockets, 20 cores, 160 hardware threads) running Red Hat 7.2 and MongoDB 3.2-pre, with four 10GigE links hard-wired to another Firestone running four copies of YCSB at between 64 and 400 threads each, throughput peaks at the lower end (64 threads per YCSB instance) and scales negatively at the higher thread counts. Tracing data and profile information (from perf) indicate that the bottleneck is the cursor manager mutex lock. A prototype (read: hack) that replaced the mutex with a MongoDB spinlock increased peak throughput from 240K ops to 320K ops and minimized the negative scaling.
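For illustration, a minimal sketch of the kind of spinlock the prototype swaps in for the cursor manager mutex, assuming a simple test-and-set loop over std::atomic_flag; the class name SpinLock and the yield-on-contention policy are illustrative, not the actual mongo::SpinLock implementation or the prototype patch.

```cpp
#include <atomic>
#include <thread>

// Illustrative spinlock: busy-waits on an atomic flag instead of parking the
// thread in the kernel, which avoids the mutex hand-off cost under very high
// hardware-thread counts.
class SpinLock {
public:
    void lock() noexcept {
        // Spin until the flag is acquired; yield to the scheduler so the
        // 160 hardware threads are not all burning cycles on the same line.
        while (_flag.test_and_set(std::memory_order_acquire)) {
            std::this_thread::yield();
        }
    }

    void unlock() noexcept {
        _flag.clear(std::memory_order_release);
    }

private:
    std::atomic_flag _flag = ATOMIC_FLAG_INIT;
};
```

Under heavy contention a spinlock like this trades CPU time for lower hand-off latency, which is why it helps when many short critical sections are hammering the same cursor manager lock.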
- is related to SERVER-29462 Eliminate possibility of false sharing in partitioned lock (Closed)
- is related to SERVER-30085 Increase read scalability with many cores (Backlog)