Type: Bug
Resolution: Done
Priority: Major - P3
Affects Version/s: 1.6.2, 1.7.0
Component/s: None
Environment: Linux
In the very best case it takes at least several seconds.
When the amount of data is substantial, it can block for more than an hour, pretty much rendering the whole cluster useless and making queries pile up in the queue.
It does very intensive I/O. Does it perform some sort of data compaction?
Since our chunks of data are roughly the same size, could we just mark them as free and then rewrite them later?
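For illustration only, here is a minimal sketch (in Java, with entirely hypothetical names; nothing here is the project's actual API) of the free-list scheme suggested above: because chunks are roughly the same size, freeing a chunk can be a cheap in-memory bookkeeping step, and its slot can be handed back out for a later write, instead of a blocking compaction pass.

    import java.util.ArrayDeque;
    import java.util.Deque;

    /**
     * Hypothetical sketch of a "mark free, rewrite later" scheme for
     * fixed-size chunks in a data file. Not the project's real code.
     */
    public final class ChunkFreeList {
        private final int chunkSize;          // fixed chunk size in bytes
        private final Deque<Long> freeSlots = new ArrayDeque<>(); // offsets of freed chunks
        private long endOfFile;               // next offset past the last chunk

        public ChunkFreeList(int chunkSize) {
            this.chunkSize = chunkSize;
        }

        /** Marking a chunk free is O(1): remember its offset, no I/O needed. */
        public void free(long offset) {
            freeSlots.push(offset);
        }

        /** Pick a slot for a new chunk, reusing a freed slot when possible. */
        public long allocate() {
            if (!freeSlots.isEmpty()) {
                return freeSlots.pop();       // rewrite in place of a freed chunk
            }
            long offset = endOfFile;          // otherwise append at the end
            endOfFile += chunkSize;
            return offset;
        }
    }

Under these assumptions, a delete never rewrites live data, so the long I/O-intensive pause described above would be replaced by constant-time bookkeeping, at the cost of some internal fragmentation until freed slots are reused.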